That's one of the things I'm trying to figure out.
In some cases I have to draw the sprite pixel by pixel using this:
Code:
// Per-channel fixed-point blend: dst += alpha * (src - dst) / 256 (BGR byte order)
screenDataPnt[0] = screenDataPnt[0] + ((alpha * (blue  - screenDataPnt[0])) >> 8);
screenDataPnt[1] = screenDataPnt[1] + ((alpha * (green - screenDataPnt[1])) >> 8);
screenDataPnt[2] = screenDataPnt[2] + ((alpha * (red   - screenDataPnt[2])) >> 8);
It works fine, but when I have a lot of stuff on the screen things start to slow down. Say I have 40 sprites that are 128 x 128: that's about 650,000 blended pixels, or nearly two million per-channel calculations, every frame. So I was hoping I could move that work over to the graphics card.
Not sure if that even makes sense; I'm completely blank on this subject.