I've been programming a tool to compile all bitmaps for my games into one huge resource file that is pre-loaded into the game. I really need some type of compression scheme to get these beasties down to a manageable size.

I've attempted RLE, or rather VRLE (variable run-length encoding). Essentially it works like this: there is no explicit RLE 'token'. When a symbol repeats, the repeated pair itself turns RLE on; you keep counting while the symbols still match, shut it off when they stop matching, and write out the count of additional repeats.

So:

AAAAABCDDDDC

becomes

AA3BCDD2C
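
For the curious, here's a minimal sketch of that encoder and decoder in C++ (an untested illustration, not production code; capping the run count at 255 so it fits in a single byte is my own assumption, the scheme itself doesn't dictate a cap):

#include <cstdint>
#include <vector>

// A repeated byte pair signals "RLE on"; the byte after the pair is the
// count of ADDITIONAL repeats. So AAAAA -> 'A','A',3 and DDDD -> 'D','D',2.
std::vector<uint8_t> vrleEncode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < in.size()) {
        const uint8_t sym = in[i];
        size_t run = 1;
        while (i + run < in.size() && in[i + run] == sym && run < 257)
            ++run;                                   // 257 = pair + 255 extras
        out.push_back(sym);
        if (run >= 2) {
            out.push_back(sym);                      // second symbol turns RLE on
            out.push_back(static_cast<uint8_t>(run - 2)); // extra repeats
        }
        i += run;
    }
    return out;
}

std::vector<uint8_t> vrleDecode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < in.size()) {
        const uint8_t sym = in[i++];
        out.push_back(sym);
        if (i < in.size() && in[i] == sym) {         // pair -> count byte follows
            ++i;
            const uint8_t extra = in[i++];
            out.insert(out.end(), extra + 1u, sym);  // pair's 2nd symbol + extras
        }
    }
    return out;
}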

Problem is that RLE works great for palettized textures, but not for pure 32-bit textures. Here's why. In order for symbols to repeat in a 32-bit texture file, you either have to have a shade of grey (so the R, G, and B bytes are all equal) or, by some odd chance, the bytes at the boundary between two colors happen to repeat. I can only think of a couple of instances where this might be true, so I'm not sure that RLE will do much for me.

Let's say we have the following filtered pixels in RGBA format:

0,126,32,100
1,127,31,99
2,125,30,98

As you can see, RLE won't do anything for this sequence, yet this is how most 32-bit images look in memory. Neighboring color values differ only slightly, which is exactly what produces those nice smooth transitions between shades. I just don't think RLE is the answer here.

Any ideas?

One idea I have is to create a palette of sorts for the colors. This would require iterating through the entire picture, counting the unique colors, and placing them in a palette table. Then I could write out palette indices in place of the actual pixel data. At decode time you would look up the index retrieved from the data chunk and get the actual RGBA color values from the palette chunk. However... I'm not sure this would compress anything. It might even make the file bigger.
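
For what it's worth, a rough sketch of what that palettizer could look like in C++ (again my own untested illustration; the 256-entry cap assumes 8-bit indices, nothing says the palette couldn't be wider):

#include <cstdint>
#include <unordered_map>
#include <vector>

// Pixels are packed 32-bit RGBA words. Returns true only if the image fits
// in maxEntries unique colors AND the palette + index form is actually
// smaller than the raw pixels -- otherwise it would grow the file.
bool palettize(const std::vector<uint32_t>& pixels,
               std::vector<uint32_t>& palette,   // out: unique colors
               std::vector<uint8_t>&  indices,   // out: one index per pixel
               size_t maxEntries = 256)
{
    std::unordered_map<uint32_t, uint8_t> lookup;
    palette.clear();
    indices.clear();
    indices.reserve(pixels.size());

    for (const uint32_t px : pixels) {
        auto it = lookup.find(px);
        if (it == lookup.end()) {
            if (palette.size() >= maxEntries)
                return false;                    // too many unique colors
            it = lookup.emplace(px, static_cast<uint8_t>(palette.size())).first;
            palette.push_back(px);
        }
        indices.push_back(it->second);
    }
    // 4 bytes per palette entry + 1 byte per index vs. 4 bytes per raw pixel.
    return palette.size() * 4 + indices.size() < pixels.size() * 4;
}

One nice side effect: the index stream is exactly the kind of data the VRLE scheme above handles well, so the two could be chained.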

I'm open to suggestions.