I've been trying to come up with code to parse out the Red, Green, and Blue channels from an unsigned long holding a colour.
I'll post my code then discuss my problem, if you can call it that...
Since a colour (unsigned long) can also be set using...

Code:
// For the sake of having something; this should give a heavy-blue colour,
// since RGB actually means BGR
unsigned long Colour = RGB(250, 100, 30);

unsigned long B = Colour;
unsigned long G = B >> 8;  // 256
unsigned long R = G >> 16; // 65536

// Only use the last 8 bits
B = B & 255;
G = G & 255;
R = R & 255;

// Used for shading
//B = (unsigned long)(B * Ratio);
//G = (unsigned long)(G * Ratio);
//R = (unsigned long)(R * Ratio);

// Should be the same colour we previously had, since shading is commented out
Colour = RGB(B, G, R);
...I was under the suspicion that, instead of shifting by 8 and 16, I should be shifting by 256 (16^2) and 65536 (16^4). However, that wasn't giving me the results I wanted (the colours turned into grayscale), so after some Googling I found samples using 8 and 16, with no explanation of why.

Code:
Colour = 0x00FFAA22; // Let's use a hexadecimal number
I tossed that in, and for the longest time it seemed correct. Lately, though, I've noticed some discolourations as well, even with my shading commented out. So I'm guessing there's something wrong with the way I'm extracting the Red, Green, and Blue channels.
Bit-shifts and bitwise operators aren't my strong suit, but I was confident I was using them correctly. Perhaps someone could enlighten me as to where I went wrong?