I'm engaging in a spot of signal processing on an image. I'm trying to sample the Hue, Saturation and Luminance (HSL) of each pixel in the image, determine which hue/saturation combination is most common, and then apply it to every pixel so that the entire image becomes shades of a single colour (details are retained because I'm not changing the L of each pixel). This is like the "Colorize" effect in Paint Shop Pro, if you've seen that.
What I'm doing at the moment is:
- Convert each RGB pixel into HSL
- Put the H and S components into a binary tree ordered by H, incrementing a count if that particular H/S combination already exists
- Afterwards retrieve the tree node with the highest count and use it to "paint" all the pixels
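Roughly, the steps above are equivalent to this Python sketch (I'm using the standard-library `colorsys` module and a flat `Counter` as stand-ins for my actual RGB-to-HSL conversion and binary tree, and plain tuples in place of real image I/O):

```python
import colorsys
from collections import Counter

def colorize(pixels):
    """Repaint every pixel with the most common (H, S) pair,
    keeping each pixel's own lightness.
    pixels: list of (r, g, b) tuples with components in 0..255."""
    # Step 1: convert each RGB pixel to HLS (colorsys orders it H, L, S)
    hls = [colorsys.rgb_to_hls(r / 255, g / 255, b / 255) for r, g, b in pixels]
    # Step 2: count exact (H, S) pairs -- this is the step that
    # misbehaves with floating-point values
    counts = Counter((h, s) for h, l, s in hls)
    # Step 3: take the most common pair and paint it onto every pixel
    (best_h, best_s), _ = counts.most_common(1)[0]
    out = []
    for h, l, s in hls:
        r, g, b = colorsys.hls_to_rgb(best_h, l, best_s)
        out.append((round(r * 255), round(g * 255), round(b * 255)))
    return out
```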
While this approach does work in an ideal case, in testing I've noticed that images made up of what appear to be shades of grey come out red or blue after processing. I've tracked this down to the following:
H, S and L are all floating-point values stored in doubles. Although there are many H/S combinations that are very similar, very few survive my primitive exact-match comparison ("does input H/S == tree node H/S?"). The result is that a small number of pixels, e.g. blacks that happen to share bit-identical red or blue hue/saturation values, end up with the "highest" count and are used to paint the rest.
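To show the problem concretely: two near-identical greys can come out with wildly different hues, because in a nearly-grey pixel the hue is decided by a one-part-in-255 difference between channels (this is Python's `colorsys`, but the maths is the same conversion):

```python
import colorsys

# Two visually indistinguishable near-greys, differing by 1/255 in one channel
h1, l1, s1 = colorsys.rgb_to_hls(100 / 255, 100 / 255, 101 / 255)
h2, l2, s2 = colorsys.rgb_to_hls(100 / 255, 101 / 255, 100 / 255)

# Both have saturation of roughly 0.005, yet one hue is "blue" (~2/3)
# and the other "green" (~1/3) -- an exact H/S comparison never groups them
print(h1, s1)
print(h2, s2)
```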
Is there a statistical method I could use to group similar values together, so that by being less precise I get a more representative sample?
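For illustration, the sort of grouping I have in mind might look like this: quantise H and S into fixed-width bins before counting, then take the centre of the fullest bin (the bin counts here are arbitrary placeholders, not values I've settled on):

```python
from collections import Counter

H_BINS, S_BINS = 36, 10  # placeholder bin counts, e.g. 10-degree hue bins

def bin_key(h, s):
    """Quantise an (H, S) pair, each in [0, 1], to a coarse bin index."""
    return (min(int(h * H_BINS), H_BINS - 1),
            min(int(s * S_BINS), S_BINS - 1))

def dominant_hs(hs_pairs):
    """Return the centre of the most populated (H, S) bin."""
    counts = Counter(bin_key(h, s) for h, s in hs_pairs)
    (hb, sb), _ = counts.most_common(1)[0]
    return ((hb + 0.5) / H_BINS, (sb + 0.5) / S_BINS)
```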