Hi.
I have a little question: has anyone here used the CImg library to work with images? Specifically, how do I convert an RGB file (a JPEG in my case) to a black-and-white one?
1. Read the manual - The CImg Library - C++ Template Image Processing Toolkit
2. scroll down to "Color bases"
3. read/write the pixels to remove the hue (I guess) from each pixel value.
The colorspace FAQ explains what RGB, HSL, etc. all mean.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
A brute-force approach is to average the R, G, B components and use that average as the final value for all three components. However, the correct approach is discussed in the links that have been posted already.
There's actually a weighting issue for the output to 'look right'. I forget the exact values, but it's roughly 30% red, 60% green and 10% blue.
Guys, thanks a lot.
To abachler: Listen, can you remember the exact values, please?
Last edited by Rustik_; 08-18-2009 at 12:27 AM.
I would have posted the coefficients but I forgot them and was too lazy to look them up. Averaging works for most applications but if it's a paint program or something very color dependent then you probably would use the accepted formulas.
A few comments.
1. The OP asked about black and white, and everybody is talking about grayscale. To get black and white, dithering will be required. There are multiple methods.
2. The coefficients of the Y=F(R,G,B) luminosity equation are calibrated for RGB values which are gamma-corrected to a linear intensity profile. However, most RGB images are not so corrected, and actually represent device-dependent intensities which need gamma correction before the RGB perceptual coefficients can be applied. I would NOT get too hung up on the particular coefficients unless I knew that the image was already perceptually corrected. In fact, I would probably just take the arithmetic average and only use something more complicated if the situation requires.
Code://try //{ if (a) do { f( b); } while(1); else do { f(!b); } while(1); //}
Why would a black and white video signal require a U or V channel?
Having said that, Y is perceptual luminosity with a huge serving of caveats. The YUV and YCbCr color spaces share the same definition of Y, given that the underlying signal represents true radiant intensities. As I said in a previous comment, this is almost never the case since most people do not bother to link their device-dependent images to an appropriate ICC Profile, and even if they do, chances are that along the chain of image processing the profile information is lost or corrupted anyway.
Just average the R,G,B components and call it good, unless you have a specific justification to do it otherwise.
Last edited by brewbuck; 08-19-2009 at 10:16 PM.
To brewbuck:
So, you mean averaging the R, G, B components is a better way of obtaining a b/w image than working with the coefficients?? *confused*
Actually brew, those coefficients are specifically for converting a color NTSC signal to a B&W signal. YUV is the colorspace used to generate the color channels in a way that is compatible with existing B&W sets. It's a legacy issue now, since most B&W sets are no longer in service.
No, it's not better to just average unless speed is a critical issue. Using the coefficients will produce the best results on 99% of the images you process.
Last edited by abachler; 08-20-2009 at 12:57 AM.