I have a greyscale image with a white background (not perfect white, more of a mixed cream), and pictures on it, each surrounded by a white border, then a black border (which can be any thickness).
I need to be able to detect and highlight the pictures on this background. The image is in raw format; I can load and display it, and store it in "image".
Please, any ideas?!!!
Then create a nested loop so you go through every value of image[x][y], and if the colour is not white or almost white, highlight it by... making it darker?
Thanks, but I have tried a similar method, and it messes up because the image goes "white background", "black border", then "white border", then picture. (I.e. -- Pictures have a black and white border).
I am trying a few variations on this method, however, so thanks again!
(Btw, sorry if my explanation of the problem is rubbish, please ask if there is anything needing clarification):D
This isn't your average newbie C++ question
How about here for an overview of the problem
Indeed, and thanks for the links, I feel I am getting there now.:D
It sounds like the borders are well-defined, and I assume that the borders are rectangular. So you probably should scan for the borders first. (Assuming that you don't already know where the borders are.)
Once you know the coordinates for the borders, you can manipulate the pixels inside the borders.
I think you have to first determine what will be the cut-off point for dark versus light. What level of grayscale is it: 4, 16, 256 levels?
Once you draw the line in the sand as to what is dark and what is light then you can start your analysis. I suggest that you take an average of the darkness of the pixels around a pixel so that you can get a true sense of the darkness of the pixel.
Say that there is 4-level grayscale with 0 being black and 3 being white. The following is a 3x3 section of pixels from the whole image:

3 3 3
3 0 3
3 3 3
Say you want to determine if the pixel in the middle is light or dark. Well, it is 0, so it is dark, but if you average the values of all the pixels in the region you will find the average comes out closer to light than dark. If you were to look at the picture, this area would probably look like a lighter spot rather than a darker spot.
Keep in mind that image recognition is mostly about finding edges and this is best done with averages of regions rather than on a pixel by pixel basis.
This is only a small piece of the total solution, but I hope you can employ this to create a more fault tolerant algorithm.