So here I am, stuck overnight at work, whiling away the wee hours on another crazy coding spree.
Anyway, I'm trying to reason through the following:
I have 32 8-bit greyscale images which may or may not be similar;
I want 16 images that best represent the original 32.
Select the 16 most divergent images (the ones that are least like the others);
Cluster the remaining 16 around these, combining each cluster's images together.
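In case it helps, here's roughly what I mean as a Python/NumPy sketch (the function name is mine, and the summed-absolute-difference `dist` is just a placeholder to be swapped for whatever good metric turns up):

```python
import numpy as np

def reduce_images(images, keep=16, dist=None):
    """Sketch of the scheme above: pick the `keep` most mutually divergent
    images, then fold each remaining image into its nearest pick by
    averaging. `dist` is a placeholder pairwise metric."""
    if dist is None:
        dist = lambda a, b: np.abs(a.astype(int) - b.astype(int)).sum()

    n = len(images)
    d = np.array([[dist(images[i], images[j]) for j in range(n)]
                  for i in range(n)])

    # Seed with the image farthest from everything, then repeatedly add the
    # image whose minimum distance to the chosen set is largest
    # (greedy farthest-point selection).
    chosen = [int(d.sum(axis=1).argmax())]
    while len(chosen) < keep:
        rest = [i for i in range(n) if i not in chosen]
        chosen.append(max(rest, key=lambda i: min(d[i][c] for c in chosen)))

    # Assign each leftover image to its nearest chosen image, then average
    # each group down to a single representative image.
    groups = {c: [images[c].astype(float)] for c in chosen}
    for i in range(n):
        if i not in chosen:
            nearest = min(chosen, key=lambda c: d[i][c])
            groups[nearest].append(images[i].astype(float))
    return [np.mean(g, axis=0).astype(np.uint8) for g in groups.values()]
```

The interesting part is obviously the `dist` function, which is what the rest of this question is about.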
But what's a good metric to determine how alike two two-dimensional arrays are?
At the moment I'm doing abs(img1[n] - img2[n]) (yes, I'm hopeless). This can't deal with, for example, images that are complements (inversions) of each other: they're structurally the same picture, but a per-pixel difference scores them as completely different. I need something that takes the position of each pixel within the image into account, I think.
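For concreteness, here's the naive metric next to one alternative I've come across: zero-mean normalized cross-correlation (ZNCC), which I gather is a standard structural measure. I'm not claiming it's the right answer here, just illustrating the difference:

```python
import numpy as np

def sad(a, b):
    # Current approach: sum of absolute per-pixel differences.
    # Scores a complement as very different even though it is the
    # same picture structurally.
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def zncc(a, b):
    # Zero-mean normalized cross-correlation: 1.0 for identical images,
    # -1.0 for exact complements, near 0 for unrelated images.
    # Comparing abs(zncc) would treat an image and its negative alike.
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

So `1 - abs(zncc(a, b))` could serve as a distance that's blind to inversion and brightness offsets, if that's the behaviour I actually want.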
I have heard of the chi-square statistic being used to compare frames of video, but I'm not a math geek and would appreciate someone putting it through the blender for me, if that is what I need.
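From what I've read, the chi-square comparison for video frames is done on the intensity histograms rather than the raw pixels, something like the sketch below (the bin count is an arbitrary choice of mine). One thing I notice: since only histograms are compared, it ignores pixel position entirely, which may be exactly the opposite of what I said I wanted:

```python
import numpy as np

def chi_square_dist(a, b, bins=16):
    """Chi-square distance between the greyscale histograms of two
    8-bit images: 0 for identical histograms, larger means more
    different. Each bin's squared difference is weighted by the
    bin's total mass, so differences in rare intensity ranges
    count more than differences in common ones."""
    h1, _ = np.histogram(a, bins=bins, range=(0, 256))
    h2, _ = np.histogram(b, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    mask = (h1 + h2) > 0  # skip empty bins to avoid division by zero
    return float(0.5 * ((h1[mask] - h2[mask]) ** 2
                        / (h1[mask] + h2[mask])).sum())
```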