Here are the latest results from my early vision system. The upper left is the raw image from the webcam, the lower left is the greyscale conversion, and the upper right is my custom edge detector.
Notice how the lower right image filters out quite a bit of the natural shapes, leaving the tank as a more dominant set of features (the motherboard in the background also stands out).
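For reference, the greyscale step is nothing exotic, just a standard luma weighting of the colour channels. Something like this NumPy sketch (not my exact code, and the function name is my own):

```python
import numpy as np

def to_grey(rgb):
    """Convert an (H, W, 3) RGB uint8 image to greyscale
    using the ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    # Weighted sum over the colour axis, then back to 8-bit.
    return (rgb.astype(float) @ weights).astype(np.uint8)
```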
Are your algorithms based on already existing ones? Care to elaborate a bit on your method?
I originally tried Sobel, but it didn't work very well and took too much CPU time. The new method is, as far as I know, original, but I haven't done a comprehensive search of existing research, so someone else may have thought of it first.
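For anyone unfamiliar with it, the textbook Sobel operator convolves the greyscale image with two small kernels to estimate horizontal and vertical gradients, then combines them into an edge magnitude. A naive NumPy sketch (not my code; the per-pixel Python loop also shows why an unoptimised version eats CPU):

```python
import numpy as np

def sobel_magnitude(grey):
    """Gradient magnitude of a 2D greyscale array using the
    standard 3x3 Sobel kernels (borders left at zero)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                  # vertical gradient
    h, w = grey.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = grey[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # combined edge strength
```

In practice you'd vectorise the convolution or use a library routine, but even then Sobel fires on every texture and gradient in the scene, which is part of why it left the tank buried in noise.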
The second stage, which produces the lower right image, is completely original research. I found it using a research technique I like to call 'trying random ........ until you find something interesting'.