Computers use many different techniques for image recognition. The one I
am going to talk about is image recognition through edge mapping. Basically,
the way it works is that the computer captures an image, which is stored as a
value for each pixel, or picture element. The computer scans this grid of pixel
values and tries to detect patterns using various formulas; the short sketch
below shows one way such a grid can be held and scanned in code.
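This sketch is purely illustrative: the grid size and brightness values are
made up, and real images are far larger.

    # A tiny grayscale "image": a grid of brightness values, one per pixel.
    # 0 is black and 255 is white; real photographs are just much bigger grids.
    image = [
        [ 12,  15, 200, 210,  14],
        [ 10, 205, 215, 198,  11],
        [  9, 201, 209, 204,  13],
        [ 11,  14, 196, 207,  10],
    ]

    # "Scanning" the image simply means visiting every pixel value in turn.
    for row in image:
        print(" ".join(f"{pixel:3d}" for pixel in row))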
As you can see in the image below, every picture contains a certain amount of
noise. One way a computer can compensate for this is to register only the
extremes: either keep only the pixel values above a certain limit, or group
the values into a small number of bins. This way, minor variations and faint
details get colored over and wiped away, removing some of the noise from the
picture. This step is necessary because, unlike the human eye, the computer
cannot see through hazy images. Humans can recognize objects even when there
is a lot of noise in front of them, like a fog. We know from our own
experience what an object is even when something is distorting our view.
Mentally, we filter out the noise and fill in the blanks.
This is essentially what the computer is trying to do: it reduces the noise
and focuses only on the more predominant features, or the extremes.
Mathematical operations such as “thresholding” and “quantizing” reduce the
noise in the image, as in the images below (a short code sketch of both
operations follows this description). The first picture shows the actual
image captured by the computer. The second picture shows the image after it
has been filtered: by grouping pixel values, the computer can concentrate on
the more important features.
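Here is a minimal Python sketch of both operations, assuming the same kind of
pixel grid as above; the cutoff of 128 and the choice of four levels are
arbitrary illustrative values, not parameters from any particular system.

    # Toy grayscale image: brightness values from 0 (black) to 255 (white).
    image = [
        [ 12,  15, 200, 210,  14],
        [ 10, 205, 215, 198,  11],
        [  9, 201, 209, 204,  13],
        [ 11,  14, 196, 207,  10],
    ]

    def threshold(img, limit=128):
        # Keep only the extremes: each pixel becomes fully white or fully
        # black depending on whether its value is above the limit.
        return [[255 if p > limit else 0 for p in row] for row in img]

    def quantize(img, levels=4):
        # Group pixel values into a few coarse bins, so small variations
        # collapse into the same value and get "wiped away".
        step = 256 // levels
        return [[(p // step) * step for p in row] for row in img]

    print(threshold(image))  # dim noise vanishes, bright regions stay
    print(quantize(image))   # nearby values fall into the same bin

Both operations throw information away on purpose: the small fluctuations
that count as noise cannot survive the rounding.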
Finally, the third picture shows the image after an edge-mapping algorithm
has been applied to it. Basically, it marks the areas of the image where
there was major contrast between values, which gives you an outline of what
the image actually looks like. Now the computer has an image to look at,
study, and make comparisons with.
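As a rough illustration of that last step, here is one simple way such an
outline could be computed. This is a toy sketch, not the specific algorithm
behind the pictures; real systems typically use operators such as Sobel, and
the contrast cutoff of 50 is an arbitrary choice for the example.

    # Toy edge mapper: mark every pixel whose value differs sharply from
    # the neighbor to its right or directly below it.
    def edge_map(img, contrast=50):
        height, width = len(img), len(img[0])
        edges = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                right = abs(img[y][x] - img[y][x + 1]) if x + 1 < width else 0
                below = abs(img[y][x] - img[y + 1][x]) if y + 1 < height else 0
                if max(right, below) > contrast:
                    edges[y][x] = 255  # major contrast: this pixel is an edge
        return edges

    image = [
        [ 12,  15, 200, 210,  14],
        [ 10, 205, 215, 198,  11],
        [  9, 201, 209, 204,  13],
        [ 11,  14, 196, 207,  10],
    ]
    for row in edge_map(image):
        print(row)  # the 255s trace the border between dark and bright areas

Running the edge mapper on the quantized image rather than the raw one tends
to give cleaner outlines, since the noise has already been flattened away.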