Intelligent Machines
Deeper Vision
Researchers are making big strides toward low-cost systems that mimic human vision to give machines three-dimensional information about their environments. By building hardware that analyzes corresponding chunks of paired live images in parallel – as the human brain is thought to do – Tyzx, a startup in Menlo Park, CA, is making computerized depth perception fast enough that surveillance devices and robotic vehicles can incorporate it.
Creatures with two forward-facing eyes can perceive depth because their left and right eyes see from slightly different perspectives: nearby objects appear more displaced between the two views than distant ones. Using this apparent difference, called parallax, the brain swiftly determines the distance to an object. A machine equipped with a pair of cameras can also use parallax to see in three dimensions, but the computation required to find matching pixels in the two images has made stereo machine vision impractical for most situations.
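To make the geometry concrete, here is a minimal sketch in Python of how a rectified stereo pair converts parallax into distance. The focal length and baseline below are hypothetical values chosen for illustration, not Tyzx's camera parameters.

    def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
        # disparity_px: horizontal shift of a feature between the left and
        # right images, in pixels; focal_px: focal length in pixels;
        # baseline_m: separation of the two camera centers in meters.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # Nearby objects shift more between the two views than distant ones:
    print(depth_from_disparity(40))  # 2.1 meters away
    print(depth_from_disparity(4))   # 21.0 meters away

Because depth varies inversely with disparity, stereo vision is most precise up close, which suits tasks like tracking people in a room or steering a vehicle around nearby obstacles.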
Tyzx computer vision experts Gaile Gordon and John Woodfill invented an algorithm to speed the process. Rather than trying to find pixels with the same color and brightness, the algorithm seeks out left-right pairs of pixels that show a similar pattern of intensity contrast with their surrounding pixels. The researchers then built an integrated circuit that can search many groups of pixels simultaneously. They gave this chip a pair of “eyes,” and now “the image capture and the stereo computation all happen inside one relatively inexpensive, self-contained platform,” says Tyzx CEO Ron Buck.
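Tyzx has not published the chip's internals, but the kind of contrast-based matching described above can be sketched in software. In this illustrative Python version, the signature encoding, the 3-by-3 window, and the search range are all assumptions made for the example, not the company's actual design.

    import numpy as np

    def contrast_signature(img, y, x):
        # Describe a pixel by whether each of its 8 neighbors is brighter
        # than it is. Comparing such signatures, rather than raw brightness,
        # tolerates exposure differences between the two cameras.
        center = img[y, x]
        patch = img[y-1:y+2, x-1:x+2]
        return tuple((patch > center).flatten())

    def best_disparity(left, right, y, x, max_disp=32):
        # Slide along the same scanline of the right image and keep the
        # shift whose contrast signature agrees most with the left pixel's.
        # (y, x) must be at least one pixel in from the image border.
        target = contrast_signature(left, y, x)
        best_d, best_score = 0, -1
        for d in range(min(max_disp, x)):
            score = sum(a == b for a, b in
                        zip(target, contrast_signature(right, y, x - d)))
            if score > best_score:
                best_d, best_score = d, score
        return best_d

    left = np.random.randint(0, 255, (64, 64))
    right = np.roll(left, -8, axis=1)           # simulate an 8-pixel parallax shift
    print(best_disparity(left, right, 32, 40))  # expect 8

Repeating this search for every pixel is what makes stereo so expensive in software; the speedup Tyzx claims comes from running many such comparisons at once in dedicated hardware.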
Among the company’s early customers are federal security agencies – Buck says he can’t reveal which ones – that are using the technology to track suspicious individuals as they move through crowds and other changing backgrounds.