The door to paranoia opens benignly, and early. Just think of Santa. He knows when you are sleeping. He knows when you’re awake. He knows if you’ve been bad or good, for goodness’ sake. And he knows these things all the time, even though you can’t see him. Millions of kids all over the world happily and wholeheartedly believe in ubiquitous surveillance as a de facto piece of the annual Christmas present-getting machine. Parents just shake their heads in adoring wonder.
But those same parents might be shocked to learn how short the journey is from the pleasant surveillance fantasy of Santa to the freedom-squashing invasion of Big Brother. In the world detailed by George Orwell in the novel 1984, surveillance cameras follow every move a person makes, and the slightest misstep, or apparent misstep, summons the authorities. Now, similarly, police departments, government agencies, banks, merchants, amusement parks, sports arenas, nanny-watching homeowners, swimming-pool operators, and employers are deploying cameras, pattern recognition algorithms, databases of information, and biometric tools that, taken together, can be combined into automated surveillance networks able to track just about anyone, just about anywhere.
While none of us is under 24-hour surveillance yet, the writing is on the wall. As Scott McNealy, CEO of Sun Microsystems, starkly told reporters in 1999, “You already have zero privacy. Get over it.” The techno-entrepreneurs who are developing and marketing these tools anticipate good things to come, such as reduced crime rates in urban environments, computer interfaces that will read eye movements and navigate the Web for you, and fingerprint or facial recognition systems and other biometric technologies that guarantee your identity, eliminate the need for passwords, PINs and access cards, and even identify potential terrorists before they can strike.
But privacy advocates paint a far dimmer picture of this same future, accepting its reality while questioning whether it can be managed responsibly. “The technology is developing at the speed of light, but the privacy laws to protect us are back in the Stone Age,” says Barry Steinhardt, associate director of the American Civil Liberties Union, one of several groups that have tried, so far with little success, to introduce legislation aimed at protecting privacy. “We may not end up with an Orwellian society run by malevolent dictators, but it will be a surveillance society where none of the detail of our daily lives will escape notice and where much of that detail will be recorded.”
The Fifth Utility
In many ways, the drama of pervasive surveillance is being played out first in Orwell’s native land, the United Kingdom, which operates more closed-circuit cameras per capita than any other country in the world. This very public surveillance began in 1986 on an industrial estate near the town of King’s Lynn, approximately 100 kilometers north of London. Prior to the installation of three video cameras, a total of 58 crimes had been reported on the estate. None was reported over the next two years. In 1995, buoyed by that success, the government made matching grants available to other cities and towns that wanted to install public surveillance cameras, and things took off from there.
Most of these closed-circuit TV systems are installed in business districts or shopping centers by British Telecommunications, the national phone company, and jointly operated and managed by law enforcement and private industry. In addition, some townships are using BT to hook up video telephony, a technology that transmits video images over telephone lines, in a network that gives officials quick and easy remote access to the images. On another front, the U.K. Home Office, the government department responsible for internal affairs in England and Wales, is starting construction of what promises to be the world’s biggest road and vehicle surveillance network: a comprehensive system of cameras, vehicle and driver databases, and microwave and phone-based communications links that will be able to identify and track the movements of vehicles nearly nationwide. All told, the country’s electronic eyes are becoming so prevalent that Stephen Graham of the Centre for Urban Technology at the University of Newcastle upon Tyne has dubbed them a “fifth utility,” joining water, gas, electricity and telephone service.
The United States and many other parts of the developed world are not far behind in video surveillance. Just look at the cameras looking at you. They’re in ATMs, banks, stores, casinos, lobbies, hallways, desktops, and along highways, main streets and even side streets. And those are the cameras you can see. Companies like All Security Systems of Miami, FL, advertise Clock Cameras, Exit Sign Cameras, Smoke Detector Cameras, and Covert Tie and Button Cams, as well as Nanny Cams and other easily hidden eyes, some of which send video signals wirelessly to a recorder located elsewhere.
But cameras seem relatively benign compared to the new technology being developed and deployed. Until recently, closed-circuit systems fed video signals to monitors that human beings had to watch in real time, or sent the images to recording media for storage. Now, however, the job of spotting suspicious people and behavior in this stream of electronic imagery is becoming automatic: computers programmed with special algorithms match video pixel patterns to stored patterns associated with criminals or criminal actions, and the machines themselves pass initial judgment on whether a behavior is normal.
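To make that shift concrete, here is a minimal sketch of what “passing initial judgment” might look like in its simplest form: summarize the motion between video frames, then check whether the resulting profile sits close to any pattern previously labeled normal. The feature choice and tolerance here are illustrative assumptions, not any vendor’s actual algorithm.

```python
# A toy sketch of automated first-pass judgment, assuming grayscale
# frames arrive as NumPy arrays. The feature and threshold are
# invented for illustration; real systems use far richer matching.
import numpy as np

def motion_energy(prev_frame, frame):
    """Mean absolute pixel change between two consecutive frames."""
    return float(np.abs(frame.astype(int) - prev_frame.astype(int)).mean())

def looks_normal(profile, normal_profiles, tolerance=10.0):
    """Compare a motion profile (e.g., a sequence of per-second
    motion energies) against stored patterns labeled 'normal';
    anything too far from all of them gets flagged for human review."""
    probe = np.asarray(profile, dtype=float)
    return any(np.linalg.norm(probe - np.asarray(p, dtype=float)) < tolerance
               for p in normal_profiles)
```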
For example, last January at the Super Bowl in Tampa, FL, law enforcement agencies, without announcement, deployed a face recognition system from Viisage Technology of Littleton, MA. Cameras snapped face shots of fans entering the stadium. Computers instantly extracted a minimal set of features from each captured face, a so-called eigenface, and then compared the eigenfaces to those of criminals stored in a database. The system purportedly found 19 possible matches, although no one was arrested as a result of the test. Less than six months later, in mid-July, Tampa police sparked public protests after deploying a face recognition system from Visionics, of Jersey City, NJ, to scan city sidewalks for suspected criminals and runaways.
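In outline, eigenface matching is simple, which is part of why it was first out of the gate. The sketch below, assuming flattened grayscale face images and NumPy, shows the core steps: compute the principal components of a face gallery, project each face into that low-dimensional space, and call the nearest database entry a possible match if it is close enough. The function names and threshold are illustrative, not Viisage’s implementation.

```python
# A minimal eigenface sketch: faces are flattened grayscale vectors,
# 'gallery' is a (num_faces, num_pixels) array of database images.
# All names and the match threshold are illustrative assumptions.
import numpy as np

def build_eigenfaces(gallery, k=20):
    """Top-k principal components ('eigenfaces') of the gallery."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """A face's coordinates in eigenface space: the compact feature
    set the article describes being extracted from each shot."""
    return eigenfaces @ (face - mean)

def possible_match(probe_coords, gallery_coords, threshold):
    """Index of the nearest database face, or None if nothing is
    close enough to count as a possible match."""
    dists = np.linalg.norm(gallery_coords - probe_coords, axis=1)
    nearest = int(np.argmin(dists))
    return nearest if dists[nearest] < threshold else None
```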
And this is just the beginning of the technology being piloted and prototyped to watch you and judge your behavior. Beginning in 1997, the U.S. Defense Advanced Research Projects Agency (DARPA) funded some 20 projects under a three-year program called Video Surveillance and Monitoring. That effort has just gathered new momentum under a $50 million follow-up program known as Human ID at a Distance. The aim is to determine if it’s feasible to identify specific individuals at distances up to 150 meters.
Under the program, researchers at Carnegie Mellon University in Pittsburgh are investigating whether a remote sensing technique known as “hyperspectral imaging,” a technology typically used by satellites to find minerals or peer through military camouflage, can be adapted for identifying specific human beings by measuring the color spectrum emitted by their skin. Skin absorbs, reflects and emits distinct patterns of color, and those patterns are specific enough to individual people to serve as spectral signatures. Such systems already work. But according to Robert Collins, a computer scientist at Carnegie Mellon’s Robotics Institute, the process currently requires a person to sit stiffly in a chair as a sensor sweeps through hundreds of emitted wavelengths over a period of about five seconds. “Ideally, what will happen is we’ll find some small group of wavelengths that we can use to distinguish people,” explains Collins. That could reduce the scan time to a fraction of a second.
Another approach being developed involves a video-based network of sensors that would automatically measure such characteristics as leg length and waist width to provide, as Collins says, “the measurements you give to a tailor.” The idea here, he says, is that those numbers should be able to serve as a kind of body fingerprint for identifying specific individuals.
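Whether the features are skin spectra sampled at a handful of wavelengths or the leg-length and waist-width numbers of a body fingerprint, both ideas reduce to the same matching problem: compare a short vector of measurements against enrolled signatures and accept the nearest one within tolerance. Here is a minimal sketch; the wavelength bands, tolerance and data layout are assumptions made for illustration, not details of the Carnegie Mellon work.

```python
# A hedged sketch of signature matching for either technique.
import numpy as np

BANDS_NM = (540.0, 575.0, 650.0, 760.0)  # hypothetical skin-color bands

def spectral_signature(wavelengths, reflectance, bands=BANDS_NM):
    """Sample a measured reflectance curve at a few chosen bands,
    the 'small group of wavelengths' Collins hopes to find."""
    return np.interp(bands, wavelengths, reflectance)

def identify(probe, enrolled, max_dist=0.05):
    """Nearest enrolled signature within tolerance, else None.
    Works identically for body-measurement vectors: just enroll
    (leg length, waist width, ...) instead of spectra."""
    best_person, best_dist = None, max_dist
    for person, signature in enrolled.items():
        dist = float(np.linalg.norm(np.asarray(signature) - probe))
        dist /= np.sqrt(len(signature))  # normalize by vector length
        if dist < best_dist:
            best_person, best_dist = person, dist
    return best_person
```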
There is no shortage of cleverness when it comes to building the surveillance state. At the Georgia Institute of Technology, scientists are developing sensor-riddled “smart floors” that can identify people by the “force profiles” of their walking feet. Meanwhile, Princeton, NJ-based Sarnoff is working toward an antiterrorist technique that uses a special camera to identify individuals from a hundred meters off by the patterns of color, striation and speckles in their irises. This isn’t easy, since the iris and its elements move so quickly relative to a distant camera that the technical task bears some resemblance to “tracking a ballistic missile,” says Norman Winarsky, president of nVention, Sarnoff’s venture technology company. Still, the technology is coming.
Beyond identity is intention-and there are technologies in the works for divining that as well. IBM has introduced a software product called BlueEyes (see “Behind BlueEyes,” TR May 2001) that’s currently in use at retail stores to record customers’ facial expressions and eye movements, tracking the effectiveness of in-store promotions. And psychologist Jeffrey Cohn of Carnegie Mellon’s Robotics Institute and colleagues have been trying to teach machines an even more precise way to detect facial expressions.
From video signals, the Carnegie Mellon system detects and tracks both invariant aspects of a face, such as the distance between the eyes, and transient ones, like skin furrows and smile wrinkles. These raw measurements are then classified as combinations of elemental facial actions. Finally, a neural network correlates combinations of these measurable units to actual expressions. While this falls short of robotic detection of human intentions, many facial expressions reflect human emotions, such as fear, happiness or rage, which, in turn, often serve as visible signs of intentions.
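Schematically, that last step is a small classifier: a vector of measured action-unit intensities goes in, a probability for each expression label comes out. The sketch below assumes a one-hidden-layer network with weights already trained on labeled video; the labels, layer sizes and weights are illustrative assumptions, not Cohn’s actual system.

```python
# A hedged sketch of the action-units-to-expression step.
import numpy as np

EXPRESSIONS = ("neutral", "happiness", "fear", "rage")

def expression_scores(action_units, w1, b1, w2, b2):
    """Map a vector of facial action-unit intensities (the elemental
    actions distilled from eye spacing, furrows and wrinkles) to a
    probability for each expression label."""
    hidden = np.tanh(w1 @ action_units + b1)  # one hidden layer
    logits = w2 @ hidden + b2
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return dict(zip(EXPRESSIONS, exp / exp.sum()))
```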
Cohn points out that this particular work is just part of the team’s more encompassing “goal of developing computer systems that can detect human activity, recognize the people involved, understand their behavior, and respond appropriately.” In short, the effort could help lead to the kind of ubiquitous surveillance system that can automatically scan collective human activity for signs of anything from heart-attack-inducing Type-A behavior to sexual harassment to daydreaming at the wheel to homicidal rage.
The Good, the Bad and the Well-Intentioned
The list of emerging technological wonders goes on and on, which is why many observers argue it’s no longer a question of whether ubiquitous surveillance will be applied, but under what guidelines it will operate, and to what end.
“Like most powerful technologies, total surveillance will almost certainly bring both good and bad things into life,” says James Wayman, a former National Security Agency contractor who now directs human identification research at San Jose State University in California. Specifically, he notes, it will combine laudable benefits in convenience and public safety with a potentially lamentable erosion of privacy.
These contradictory values often trigger vigorous debate over whether it will all be worth it. The glass-half-full crowd contends that the very infrastructure of surveillance that conjures fears of Big Brother will actually make life easier and safer for most people. Consider the benefits of the “computer-aided drowning detection and prevention” system that Boulogne, France-based Poseidon Technologies has installed in nine swimming pools in France, England, the Netherlands and Canada. In these systems, a collection of overhead and in-pool cameras relentlessly monitors pool activity. The video signals feed into a central processor running a machine perception algorithm that can effectively spot when active nonwater objects, such as swimmers, become still for more than a few seconds. When that happens, a red alarm light flashes at a poolside laptop workstation and lifeguards are alerted via waterproof pagers. Last November, a Poseidon system at the Jean Blanchet Aquatic Center in Ancenis, Loire-Atlantique, France, alerted lifeguards in time to rescue a swimmer on the verge of drowning. Pulled from the water unconscious, the swimmer walked away from a hospital the next day.
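The alarm logic behind such a system is simple to state: track each swimmer, and if one stops moving for more than a few seconds, alert the lifeguards. Here is a minimal sketch of that timing rule, with the object tracking assumed to exist upstream and the thresholds invented for illustration rather than taken from Poseidon.

```python
# A minimal sketch of still-swimmer detection. The radius and delay
# are illustrative parameters, not Poseidon's actual values.
import math

STILL_RADIUS_M = 0.3   # movement inside this circle counts as "still"
ALARM_AFTER_S = 10.0   # seconds of stillness before alerting lifeguards

class StillnessMonitor:
    def __init__(self):
        self._anchor = {}  # swimmer id -> (anchor position, time stopped)

    def update(self, swimmer_id, position, now):
        """Feed each tracked swimmer's (x, y) position in meters and a
        timestamp in seconds; returns True when an alarm should fire."""
        anchor_pos, since = self._anchor.get(swimmer_id, (position, now))
        if math.dist(position, anchor_pos) > STILL_RADIUS_M:
            self._anchor[swimmer_id] = (position, now)  # moved: reset clock
            return False
        self._anchor[swimmer_id] = (anchor_pos, since)
        return now - since > ALARM_AFTER_S
```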
Similarly, when cell phones and other mobile gadgetry begin to ship with embedded Global Positioning System transponders, it will be possible to pinpoint the carrier and quickly come to his or her aid, if necessary. Such transponders are already built into many new cars (see “The Commuter Computer,” TR June 2001). A click of a button or the triggering of an air bag sends a call to a service center, where agents can then direct emergency personnel to the vehicle, even if the occupants are unconscious. A public ubiquitous surveillance system could also enhance safety by noticing, for example, if a car hits you or if large, unauthorized crowds start congregating around an accident or altercation. As with the car rescue systems, a person’s plight could be recognized and help dispatched almost instantly, much as air bags now deploy immediately on impact.
And few dispute surveillance’s ability to deter crime. Recent British government reports cite closed-circuit TV as a major reason for declining crime rates. After these systems were put in place, the town of Berwick reported that burglaries fell by 69 percent; in Northampton overall crime decreased by 57 percent; and in Glasgow, Scotland, crime slumped by 68 percent. Public reaction in England has been mixed, but many embrace the technology. “I am prepared to exchange a small/negligible amount of privacy loss so I don’t have to be caught up in yet another bomb blast/bomb scare,” wrote one London computer programmer in an online discussion of the technology.
Do the developers of this controversial technology weigh the pros and cons of their creations? Robert Collins of Carnegie Mellon concedes that much of the work that might fall into the surveillance category conjures an Orwellian queasiness, but he joins a veritable chorus of colleagues who say it’s not their place to serve as gatekeepers over how the technology is ultimately used. “We who are working on this are not so interested in applying it to surveillance and Big Brother stuff,” Collins says. “We’re making computers that can interact with people better.” Indeed, Collins notes that he and his colleagues are motivated by the notion of “pervasive computing,” in which the techno-environment becomes aware of its human occupants so that computers and other gadgets can adjust to human needs. The way it is now, he says, humans have to accommodate the limitations of machines.
Jonathon Philips, manager of DARPA’s Human ID at a Distance program, puts it another way: “We develop the technology. The policy and how you implement them is not my province.”
So who is watching the gate? Well, the courts are slowly getting involved. A U.S. Supreme Court decision last June determined that in the absence of a search warrant, the government’s use of a thermal imaging device to monitor heat coming off the walls of a suspected marijuana grower’s private residence in Florence, OR, violated the Fourth Amendment prohibition against “unreasonable searches and seizures.” The ruling could have far-reaching consequences for how new, more powerful surveillance technologies can be deployed. Overall, however, responsibility for managing and regulating surveillance technology is up for grabs in the United States, even as the technology proliferates. And so whether society goes Orwellian or not could well hinge on how responsibly the databases, biometric details and all the rest are managed and protected. After all, notes the ACLU’s Steinhardt, it’s a small step from technological advance to technological abuse.
Take the fact that the faces of a large portion of the driving population are being digitized by motor vehicle agencies and placed into databases, says Steinhardt. It isn’t much of a stretch to extend the system to a Big Brother-like nationwide identification and tracking network. Or consider that the Electoral Commission of Uganda has retained Viisage Technology to implement a “turnkey face recognition system” capable of enrolling 10 million voter registrants within 60 days. By generating a database containing the faceprint of every one of the country’s registered voters, and combining it with algorithms able to scour all 10 million images within six seconds to find a match, the commission hopes to reduce voter registration fraud. But once such a database is compiled, notes John Woodward, a former CIA operations officer who managed spies in several Asian countries and who’s now an analyst with the Rand Corporation, it could be employed for tracking and apprehending known or suspected political foes. Woodward calls that “function creep.”
Function creep is where things get really dicey for privacy advocates. Several grass-roots efforts now under way seek to rein in surveillance technology through more responsible privacy legislation. The Privacy Coalition, a nonpartisan collection of consumer, civil liberties, labor and family-based groups, is trying to get federal and state lawmakers to commit to its “Privacy Pledge,” which contains, among other things, a vow to develop independent oversight of public surveillance technology and limit the collection of personal data. And several organizations, including the AFL-CIO, Communications Workers of America, 9to5, National Association of Working Women and the United Auto Workers, are supporting legislation to restrict electronic monitoring of employees. As Steinhardt declares, “We can’t leave this to systems designers or the marketplace.”
In spite of these broad efforts, a number of factors, not the least of which is disagreement in Washington about what form such legislation should take, are making it difficult to put words into action. Last year Congress debated the Notice of Electronic Monitoring Act, which would have required companies to notify employees if they were being watched. Although that legislation died in committee, it will probably resurface this year. As far as individual state laws are concerned, only Connecticut requires employers to tell employees if they are being monitored.
Which leads to the question of what exactly constitutes “private” activity. As former spymaster Woodward observes, a total-surveillance society will not actually expose individuals that much more than ordinary public circulation does now. “Once you leave your house and enter public spaces,” he says, “just about everyone you can see can see you right back.” In other words, you do not walk around most of the day with an expectation of privacy. Your face is not private, so if a camera sees you, it’s no big deal. What’s more, asks Woodward, even if rich and powerful entities, such as the government or megacorporations, had sole access to a system capable of watching everyone all of the time, why would they bother? “The bottom line is that most of us are very boring. We flatter ourselves to think that someone is building a multibillion-dollar system to watch us,” he says.
Even if public opinion does manage to slow down the deployment of surveillance infrastructure, no one involved in the debate thinks it will stop some form of Big Brother from arriving eventually. In his 1998 book The Transparent Society, which is well known in the privacy advocacy community, science fiction author and technology watcher David Brin argues that society inevitably will have to choose between two versions of ubiquitous surveillance: in one, only the rich and powerful use and control the system to their own advantage; in the second, more democratic future, the watchers can also be watched. Brin concedes that the latter version would mean everybody’s laundry hung out in public view, but the transparency would at least be mutual. Rent a porn video and your wife knows it; but if she drives to your best buddy’s house four times a week while you’re at the office, you’ll know that also.
Whether or not the coming era of total surveillance fits neatly into one of Brin’s scenarios will be determined by a complex equation encompassing technological development and the decisions that local, state and federal governments make. The question largely boils down to this: is privacy a right or a privilege? Most Americans assume it is a right, as in our “right to privacy.” But the truth of the matter is that privacy isn’t guaranteed by the Constitution. It is implied, certainly, but not assured. This subtle difference is being tested right now, within our own neighborhoods and workplaces.