Intelligent Machines

Technology Lessons from the Wikileaks Saga

The government can do little to stop digital leaks–but it could do a better job tracking the source.

There’s already debate about whether Wikileaks’s release of 92,000 classified documents on the war in Afghanistan was more of a milestone in the annals of national security and the press than the 1971 leak of the Pentagon Papers, the U.S. government’s classified report on the Vietnam War. It’s clear enough, though, that today’s technological landscape severely limits the government’s options for fighting back against such leaks, even as it provides a range of tools to protect against future ones.

Confronted with the release of classified documents on the Wikileaks website–followed by nearly as many news reports on the phenomenon–the U.S. government’s reaction was calculatedly mild. Government officials appear to have concluded that there was no way to put the genie back in the bottle. Instead of raging against the storm, the government emphasized its displeasure while suggesting there wasn’t much to see: the documents didn’t say much that the public didn’t already know. As for the many military field reports among them, the Pentagon intimated that such reports weren’t taken as instant, literal truth even at the time they were filed.

In the Pentagon Papers case, the executive branch was not as resigned–and the newspapers publishing the leaked papers, while fiercely defending their right to do so, understood that the courts had a role in resolving the conflict. An injunction was briefly issued preventing the Times from publishing more, and the Times respected the order, implicitly allowing that it was for the U.S. government–in the form of the judiciary–to decide what counted as secrets so grave that they could not be published. (Of course, during that period the Washington Post took up the mantle of publishing from the Pentagon Papers, and was then drawn into the litigation.)

Wikileaks is not only more difficult to sue in practice, with its principals outside U.S. borders and its servers located in Sweden, but also in theory: its founder seems uninterested in what the U.S. courts might have to say about its activities. The site is surprisingly Web 1.0-ish. Despite the tip of the hat to wikis, where anyone can edit, its fame lies in simply having acquired the documents and then published them, using technologies and server configurations unchanged in the past 10 or 15 years. But the Internet around it has changed, and a 1971 approach to dealing with the leak would likely have been doomed: once the information was out there, more and more sites could mirror it, or it could find its way to peer-to-peer networks. That Wikileaks says it does not publish everything it encounters is a small silver lining: should its servers be shut down through, say, a Swedish court order or even a hostile cyber- or other attack, future leaks might go straight to peer-to-peer networks, with no mediating presence to appeal to. (Wikileaks says it held some documents back to minimize “collateral damage.”)

What sort of damage might that be? Military field reports could offer easy ways of identifying intelligence sources, who might then face reprisal for having cooperated with U.S. authorities. In that sense, the U.S. government is right in observing that the documents might not be very interesting, but their release could indeed damage national security–and the security of people who tried to help. That’s the worst of both worlds. It’s too easily lost in the current analyses that even in a democracy there can be information that should genuinely remain secret.

What, then, should the U.S. government do over the longer term to deal with the problem? One response would be to make it easier to identify and prosecute leakers. Digital documents move around easily, and even classified documents are typically more useful the more people within government can see them. Instead of further restricting distribution–exacerbating a problem with intelligence sharing–there could be more assiduous on-the-fly watermarking. One official’s view of a document could be made subtly, unnoticeably different from everyone else’s, in a way that doesn’t change its use and meaning. Should the document be leaked, its very existence could then help identify whose particular copy was the upstream source. This is a medium-term technology fix–steganographic watermarking–and one that would put whistle-blowers in the position of preparing to accept the consequences that could flow from being known as the leaker. (Even here, the sheer volume of documents, along with an examination of what wasn’t released, likely puts the authorities in a good position to triangulate on who shared them.)
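
To make the idea concrete, here is a minimal sketch of how per-recipient watermarking of a plain-text document might work. Everything in it is hypothetical–the embed and extract functions, the 8-bit recipient ID, the zero-width-character encoding–and a fielded system would be far more sophisticated; the point is only that each official’s copy can read identically while differing invisibly.

```python
# A minimal sketch of per-recipient text watermarking (hypothetical scheme).
# Bits of a recipient ID are encoded as zero-width Unicode characters,
# invisible in most renderings, appended to the first words of the text,
# so every copy reads the same but each differs byte for byte.

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed(text: str, recipient_id: int, id_bits: int = 8) -> str:
    """Return a copy of `text` invisibly marked with `recipient_id`."""
    bits = format(recipient_id, f"0{id_bits}b")
    words = text.split(" ")
    if len(words) < len(bits):
        raise ValueError("document too short to carry the watermark")
    marked = [w + (ZW1 if b == "1" else ZW0) for w, b in zip(words, bits)]
    return " ".join(marked + words[len(bits):])

def extract(text: str, id_bits: int = 8) -> int:
    """Recover the recipient ID from a leaked copy."""
    bits = [c for c in text if c in (ZW0, ZW1)][:id_bits]
    return int("".join("1" if c == ZW1 else "0" for c in bits), 2)

report = "Patrol observed unusual activity near the eastern road checkpoint"
copy_for_official_42 = embed(report, 42)
assert copy_for_official_42 != report       # the bytes differ per recipient
assert extract(copy_for_official_42) == 42  # a leaked copy names its source
```

A real scheme would also have to survive reformatting, copy-and-paste that strips unusual characters, and collusion among recipients who compare their copies–which is why watermarking in practice leans on redundancy and sturdier carriers, such as subtle variations in wording or layout.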

More important, there appears to be a consensus that too much is classified, and for too long. As classification labels proliferate on material whose release wouldn’t threaten national security, even the most assiduous handler of classified information finds it harder to see why the distinctions matter. Worse, leaking is often an instrument of official government policy. This doesn’t always involve classified information, but the more it becomes common practice, approved or initiated at the highest levels, to leak information rather than to release it directly, the harder it is to make the general ethical case that secrets are meant to be kept. Put together a culture of excessive classification with one of strategic, “approved” leaking, and once again there’s the worst of both worlds.

This suggests that classification might be more effective if applied as the exception rather than the rule, and only to small, specifically enumerated classes of information whose release could harm national security in highly specific ways: intelligence sources (like who told the government a secret), intelligence methods (like how the government is able to quietly surveil an enemy), and nuclear weapons secrets. (Even the last category appears overprotected, given the amount of information already in the public domain. The biggest bottleneck against nuclear proliferation may lie in the physical material rather than in know-how.)

Finally: a lesson for any group or institution wanting to keep a major, identity-defining secret: the distance between the face one presents to the world and the face presented inward to oneself can no longer be allowed to grow too great. Like water finding its level, the inward face will become known. The truth, or at least more of its constituent parts, will out. In the big picture, that’s a good thing. Those who have the most to fear from an open environment are those with closed agendas, for whom public debate is a threat rather than an opportunity. Long-term strength lies in persuasion grounded in fact, rather than in carefully constructed artifice.

Jonathan Zittrain is an Internet law professor at Harvard Law School and a cofounder and faculty codirector of Harvard’s Berkman Center for Internet & Society.