Terror's Server
Fraud, gruesome propaganda, terror planning: the Net enables it all. The online industry can help fix it.
Two hundred two people died in the Bali, Indonesia, disco bombing of October 12, 2002, when a suicide bomber blew himself up on a tourist-bar dance floor, and then, moments later, a second bomber detonated an explosives-filled Mitsubishi van parked outside. Now, the mastermind of the attacks – Imam Samudra, a 35-year-old Islamist militant with links to al-Qaeda – has written a jailhouse memoir that offers a primer on the more sophisticated crime of online credit card fraud, which it promotes as a way for Muslim radicals to fund their activities.
Law enforcement authorities say evidence collected from Samudra’s laptop computer shows he tried to finance the Bali bombing by committing acts of fraud over the Internet. And his new writings suggest that online fraud – which in 2003 cost credit card companies and banks $1.2 billion in the United States alone – might become a key weapon in terrorist arsenals, if it’s not already. “We know that terrorist groups throughout the world have financed themselves through crime,” says Richard Clarke, the former U.S. counterterrorism czar for President Bush and President Clinton. “There is beginning to be a reason to conclude that one of the ways they are financing themselves is through cyber-crime.”
Online fraud would thereby join the other major ways in which terrorist groups exploit the Internet. The September 11 plotters are known to have used the Internet for international communications and information gathering. Hundreds of jihadist websites are used for propaganda and fund-raising purposes and are as easily accessible as the mainstream websites of major news organizations. And in 2004, the Web was awash with raw video of hostage beheadings perpetrated by followers of Abu Musab al-Zarqawi, the Jordanian-born terror leader operating in Iraq. This was no fringe phenomenon. Tens of millions of people downloaded the video files, a kind of vast medieval spectacle enabled by numberless Web hosting companies and Internet service providers, or ISPs. “I don’t know where the line is. But certainly, we have passed it in the abuse of the Internet,” says Gabriel Weimann, a professor of communications at the University of Haifa, who tracks use of the Internet by terrorist groups.
Meeting these myriad challenges will require new technology and, some say, stronger self-regulation by the online industry, if only to ward off the more onerous changes or restrictions that might someday be mandated by legal authorities or by the security demands of business interests. According to Vinton Cerf, a founding father of the Internet who codesigned its protocols, extreme violent content on the Net is “a terribly difficult conundrum to try and resolve in a way that is constructive.” But, he adds, “it does not mean we shouldn’t do anything. The industry has a fair amount of potential input, if it is to try to figure out how on earth to discipline itself. The question is, which parts of the industry can do it?” The roadblocks are numerous, he notes: information can literally come from anywhere, and even if major industry players agree to restrictions, Internet users themselves could obviously go on sharing content. “As always, the difficult question will be, Who decides what is acceptable content and on what basis?”
Some work is already going on in the broader battle against terrorist use of the Internet. Research labs are developing new algorithms aimed at making it easier for investigators to comb through e-mails and chat-room dialogue to uncover criminal plots. Meanwhile, the industry’s anti-spam efforts are providing new tools for authenticating e-mail senders using cryptography and other methods, which will also help to thwart fraud; clearly, terrorist exploitation of the Internet adds a national-security dimension to these efforts. The question going forward is whether the terrorist use of the medium, and the emerging responses, will help usher in an era in which the distribution of online content is more tightly controlled and tracked, for better or worse.
The Rise of Internet Terror
Today, most experts agree that the Internet is not just a tool of terrorist organizations, but is central to their operations. Some say that al-Qaeda’s online presence has become more potent and pertinent than its actual physical presence since the September 11 attacks. “When we say al-Qaeda is a global ideology, this is where it exists – on the Internet,” says Michael Doran, a Near East scholar and terrorism expert at Princeton University. “That, in itself, I find absolutely amazing. Just a few years ago, an organization like this would have been more cultlike in nature. It wouldn’t be able to spread around the world the way it does with the Internet.”
The universe of terror-related websites extends far beyond al-Qaeda, of course. According to Weimann, the number of such websites has leapt from only 12 in 1997 to around 4,300 today. (This includes sites operated by groups like Hamas and Hezbollah, as well as by groups in South America and elsewhere.) “In seven years it has exploded, and I am quite sure the number will grow next week and the week after,” says Weimann, who described the trend in his report “How Modern Terrorism Uses the Internet,” published by the United States Institute of Peace, and who is now at work on a book, Terrorism and the Internet, due out later this year.
These sites serve as a means to recruit members, solicit funds, and promote and spread ideology. “While the [common] perception is that [terrorists] are not well educated or very sophisticated about telecommunications or the Internet, we know that that isn’t true,” says Ronald Dick, a former FBI deputy assistant director who headed the FBI’s National Infrastructure Protection Center. “The individuals that the FBI and other law enforcement agencies have arrested have engineering and telecommunications backgrounds; they have been trained in academic institutes as to what these capabilities are.” (Militant Islam, despite its roots in puritanical Wahhabism, taps the well of Western liberal education: Khalid Sheikh Mohammed, the principal September 11 mastermind, was educated in the U.S. in mechanical engineering; Osama bin Laden’s deputy Ayman al-Zawahiri was trained in Egypt as a surgeon.)
The Web gives jihad a public face. But on a less visible level, the Internet provides the means for extremist groups to surreptitiously organize attacks and gather information. The September 11 hijackers used conventional tools like chat rooms and e-mail to communicate and used the Web to gather basic information on targets, says Philip Zelikow, a historian at the University of Virginia and the former executive director of the 9/11 Commission. “The conspirators used the Internet, usually with coded messages, as an important medium for international communication,” he says. (Some aspects of the terrorists’ Internet use remain classified; for example, when asked whether the Internet played a role in recruitment of the hijackers, Zelikow said he could not comment.)
Finally, terrorists are learning that they can distribute images of atrocities with the help of the Web. In 2002, the Web facilitated wide dissemination of videos showing the beheading of Wall Street Journal reporter Daniel Pearl, despite FBI requests that websites not post them. Then, in 2004, Zarqawi made the gruesome tactic a cornerstone of his terror strategy, starting with the murder of the American civilian contractor Nicholas Berg – which law enforcement agents believe was carried out by Zarqawi himself. From Zarqawi’s perspective, the campaign was a rousing success. Images of orange-clad hostages became a headline-news staple around the world – and the full, raw videos of their murders spread rapidly around the Web. “The Internet allows a small group to publicize such horrific and gruesome acts in seconds, for very little or no cost, worldwide, to huge audiences, in the most powerful way,” says Weimann.
And there’s a large market for such material. According to Dan Klinker, webmaster of a leading online gore site, Ogrish.com, consumption of such material is brisk. Klinker, who says he operates from offices in Western and Eastern Europe and New York City, says his aim is to “open people’s eyes and make them aware of reality.” It’s clear that many eyes have taken in these images thanks to sites like his. Each beheading video has been downloaded from Klinker’s site several million times, he says, and the Berg video tops the list at 15 million. “During certain events (beheadings, etc.) the servers can barely handle the insane bandwidths – sometimes 50,000 to 60,000 visitors an hour,” Klinker says.
Avoiding the Slippery Slope
To be sure, Internet users who want to screen out objectionable material can purchase a variety of filtering-software products that attempt to block sexual or violent content. But they are far from perfect. And though a hodgepodge of Web page rating schemes is in various stages of implementation, no universal rating system is in effect – and none is mandated – that would make consumer-chosen filters more effective.
But passing laws aimed at allowing tighter filtering – to say nothing of actually mandating filtering – is problematic. Laws intended to block minors’ access to pornography, like the Communications Decency Act and the Child Online Protection Act, have been struck down in the courts on First Amendment grounds, and the same fate has befallen some state laws, often for good reason: the filtering tools sometimes throw out the good with the bad. “For better or worse, the courts are more concerned about protecting the First Amendment rights of adults than protecting children from harmful material,” says Ian Ballon, an expert on cyberspace law and a partner at Manatt, Phelps, and Phillips in Palo Alto, CA. Pornography access, he says, “is something the courts have been more comfortable regulating in the physical world than on the Internet.” The same challenges pertain to images of extreme violence, he adds.
The Federal Communications Commission enforces “decency” on the nation’s airwaves as part of its decades-old mission of licensing and regulating television and radio stations. Internet content, by contrast, is essentially unregulated. And so, in 2004, as millions of people watched video of beheadings on their computers, the FCC fined CBS $550,000 for broadcasting the exposure of singer Janet Jackson’s breast during the Super Bowl halftime show on television.
“While not flatly impossible, [Internet content] regulation is hampered by the variety of places around the world at which it can be hosted,” says Jonathan Zittrain, codirector of the Berkman Center for Internet and Society at Harvard Law School – and that’s to say nothing of First Amendment concerns. As Zittrain sees it, “it’s a gift that the sites are up there, because it gives us an opportunity for counterintelligence.”
As a deterrent, criminal prosecution has also had limited success. Even when those suspected of providing Internet-based assistance to terror cells are in the United States, obtaining convictions can be difficult. Early last year, under provisions of the Patriot Act, the U.S. Department of Justice charged Sami Omar al-Hussayen, a student at the University of Idaho, with using the Internet to aid terrorists. The government alleged that al-Hussayen maintained websites that promoted jihadist-related activities, including funding terrorists. But his defense argued that he was simply using his skills to promote Islam and wasn’t responsible for the sites’ radical content. The judge reminded the jury that, in any case, the Constitution protects most speech. The jury cleared al-Hussayen on the terrorism charges but deadlocked on visa-related charges; al-Hussayen agreed to return home to his native Saudi Arabia rather than face a retrial on the visa counts.
Technology and ISPs
But the government and private-sector strategy for combating terrorist use of the Internet has several facets. Certainly, agencies like the FBI and the National Security Agency – and a variety of watchdog groups, such as the Site Institute, a nonprofit organization based in an East Coast location that it asks not be publicized – closely monitor jihadist and other terrorist sites to keep abreast of their public statements and internal communications, to the extent possible.
It’s a massive, needle-in-a-haystack job, but it can yield a steady stream of intelligence tidbits and warnings. For example, the Site Institute recently discovered, on a forum called the Jihadi Message Board, an Arabic translation of a U.S. Air Force Web page that mentioned an American airman of Lebanese descent. According to Rita Katz, executive director of the Site Institute, the jihadist page added, in Arabic, “This hypocrite will be going to Iraq in September of this year [2004] – I pray to Allah that his cunning leads to his slaughter. I hope that he will be slaughtered the Zarqawi’s way, and then [go from there] to the lowest point in Hell.” The Site Institute alerted the military. Today, on one of its office walls hangs a plaque offering the thanks of the Air Force Office of Special Investigations.
New technology may also give intelligence agencies the tools to sift through online communications and discover terrorist plots. For example, research suggests that people with nefarious intent tend to exhibit distinct patterns in their use of e-mails or online forums like chat rooms. Whereas most people establish a wide variety of contacts over time, those engaged in plotting a crime tend to keep in touch only with a very tight circle of people, says William Wallace, an operations researcher at Rensselaer Polytechnic Institute.
This phenomenon is quite predictable. “Very few groups of people communicate repeatedly only among themselves,” says Wallace. “It’s very rare; they don’t trust people outside the group to communicate. When 80 percent of communications is within a regular group, this is where we think we will find the groups who are planning activities that are malicious.” Of course, not all such groups will prove to be malicious; the odd high-school reunion will crop up. But Wallace’s group is developing an algorithm that will narrow down the field of so-called social networks to those that warrant the scrutiny of intelligence officials. The algorithm is scheduled for completion and delivery to intelligence agencies this summer.
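To make the idea concrete, here is a minimal sketch of the insularity heuristic Wallace describes. It is not the Rensselaer team’s actual algorithm, whose details are not public; the message format, function names, and the use of the 80 percent figure as a threshold are assumptions drawn only from his description.

```python
# A minimal, hypothetical sketch of the insular-group heuristic described
# above -- not the Rensselaer team's actual algorithm. It flags candidate
# groups whose members direct most of their messages at one another.

def internal_ratio(group, messages):
    """Fraction of messages sent by group members that stay inside the group."""
    group = set(group)
    sent = inside = 0
    for sender, recipient in messages:
        if sender in group:
            sent += 1
            if recipient in group:
                inside += 1
    return inside / sent if sent else 0.0

def flag_insular_groups(candidate_groups, messages, threshold=0.8):
    """Return the candidate groups whose internal-communication ratio meets
    the threshold (the 80 percent figure cited above)."""
    return [g for g in candidate_groups
            if internal_ratio(g, messages) >= threshold]

# Toy usage: each message is a (sender, recipient) pair.
messages = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
            ("b", "c"), ("d", "e"), ("d", "x"), ("e", "y")]
print(flag_insular_groups([{"a", "b", "c"}, {"d", "e"}], messages))
# Only the first group is flagged; the second sends most of its mail outside itself.
```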
And of course, the wider fight against spam and online fraud continues apace. One of the greatest challenges facing anti-fraud forces is the ease with which con artists can doctor their e-mails so that they appear to come from known and trusted sources, such as colleagues or banks. In a scam known as “phishing,” this tactic can trick recipients into revealing bank account numbers and passwords. Preventing such scams, according to Clarke, “is relevant to counterterrorism because it would prevent a lot of cyber-crime, which may be how [terrorists] are funding themselves. It may also make it difficult to assume identities for one-time-use communications.”
New e-mail authentication methods may offer a line of defense. Last fall, AOL endorsed a Microsoft-designed system called Sender ID that closes certain security loopholes and matches the IP (Internet Protocol) address of the server sending an inbound e-mail against a list of servers authorized to send mail from the message’s purported source. Yahoo, the world’s largest e-mail provider with some 40 million accounts, is now rolling out its own system, called DomainKeys, which tags each outgoing e-mail message with a cryptographic signature that the recipient can use to verify that the message came from the purported domain. Google is using the technology with its Gmail accounts, and other big ISPs, including EarthLink, are following suit.
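In spirit, the two approaches look something like the sketch below: a Sender ID-style check that the connecting server’s address appears on the purported domain’s authorized list, and a DomainKeys-style check that a signature attached by the sending domain verifies on receipt. The hard-coded allow-list, the hypothetical key, and the HMAC standing in for DomainKeys’ public-key signatures are simplifications for illustration; real deployments publish this information in DNS.

```python
# A simplified sketch of the two approaches described above, for illustration
# only. Real Sender ID consults SPF-style DNS records, and real DomainKeys
# verifies a public-key signature published in DNS; here a hard-coded
# allow-list and an HMAC stand in so the example needs only the standard library.
import hashlib
import hmac

AUTHORIZED_SENDERS = {
    "example.com": {"192.0.2.10", "192.0.2.11"},  # hypothetical mail servers
}

def sender_id_check(purported_domain, connecting_ip):
    """Sender ID-style check: did the mail arrive from a server the
    purported domain has authorized to send on its behalf?"""
    return connecting_ip in AUTHORIZED_SENDERS.get(purported_domain, set())

def sign_message(body, domain_key):
    """DomainKeys-style signing, stood in for here by an HMAC over the body."""
    return hmac.new(domain_key, body.encode(), hashlib.sha256).hexdigest()

def verify_message(body, signature, domain_key):
    """Recipient-side check that the message came from the domain holding
    the key and was not altered in transit."""
    return hmac.compare_digest(sign_message(body, domain_key), signature)

# Toy usage with a hypothetical per-domain key.
key = b"example.com-secret"
sig = sign_message("Your statement is ready.", key)
print(sender_id_check("example.com", "192.0.2.10"))          # True
print(verify_message("Your statement is ready.", sig, key))  # True
print(verify_message("Send your password.", sig, key))       # False -- forged
```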
Finally, the bigger ISPs are stepping in with their own reactive efforts. Their “terms of service” are usually broad enough to allow them the latitude to pull down objectionable sites when asked to do so. “When you are talking about an online community, the power comes from the individual,” says Mary Osako, Yahoo’s director of communications. “We encourage our users to send [any concerns about questionable] content to us – and we take action on every report.”
Too Little, or Too Much
But most legal, policy, and security experts agree that these efforts, taken together, still don’t amount to a real solution. The new anti-spam initiatives represent only the latest phase of an ongoing battle. “The first step is, the industry has to realize there is a problem that is bigger than they want to admit,” says Peter Neumann, a computer scientist at SRI International, a nonprofit research institute in Menlo Park, CA. “There’s a huge culture change that’s needed here to create trustworthy systems. At the moment we don’t have anything I would call a trustworthy system.” Even efforts to use cryptography to confirm the authenticity of e-mail senders, he says, are a mere palliative. “There are still lots of problems” with online security, says Neumann. “Look at it as a very large iceberg. This shaves off one-fourth of a percent, maybe 2 percent – but it’s a little bit off the top.”
But if it’s true that existing responses are insufficient to address the problem, it may also be true that we’re at risk of an overreaction. If concrete links between online fraud and terrorist attacks begin emerging, governments could decide that the Internet needs more oversight and create new regulatory structures. “The ISPs could solve most of the spam and phishing problems if made to do so by the FCC,” notes Clarke. Even if the Bali bomber’s writings don’t create such a reaction, something else might. Even without the discovery of a strong connection between online fraud and terrorism, the trigger could be an actual act of “cyberterrorism” – the long-feared use of the Internet to wage digital attacks against targets like city power grids and air traffic control or communications systems. Or it could be some online display of homicide so appalling that it spawns a new drive for online decency, one countenanced by a newly conservative Supreme Court. Terrorism aside, the trigger could be a pure business decision, one aimed at making the Internet more transparent and more secure.
Zittrain concurs with Neumann but also predicts an impending overreaction. Terrorism or no terrorism, he sees a convergence of security, legal, and business trends that will force the Internet to change, and not necessarily for the better. “Collectively speaking, there are going to be technological changes to how the Internet functions – driven either by the law or by collective action. If you look at what they are doing about spam, it has this shape to it,” Zittrain says. And while technological change might improve online security, he says, “it will make the Internet less flexible. If it’s no longer possible for two guys in a garage to write and distribute killer-app code without clearing it first with entrenched interests, we stand to lose the very processes that gave us the Web browser, instant messaging, Linux, and e-mail.”
A concerted push toward tighter controls is not yet evident. But if extremely violent content or terrorist use of the Internet might someday spur such a push, a chance for preëmptive action may lie with ISPs and Web hosting companies. Their efforts need not be limited to fighting spam and fraud. With respect to the content they publish, Web hosting companies could act more like their older cousins, the television broadcasters and newspaper and magazine editors, and exercise a little editorial judgment, simply by enforcing existing terms of service.
Is Web content already subject to any such editorial judgment? Generally not, but sometimes, the hopeful eye can discern what appear to be its consequences. Consider the mysterious inconsistency among the results returned when you enter the word “beheading” into the major search engines. On Google and MSN, the top returns are a mixed bag of links to responsible news accounts, historical information, and ghoulish sites that offer raw video with teasers like “World of Death, Iraq beheading videos, death photos, suicides and crime scenes.” Clearly, such results are the product of algorithms geared to finding the most popular, relevant, and well-linked sites.
But enter the same search term at Yahoo, and the top returns are profiles of the U.S. and British victims of beheading in Iraq. The first 10 results include links to biographies of Eugene Armstrong, Jack Hensley, Kenneth Bigley, Nicholas Berg, Paul Johnson, and Daniel Pearl, as well as to memorial websites. You have to load the second page of search results to find a link to Ogrish.com. Is this oddly tactful ordering the aberrant result of an algorithm as pitiless as the ones that churn up gore links elsewhere? Or is Yahoo, perhaps in a nod to the victims’ memories and their families’ feelings, making an exception of the words “behead” and “beheading,” treating them differently than it does thematically comparable words like “killing” and “stabbing?”
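For illustration only, a keyword-sensitive exception of the kind speculated about here might look like the sketch below. The watch-list, the demoted domains, and the reranking rule are all invented for the example; nothing here is evidence of how Yahoo, or any search engine, actually orders its results.

```python
# A purely hypothetical sketch of what a keyword-based editorial exception
# could look like if a search engine implemented one in code; there is no
# claim that any real ranking system works this way.
SENSITIVE_TERMS = {"behead", "beheading"}   # assumed editorial watch-list
DEMOTED_DOMAINS = {"ogrish.com"}            # assumed list of gore sites

def rerank(query, results):
    """results: list of (url, relevance_score) pairs, highest score first.
    For sensitive queries, push results from demoted domains to the end
    while preserving the algorithmic order of everything else."""
    if not any(term in query.lower() for term in SENSITIVE_TERMS):
        return results
    kept = [r for r in results if not any(d in r[0] for d in DEMOTED_DOMAINS)]
    demoted = [r for r in results if any(d in r[0] for d in DEMOTED_DOMAINS)]
    return kept + demoted

# Toy usage: the gore link drops to the bottom; news and memorial pages rise.
results = [("http://ogrish.com/iraq-videos", 0.97),
           ("http://news.example.com/berg-profile", 0.95),
           ("http://memorial.example.org/hensley", 0.90)]
print(rerank("beheading video", results))
```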
Yahoo’s Osako did not reply to questions about this search-return oddity; certainly, a technological explanation cannot be excluded. But it’s clear that such questions are very sensitive for an industry that has, to date, enjoyed little intervention or regulation. In its response to complaints, says Richard Clarke, “the industry is very willing to coöperate and be good citizens in order to stave off regulation.” Whether it goes further and adopts a stricter editorial posture, he adds, “is a decision for the ISP [and Web hosting company] to make as a matter of good taste and as a matter of supporting the U.S. in the global war on terror.” If such decisions evolve into the industrywide assumption of a more journalistic role, they could, in the end, be the surest route to a more responsible medium – one that is less easy to exploit and not so vulnerable to a clampdown.
David Talbot is Technology Review’s chief correspondent.