Business Report

Why We’re So Vulnerable

An expert in U.S. national cybersecurity research and policy says the next generation of technology must have security built in from the very start.

In an age of continuing electronic breaches and rising geopolitical tensions over cyber-espionage, the White House is working on a national cybersecurity strategy that’s expected in early 2016. Helping to draft that strategy is Greg Shannon, chief scientist at Carnegie Mellon University’s Software Engineering Institute, who is on leave from that post to serve as assistant director for cybersecurity strategy at the White House Office of Science and Technology Policy.

In an interview with MIT Technology Review senior writer David Talbot, Shannon explained that dealing with today’s frequent breaches and espionage threats—which have affected federal agencies as well as businesses and individuals—requires fundamentally new approaches to creating all kinds of software. Fixing the infrastructure for good may take two decades.

Cybersecurity has long been a serious worry. Have recent events really changed the game?

If you just consider the attack on Sony—it was a watershed event. The scale, scope, and cost were enormous. And it revealed how tightly cybersecurity and our economy are interrelated—and that the health of the economy is now potentially at stake.


Why are huge breaches like these happening? Are the billions of dollars spent on new security technologies in recent years not working?

It’s more that the incentives to wage malicious cyber activity keep skyrocketing. In the early years of the Internet, the improved efficiencies from networked IT infrastructure far outweighed the security risks that infrastructure created. Threats were always there, but it was acceptable to deal with them by patching problems as they surfaced. Today what’s available online, and its value, keep increasing exponentially, and so do the incentives to exploit systems and steal data. What we are seeing are the results; absolutely, the threats and the attacks are bigger than they’ve ever been. And this hasn’t been foremost in the mind-set of most companies producing software infrastructure or Internet services.

What is the underlying technology problem?

The answer might sound abstract and dry, but it has to do with efficacy and efficiency. On efficacy, how do you know that installing a new security technology is better than doing nothing? You often don’t. And on efficiency, the usual approach is that you fix a newly discovered problem so the adversary doesn’t use that method anymore. But at the end of the day this doesn’t achieve much, because it doesn’t create a general, systemic solution. It’s not efficient.

We need to restructure how we build software, and develop security systems that have evidence that they actually add value. This requires rigor in how the billions of lines of code that run our networked infrastructure are actually written and updated.

The only places where software development is truly rigorous are places like NASA, where they are building code that must work for years and from millions of miles away. They use highly formal methods, well-controlled tools, and special engineering to make absolutely sure that the software is reliable and bug-free.
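A toy Python sketch conveys the flavor of that rigor: explicit preconditions and postconditions checked on every call. Genuine formal methods go much further, proving such properties statically for all possible inputs; the function and numbers below are hypothetical, not NASA’s actual tooling.

```python
# Toy illustration of "design by contract" rigor: every call checks
# explicit pre- and postconditions. Formal methods prove these
# properties ahead of time instead of checking them at runtime.

def thruster_burn_duration(delta_v: float, acceleration: float) -> float:
    """Return burn time in seconds for a desired velocity change."""
    # Preconditions: callers must supply physically meaningful values.
    assert delta_v >= 0.0, "delta_v must be non-negative"
    assert acceleration > 0.0, "acceleration must be positive"

    duration = delta_v / acceleration

    # Postcondition: the result must itself be physically meaningful.
    assert duration >= 0.0, "computed duration must be non-negative"
    return duration

print(thruster_burn_duration(12.0, 3.0))  # 4.0 seconds
```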

How can we make all IT infrastructure as great as the code running a Martian probe?

Many colleagues and I are devoted to this question. First, it’s important to understand that a number of nontechnical issues keep everyday software from being anywhere near that good. Software companies face no regulations or consequences when problems emerge down the road, with the exception of certain high-priority domains like nuclear power plants or air traffic control.

So on the policy side you need to consider incentives for everybody to write better code, whether through liability, regulations, or market mechanisms. And on the technology side you need to make rigorous software development methods, like the ones NASA uses, far more efficient and easier for everyone to use. Congress, in the Cybersecurity Enhancement Act of 2014, asked for a federal cybersecurity R&D strategic plan, and that plan is being drafted for release by early 2016.

And while it will always be true that malicious insiders or human error can create problems, great software can to a large extent deal with that, too, by creating clear access rules and sending alerts when anything anomalous happens.
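A minimal sketch of those two mechanisms, with hypothetical resources, roles, and thresholds: an explicit access-rule table, plus an alert when one user’s behavior looks anomalous.

```python
# Sketch of explicit access rules and anomaly alerts. All names and
# thresholds are hypothetical, for illustration only.
import logging
from collections import Counter

logging.basicConfig(level=logging.WARNING)

# Explicit access rules: which roles may touch which resources.
ACCESS_RULES = {
    "payroll_db": {"hr", "finance"},
    "source_repo": {"engineering"},
}

access_counts = Counter()   # accesses per user in the current window
ANOMALY_THRESHOLD = 100     # hypothetical per-window limit

def authorize(user: str, role: str, resource: str) -> bool:
    """Permit access only if the role is on the rule list; alert when
    a single user's access volume looks anomalous."""
    access_counts[user] += 1
    if access_counts[user] > ANOMALY_THRESHOLD:
        logging.warning("anomaly: %s exceeded %d accesses",
                        user, ANOMALY_THRESHOLD)
    allowed = role in ACCESS_RULES.get(resource, set())
    if not allowed:
        logging.warning("denied: %s (%s) tried to access %s",
                        user, role, resource)
    return allowed

authorize("alice", "hr", "payroll_db")       # permitted
authorize("mallory", "sales", "payroll_db")  # denied, alert logged
```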

Meanwhile, what can companies do to protect themselves?

Every company, from the smallest to the largest, should follow best practices, taking into account its particular assets, threats, and cybersecurity capabilities. To be sure, many systems are inherently weak. Most systems have millions of lines of code, and the typical defect rate is about one bug per 1,000 lines. Even if only one out of a hundred of those bugs creates a security vulnerability, that is a density you can’t really keep up with. But if companies follow best practices, they can become much better protected, and eventually avoid more [hacks like the one on] Sony.
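The arithmetic behind that claim is easy to work through; here it is for a hypothetical 10-million-line codebase, using Shannon’s round numbers.

```python
# Bug-density arithmetic for a hypothetical large codebase.
lines_of_code = 10_000_000   # illustrative size, not a quoted figure
bugs_per_kloc = 1            # "one bug per 1,000 lines of code"
vuln_fraction = 1 / 100      # "one out of a hundred" bugs is exploitable

bugs = lines_of_code // 1_000 * bugs_per_kloc  # 10,000 expected bugs
vulnerabilities = bugs * vuln_fraction         # ~100 vulnerabilities

print(f"{bugs:,} expected bugs, ~{vulnerabilities:,.0f} vulnerabilities")
```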

We aren’t getting NASA-level software, but is anyone doing it right?

One simple measure that is critically necessary is that products need a way to receive regular and secure software updates. One can argue that companies such as Tesla, Google, and Apple, and to a large extent Microsoft, are doing that. Google Chrome updates happen in the background; it doesn’t even ask you for permission anymore.

The Apple iOS infrastructure does a good job of sparing everyday app developers from worrying about many, though not all, security issues. With Tesla, updates can happen when you charge the car.
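A simplified sketch of the “verify before you apply” step at the heart of secure updates, using only Python’s standard library. Real update systems like the ones above rely on public-key code signing over trusted channels; the file contents and digests here are stand-ins.

```python
# Verify a downloaded update against a trusted digest before install.
import hashlib
import os
import tempfile

def verify_update(artifact_path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact matches the digest published
    in a trusted, signed manifest (signing itself is omitted here)."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Demo: a throwaway file stands in for a downloaded update image.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"new firmware image")
    path = tmp.name

expected = hashlib.sha256(b"new firmware image").hexdigest()
print(verify_update(path, expected))   # True: safe to apply
print(verify_update(path, "0" * 64))   # False: refuse to install
os.remove(path)
```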

What’s the biggest opportunity right now to shape a more secure future?

The emergence of an Internet of things—interconnecting billions of devices—provides an opportunity to do things correctly from the start. Networked devices in cars and homes, and wearable devices, could introduce a multitude of new attack vectors, but if we get things right with these devices and cloud-based technologies, we can make sure the next generation of technology will have security built in.

How long until the efforts you’ve been talking about will make our networked infrastructure able to withstand the heightened incentives to attack it?

For the most critical components in areas like the electric grid and large industrial systems, five to 10 years is feasible. To be pervasive it will take 20 or more years.