The U.S. has so far been relatively permissive toward AI technologies, and we should keep it that way. That permissiveness is a big reason so much innovation happens here rather than in the more restrictive nations of Europe.
The main reason the government hasn't hampered the industry with regulation is that there's no overbearing federal agency dedicated strictly to AI. Instead, we have a patchwork of federal and state authorities scrutinizing these technologies. The Federal Trade Commission and the National Highway Traffic Safety Administration, for example, recently co-hosted a workshop on how to oversee automated-car technologies. The Department of Homeland Security has put out reports on potential AI threats to critical infrastructure.
The patchwork approach is imperfect, but it has one big benefit: it constrains the temptation to regulate excessively, since each regulator can apply only the policies that fall within its own area of expertise.
But now a growing chorus of academics and commentators wants to kill that approach, calling instead for a whole new regulatory body to control AI technologies. Law professor Frank Pasquale of the University of Maryland has called for a "Federal Search Commission," akin to the FCC, to oversee Internet queries. Matthew Scherer, an attorney in Portland, Oregon, advocates a specialized federal AI agency. And law professor Ryan Calo of the University of Washington imagines a "Federal Robotics Commission."
Such proposals rest on the "precautionary principle": the notion that an innovation must be slowed or halted altogether if a regulator determines that its risks are too great for society to bear.
Of course, as regulatory scholars have long pointed out, the risk analyses that regulators employ can be inadequate. Imagined or exaggerated risks get weighted far more heavily than real benefits, and society is robbed of life-enriching (and in many cases life-saving) developments. Regulators also rarely resist the urge to expand their own authority and budgets, regardless of the benefits or costs to society: give an agency the power to regulate, and regulate it will. And once a federal agency is created, it is incredibly difficult to make it go away.
As AI grows to touch more and more domains of life, a new federal AI agency could gain worryingly broad sway over American society. Policymakers would need the patience and humility to distinguish one AI application from another. The social risks posed by AI assistants, for example, are different from those posed by predictive policing software and "smart weapons." But an overly zealous regulatory regime might erroneously lump such applications together, stifling beneficial technologies while diverting resources from the big problems that really matter.
The threat that precautionary regulation poses to our future well-being, meanwhile, is considerable. AI technologies are poised to deliver life-saving advances in health and transportation while modernizing manufacturing and trade; the projected economic benefits run into the trillions of dollars. And on a personal level, AI promises to make our lives simpler and more comfortable.
Policymakers who wish to champion growth should embrace a stance of "permissionless innovation." Humility, collaboration, and voluntary solutions should trump the outdated "command-and-control" model of the last century. The age of smart machines needs a new age of smart policy.
Andrea O'Sullivan is a program manager in the Technology Policy Program at the Mercatus Center, a free-market-oriented think tank at George Mason University.