A View from Vivek Wadhwa
Laws and Ethics Can’t Keep Pace with Technology
Codes we live by, laws we follow, and computers that move too fast to care.
Employers can get into legal trouble if they ask interviewees about their religion, sexual preference, or political affiliation. Yet they can use social media to filter out job applicants based on their beliefs, looks, and habits. Laws forbid lenders from discriminating on the basis of race, gender, and sexuality. Yet they can refuse a loan because an applicant’s Facebook friends have bad payment histories, because a work history on LinkedIn doesn’t match a bio on Facebook, or because a computer algorithm judges the applicant to be socially undesirable.
These regulatory gaps exist because laws have not kept up with advances in technology. The gaps are getting wider as technology advances ever more rapidly. And it’s not just in employment and lending—the same is happening in every domain that technology touches.
“That is how it must be, because law is, at its best and most legitimate—in the words of Gandhi—‘codified ethics,’ ” says Preeta Bansal, a former general counsel in the White House. She explains that effective laws and standards of ethics are guidelines accepted by members of a society, and that these require the development of a social consensus.
Take the development of copyright laws, which followed the creation of the printing press. When first introduced in the 1400s, the printing press was disruptive to political and religious elites because it allowed knowledge to spread and experiments to be shared. It helped spur the decline of the Holy Roman Empire, through the spread of Protestant writings; the rise of nationalism and nation-states, due to rising cultural self-awareness; and eventually the Renaissance. Debates about the ownership of ideas raged for about 300 years before Great Britain enacted the first copyright statute, the Statute of Anne, in 1710.
Similarly, the steam engine, the mass production of steel, and the building of railroads in the 18th and 19th centuries led to the development of intangible property rights and contract law. These grew out of cases involving rights over track, tort liability for injuries to cattle and workers, and eminent domain (the power of the state to forcibly acquire land for public use).
Our laws and ethical practices have evolved over centuries. Today, technology is on an exponential curve and is touching practically everyone—everywhere. Changes of a magnitude that once took centuries now happen in decades, sometimes in years. Not long ago, Facebook was a dorm-room dating site, mobile phones were for the ultra-rich, drones were multimillion-dollar war machines, and supercomputers were for secret government research. Today, hobbyists can build drones, and poor villagers in India access Facebook accounts on smartphones with more computing power than the Cray-2—a supercomputer that in 1985 cost $17.5 million and weighed 2,500 kilograms. Sequencing a full human genome, which cost $100 million in 2002, can today be done for $1,000—and might cost less than a cup of coffee by 2020.
We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media. Consider the question of privacy. Our laws date back to the late 19th century, when newspapers first started publishing personal information and Boston lawyer Samuel Warren objected to social gossip published about his family. This led Warren and his law partner, future U.S. Supreme Court Justice Louis Brandeis, to write the 1890 law review article “The Right to Privacy.” Their argument that there exists a right to be let alone, just as there is a right to private property, made it arguably the most famous law review article ever written and laid the foundation of American privacy law.
The gaps in privacy laws have grown exponentially since then.
There is a public outcry today—as there should be—about NSA surveillance, but the breadth of that surveillance pales in comparison to the data that Google, Apple, Facebook, and legions of app developers are collecting. Our smartphones track our movements and habits. Our Web searches reveal our thoughts. And with wearable devices and medical sensors now connecting to our smartphones, information about our physiology and health is flowing into corporate databases as well. Where do we draw the line on what is legal—and ethical?
Then there is our DNA. Genome testing will soon become as common as blood tests, and it won’t be easy to protect our genomic data. The company 23andMe ran afoul of regulators because it was telling people what diseases they might be predisposed to. The issue was the accuracy of the analysis and what people might do with this information. The bigger question, however, is what businesses do with genomic data. Genetic-testing companies have included contractual clauses that let them use and sell their clients’ genetic information to third parties.
The Genetic Information Nondiscrimination Act of 2008 prohibits the use of genetic information in health insurance and employment. But it provides no protection from discrimination in long-term-care, disability, and life insurance. And it places few limits on commercial use. There are no laws to stop companies from using aggregated genomic data in the same way that lending companies and employers use social-media data, or to prevent marketers from targeting ads at people with genetic defects.
Today, technology can read out your genome from a few stray cells in less than a day. But we have yet to come to a social consensus on how private medical data can be collected and shared. For the most part, we don’t even know who owns an individual’s DNA information. In the U.S., some states have begun passing laws declaring that your DNA data is your property.
We will have similar debates about self-driving cars, drones, and robots. These too will record everything we do and will raise new legal and ethical issues. What happens when a self-driving car has a software failure and hits a pedestrian, or a drone’s camera happens to catch someone skinny-dipping in a pool or taking a shower, or a robot kills a human in self-defense?
Thomas Jefferson said in 1816, “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”
The problem is that the human mind itself can’t keep pace with the advances that computers are enabling.
Vivek Wadhwa is a fellow at the Arthur & Toni Rembe Rock Center for Corporate Governance at Stanford University and holds appointments with Singularity University and Duke’s Pratt School of Engineering.