Intelligent Machines
Giving algorithms a sense of uncertainty could make them more ethical
Algorithms are best at pursuing a single mathematical objective—but humans often want multiple incompatible things.
Algorithms are increasingly being used to make ethical decisions. Perhaps the best-known example is a high-tech take on the ethical dilemma known as the trolley problem: if a self-driving car cannot stop itself from killing one of two pedestrians, how should the car’s control software choose who lives and who dies?
Of course, this conundrum isn’t a very realistic depiction of how self-driving cars behave. But many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs. Assessment tools currently used in the criminal justice system must weigh risks to society against harms to individual defendants; autonomous weapons will need to weigh the lives of soldiers against those of civilians.
The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.
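To see why a single objective forces the issue, consider a toy example. This is a hypothetical sketch in Python; the scenario, weights, and numbers are invented for illustration and come from no real system.

```python
# Hypothetical sketch of a single-objective algorithm. The scenario and
# weights are invented for illustration; they come from no real system.

def single_objective(soldiers_saved: int, civilian_deaths: int,
                     civilian_weight: float = 1.0) -> float:
    """Collapse two competing goals into one number to maximize.

    Choosing civilian_weight is itself an ethical judgment that the
    designer has quietly baked into the math.
    """
    return soldiers_saved - civilian_weight * civilian_deaths

plan_a = single_objective(soldiers_saved=10, civilian_deaths=2)  # 8.0
plan_b = single_objective(soldiers_saved=6, civilian_deaths=0)   # 6.0
# With civilian_weight=1.0 the optimizer picks plan A; raise the weight
# to 3.0 and it picks plan B. The "right" answer lives entirely in that weight.
```

Collapsing competing goals into one number doesn’t remove the ethical judgment; it just hides it in the weight.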
“We as humans want multiple incompatible things,” says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue. “There are many high-stakes situations where it’s actually inappropriate—perhaps dangerous—to program in a single objective function that tries to describe your ethics.”
These solutionless dilemmas aren’t specific to algorithms. Ethicists have studied them for decades and refer to them as impossibility theorems. So when Eckersley first recognized their application to artificial intelligence, he borrowed an idea directly from the field of ethics to propose a solution: what if we built uncertainty into our algorithms?
“We make decisions as human beings in quite uncertain ways a lot of the time,” he says. “Our behavior as moral beings is full of uncertainty. But when we try to take that ethical behavior and apply it in AI, it tends to get concretized and made more precise.” Instead, Eckersley proposes, why not explicitly design our algorithms to be uncertain about the right thing to do?
Eckersley puts forth two possible techniques to express this idea mathematically. He begins with the premise that algorithms are typically programmed with clear rules about human preferences. We’d have to tell the algorithm, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers, even if we weren’t actually sure or didn’t think that should always be the case. The algorithm’s design leaves little room for uncertainty.
The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn’t specify a preference between friendly soldiers and friendly civilians.
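In code, a partial ordering might look something like this minimal Python sketch; the labels and the `prefers` helper are hypothetical, used here only to make the idea concrete.

```python
# Minimal sketch of a partial ordering of outcomes. The labels and the
# prefers() helper are hypothetical; only the pairs the designer is sure
# about are encoded, and everything else is deliberately left incomparable.

PREFERRED_OVER = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
    # Note: no pair relating friendly_soldier and friendly_civilian.
}

def prefers(a: str, b: str):
    """Return True/False if a preference is specified, None if incomparable."""
    if (a, b) in PREFERRED_OVER:
        return True
    if (b, a) in PREFERRED_OVER:
        return False
    return None  # the algorithm must treat these outcomes as incomparable

print(prefers("friendly_civilian", "enemy_soldier"))     # True
print(prefers("friendly_soldier", "friendly_civilian"))  # None: left unresolved
```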
In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers.
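An uncertain ordering can be written as a probability distribution over complete rankings, as in this hedged sketch. The 75/25 split mirrors the example above; the function names and data structures are invented.

```python
# Sketch of an uncertain ordering: a probability distribution over complete
# rankings. The 0.75/0.25 split mirrors the example above; everything else
# (names, helper functions) is invented for illustration.

import random

ORDERINGS = [
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

def probability_a_over_b(a: str, b: str) -> float:
    """Probability that outcome a is ranked above outcome b."""
    return sum(p for p, order in ORDERINGS if order.index(a) < order.index(b))

def sample_ordering() -> list:
    """Draw one complete ranking with probability proportional to its weight."""
    weights = [p for p, _ in ORDERINGS]
    orders = [order for _, order in ORDERINGS]
    return random.choices(orders, weights=weights, k=1)[0]

print(probability_a_over_b("friendly_soldier", "friendly_civilian"))  # 0.75
```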
The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. “Have the system be explicitly unsure,” he says, “and hand the dilemma back to the humans.”
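One way to produce such a menu is to keep every option that isn’t beaten on all objectives at once, a standard multi-objective trick often called a Pareto front. The sketch below uses invented treatment data and is not drawn from any real decision system.

```python
# Hedged sketch of handing the dilemma back to humans: score each option on
# every objective and return the ones no other option beats across the board.
# The treatments and numbers are invented; this is not a real clinical tool.

from dataclasses import dataclass

@dataclass
class Treatment:
    name: str
    expected_lifespan_years: float  # higher is better
    suffering_score: float          # lower is better
    cost_usd: float                 # lower is better

TREATMENTS = [
    Treatment("aggressive therapy", 6.0, 8.0, 90_000),
    Treatment("standard therapy",   4.5, 4.0, 30_000),
    Treatment("palliative care",    2.0, 1.0,  8_000),
]

def dominates(a: Treatment, b: Treatment) -> bool:
    """True if a is at least as good as b on every objective and better on one."""
    at_least_as_good = (a.expected_lifespan_years >= b.expected_lifespan_years
                        and a.suffering_score <= b.suffering_score
                        and a.cost_usd <= b.cost_usd)
    strictly_better = (a.expected_lifespan_years > b.expected_lifespan_years
                       or a.suffering_score < b.suffering_score
                       or a.cost_usd < b.cost_usd)
    return at_least_as_good and strictly_better

# The "menu": every treatment that no other treatment dominates.
menu = [t for t in TREATMENTS
        if not any(dominates(other, t) for other in TREATMENTS if other is not t)]

for option in menu:  # each entry is a genuine trade-off; a human chooses
    print(option)
```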
Carla Gomes, a professor of computer science at Cornell University, has experimented with similar techniques in her work. In one project, she’s been developing an automated system to evaluate the impact of new hydroelectric dam projects in the Amazon River basin. The dams provide a source of clean energy. But they also profoundly alter sections of the river and disrupt wildlife ecosystems.
“This is a completely different scenario from autonomous cars or other [commonly referenced ethical dilemmas], but it’s another setting where these problems are real,” she says. “There are two conflicting objectives, so what should you do?”
“The overall problem is very complex,” she adds. “It will take a body of research to address all issues, but Peter’s approach is making an important step in the right direction.”
It’s an issue that will only grow with our reliance on algorithmic systems. “More and more, complicated systems require AI to be in charge,” says Roman V. Yampolskiy, an associate professor of computer science at the University of Louisville. “No single person can understand the complexity of, you know, the whole stock market or military response systems. So we’ll have no choice but to give up some of our control to machines.”
An earlier version of this story originally appeared in our AI newsletter, The Algorithm.