Just a month ago, the EU outlined its new AI and data governance strategy, which, among other things, advocated for data sovereignty and called for European AI to be trained only on European data to ensure its quality and ethical sourcing. The guidelines were lauded as a show of leadership in protecting data privacy and facilitating trustworthy AI. But according to the Financial Times, the coronavirus pandemic is now forcing regulators to rethink them.
Such restrictions, if implemented as originally envisioned, would risk slowing the pace of progress as scientists rush to develop vaccines and algorithms in the fight against the disease. Though regulators have not yet walked back their original recommendations, they have pushed back the deadline for implementing legislation that would have set the pace for regulatory bodies around the world.
The trade-offs that EU regulators are facing mirror the tug-of-war between data privacy and public health that many governments and companies are now grappling with. Rapid access to data, wherever it resides, is crucial to fighting the outbreak. But the loosening of data privacy measures has also been controversial. “You might as well ask yourself, has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” wrote Hu Yong, a well-known critic in China and a professor at Peking University’s School of Journalism and Communication.
While this tension has always been present, the sheer urgency of containing an exponentially spreading virus has thrown it into sharp relief. Many countries that have successfully contained their outbreaks, including China, South Korea, and Singapore, have used aggressive surveillance measures to track and isolate infected individuals. Other countries that have been gun-shy about similar measures, like Italy and Spain, now face devastating caseloads that have overwhelmed their health-care systems. The US, traditionally one of the most privacy-preserving governments, is now buckling under the pressure: the White House has begun talks with Google and Facebook about tapping into their data on users’ movements.
Speedy access to high-quality data matters beyond surveillance alone. Machine-learning forecasters working to predict the trajectory of the virus, for example, also rely on as much rich, accurate data as possible. Currently, one of the leading forecasting labs in the US is pooling web-browsing behavior and social-media activity to help the government ramp up its testing capacity and determine appropriate interventions. But the lab is also seeking real-time feeds of retail behavior as well as anonymized health records, which it says would greatly improve its predictions.
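To make that concrete, here is a minimal, hypothetical Python sketch of this kind of multi-signal forecasting: a simple regression trained on stand-in “search interest” and “retail mobility” signals to predict the next day’s case counts. The data is synthetic and the model deliberately simplistic; none of this reflects the lab’s actual pipeline.

```python
# Hypothetical multi-signal forecast: synthetic stand-ins for
# web-search and retail-mobility signals, not real surveillance data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
days = 60

# Synthetic proxy signals for 60 days.
search_trend = rng.random(days)  # e.g., normalized search interest
mobility = rng.random(days)      # e.g., normalized retail foot traffic
cases = 100 * search_trend + 50 * mobility + rng.normal(0, 5, days)

# Each day's signals become features for the next day's case count.
X = np.column_stack([search_trend[:-1], mobility[:-1]])
y = cases[1:]

model = Ridge(alpha=1.0).fit(X, y)

# Forecast tomorrow from the most recent day's signals.
x_today = np.array([[search_trend[-1], mobility[-1]]])
print("Forecast for tomorrow:", model.predict(x_today)[0])
```

In this framing, richer feeds such as anonymized health records would simply add columns to the feature matrix, which is why the lab says such data would sharpen its predictions.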
Regulators and privacy advocates worry about what kind of precedent this could set. The World Economic Forum (WEF) released a statement last week urging firms not to lose sight of proper AI oversight simply to gain greater speed. “We need to keep in mind that the big ethical challenges around privacy, accountability, bias, and transparency of artificial intelligence remain,” said Kay Firth-Butterfield, WEF’s head of artificial intelligence and machine learning.
In his blog post, Hu recommends adhering to three principles to strike the right balance between privacy and public health. First, lawmakers must treat intrusions on privacy in the public interest as exceptions rather than the norm, and those exceptions must be justified under human rights law. Second, lawmakers should define the basic civil-rights guarantees that must hold even when privacy is weakened. “It is important to realize that, although privacy may sometimes have to be restricted for the sake of the wider public interest, privacy itself is a vital public interest,” he writes. Therefore, individuals shouldn’t have to “yield to the public interest in an unlimited fashion.” And finally, lawmakers should heavily restrict how any information collected during a crisis can be used: it should not be diverted to other purposes, and it should be collected, stored, and processed with stringent security.
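As a rough illustration of that last principle, here is a hypothetical Python sketch of purpose-bound data handling: identifiers are pseudonymized with a one-way hash, only coarse location is kept, and every record carries an explicit expiry. The field names and retention window are illustrative assumptions, not drawn from any real contact-tracing system.

```python
# Hypothetical sketch of purpose-limited, security-conscious collection.
import hashlib
import os
from datetime import datetime, timedelta, timezone

SALT = os.urandom(16)            # per-deployment salt, kept apart from the data
RETENTION = timedelta(days=30)   # assumed purpose-bound retention window

def pseudonymize(identifier: str) -> str:
    """One-way hash so records can be linked but not traced back to a person."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

def make_record(identifier: str, location_zone: str) -> dict:
    """Store only a pseudonym, a coarse zone, and an explicit expiry."""
    return {
        "subject": pseudonymize(identifier),
        "zone": location_zone,  # coarse zone rather than GPS coordinates
        "expires": datetime.now(timezone.utc) + RETENTION,
    }

print(make_record("+1-555-0100", "zone-42"))
```

Expiring records by design, rather than by later policy, is one way to keep crisis-era data from being quietly diverted to other purposes.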
“Sometimes we think that technology will inevitably erode privacy,” Hu writes. But ultimately, he adds, it is humans, not technology, who make that choice: “The loss of privacy is not inevitable, just as its reconstruction is far from certain.”