
Artificial Intelligence / Machine Learning

How Apple personalizes Siri without hoovering up your data

The tech giant is using privacy-preserving machine learning to improve its voice assistant while keeping your data on your phone.

Dec 11, 2019
A woman uses her voice assistant on her phone.
kyonntra/Getty Images

If you’ve got an iPhone, you may have noticed a change in Siri’s behavior in the past year. The voice assistant on the phone will “wake up” when you say “Hey Siri,” but not when the same phrase comes from your friends or family.

Apple's reason for the change was sensible: it wanted to keep all the iPhones in a room from responding when just one person utters the wake phrase. You might think Apple would need to collect a lot of your audio data to pull this off. Surprisingly, it doesn't.

Instead, it relies primarily on a technique called federated learning, Apple’s head of privacy, Julien Freudiger, told an audience at the Neural Information Processing Systems (NeurIPS) conference on December 8. Federated learning is a privacy-preserving machine-learning method first introduced by Google in 2017. It allows Apple to train separate copies of a speaker-recognition model across all its users’ devices, using only the audio data available locally. Each device then sends just its updated model back to a central server, where the updates are combined into a master model. In this way, raw audio of users’ Siri requests never leaves their iPhones and iPads, yet the assistant continuously gets better at identifying the right speaker.
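To make the mechanics concrete, here is a minimal Python sketch of one round of federated averaging, with a plain weight vector standing in for a real speaker-recognition network. Everything in it, from the function names to the single gradient step, is an illustrative assumption, not Apple's actual implementation:

```python
import numpy as np

def local_update(global_weights, local_audio_features, local_labels, lr=0.01):
    """Train a copy of the global model on one device's data.

    Only the updated weights leave the device; the raw audio
    features and labels never do.
    """
    weights = global_weights.copy()
    for x, y in zip(local_audio_features, local_labels):
        # One step of gradient descent on a squared-error loss (illustrative).
        pred = weights @ x
        grad = (pred - y) * x
        weights -= lr * grad
    return weights

def federated_average(device_weights):
    """Server-side step: combine per-device models into a master model."""
    return np.mean(device_weights, axis=0)

# Simulate one round of federated learning across three devices.
rng = np.random.default_rng(0)
global_weights = rng.normal(size=4)

device_models = []
for _ in range(3):
    x = rng.normal(size=(10, 4))   # local audio features (stay on device)
    y = rng.normal(size=10)        # local labels (stay on device)
    device_models.append(local_update(global_weights, x, y))

global_weights = federated_average(device_models)  # only model updates were shared
```

The privacy property is visible in the data flow: the arrays `x` and `y` are only ever read inside `local_update`, while the server sees nothing but weights.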

In addition to federated learning, Apple also uses a technique called differential privacy to add a further layer of protection. It injects a small amount of noise into the raw data before it is fed into a local machine-learning model. This extra step makes it exceedingly difficult for malicious actors to reverse-engineer the original audio files from the trained model.
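As a rough sketch of that extra layer, the snippet below applies the standard Laplace mechanism to a feature vector before it reaches a local model. The `sensitivity` and `epsilon` parameters are standard differential-privacy knobs, but their values here, and the exact point at which Apple injects noise, are assumptions for illustration:

```python
import numpy as np

def privatize(features, sensitivity=1.0, epsilon=0.5, rng=None):
    """Add Laplace noise to raw features before local training.

    The noise makes it hard to reverse-engineer any single input
    from a model trained on the noised data.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return features + rng.laplace(loc=0.0, scale=scale, size=features.shape)

# Example: noise one audio-feature vector before it ever touches the model.
rng = np.random.default_rng(1)
raw_features = rng.normal(size=4)
noisy_features = privatize(raw_features, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Lower values of `epsilon` mean more noise and stronger privacy at some cost in model accuracy, which is the trade-off differential privacy makes explicit.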

Though Apple has been using differential privacy since 2017, it has combined the technique with federated learning only as of iOS 13, which rolled out to the public in September of this year. Beyond personalizing Siri, both techniques are now being used for a few other features, including QuickType (Apple’s personalized keyboard) and Found In Apps, which scans your calendar and mail apps for the names of texters and callers whose numbers aren’t in your phone. Freudiger said the company plans to roll out the privacy methods to more apps and features soon.

In the past year, federated learning has become increasingly popular within the AI research community as concerns about data privacy have grown. In March, Google released a set of tools to make it easier for developers to implement their own federated-learning models. Among many other uses, researchers hope the technique will help overcome privacy challenges in applying AI to health care; companies including Owkin, Doc.ai, and Nvidia are interested in using it this way.

While the technique is still relatively new and needs further refinement, Apple’s adoption offers another case study in how it can be applied at scale. It also marks a fundamental shift away from the trade-off between privacy and utility that the tech industry has traditionally assumed: it’s now possible to achieve both. Let’s hope other companies quickly catch on.

To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It's free.