Hackers have demonstrated some worrisome ways to manipulate and confuse the various systems on a Tesla Model S. Their most dramatic feat: sending the car careening into the oncoming traffic lane by placing a series of small stickers on the road.
Attack vector: This is an example of an “adversarial attack,” a way of manipulating a machine-learning model by feeding it a specially crafted input. Adversarial attacks could become more common as machine learning is used more widely, especially in areas like network security.
Blurred lines: Tesla’s Autopilot is vulnerable because it recognizes lanes using computer vision. In other words, the system relies on camera data, analyzed by a neural network, to keep the vehicle centered in its lane.
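To make the idea of a “specially crafted input” concrete, here is a minimal sketch of one well-known adversarial technique, the fast gradient sign method (FGSM), applied to a small stand-in image classifier in PyTorch. The toy model, the random “camera frame,” and the epsilon value are illustrative placeholders; this is not Tesla’s software or the researchers’ specific method.

```python
# Sketch of the fast gradient sign method (FGSM): nudge each input pixel
# in the direction that increases the model's loss, so the image looks
# unchanged to a human but pushes the model toward a wrong prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(            # stand-in image classifier, not a real lane detector
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the loss gradient.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

frame = torch.rand(1, 3, 64, 64)   # placeholder for a camera frame
label = torch.tensor([3])          # placeholder ground-truth class
adversarial = fgsm_attack(frame, label)
print("max pixel change:", (adversarial - frame).abs().max().item())
```

The perturbation stays tiny (bounded by epsilon per pixel), which is what makes such attacks hard to spot; the road stickers in the Tesla demonstration play an analogous role in the physical world.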
Traffic jamming: This isn’t the first adversarial attack on an autonomous driving system. Dawn Song, a professor at UC Berkeley, has used innocuous-looking stickers to trick a self-driving car into reading a stop sign as a 45-mile-per-hour speed limit sign. Another study, published in March, demonstrated how medical machine-learning systems can similarly be tricked into giving the wrong diagnoses.
Bug fixes: The researchers behind the lane-recognition hack, from the Keen Security Lab of Chinese tech giant Tencent, used a similar attack to disrupt the vehicle’s automatic windshield wipers. They also hijacked the car’s steering wheel using another method. A Tesla spokesperson told Forbes that the steering vulnerability has been fixed in the company’s most recent software update. The spokesperson called the lane-recognition attack unrealistic “given that a driver can easily override Autopilot at any time.”