They ‘hacked’ the Autopilot of a Tesla, and this is what happens

by Kelvin
Tesla

Neural networks are used to craft specific patterns that can deceive a software system’s artificial vision, but not that of human beings.

Tencent’s Keen Security Lab, a technology security firm, published a report last Friday detailing several successful cyberattacks against the Autopilot software of Tesla vehicles.

Some of these ‘hacks’ are fairly conventional, such as connecting a gamepad to the car so that it overrides both the Autopilot and the steering wheel.

But others, aimed at getting the vehicle to activate its windshield wipers even when it is not raining, or at convincing it to drive in the wrong lane of the road, fall into a relatively recent category known as ‘adversarial attacks’.

Adversarial attacks are methods that use neural networks to defeat recognition systems. Those systems seek to approximate human perception, but they are vulnerable to adversarial examples (hence the name), which can induce identification errors.
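
To make the idea concrete, below is a minimal sketch of one well-known way to generate adversarial examples, the fast gradient sign method (FGSM). It is illustrative only: the Keen Security Lab report does not say this is the method they used, and `model`, `image`, and `label` are hypothetical placeholders.

```python
# Minimal FGSM sketch (illustrative; not Keen Lab's actual method).
# `model`, `image`, and `label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return `image` plus a tiny perturbation that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss;
    # the change is imperceptible to a human but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # assumes pixels in [0, 1]
```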

How did the Keen Security Lab researchers manage to convince a Tesla to drive in the opposite lane?

The researchers explain that they added ‘noise’ to the lane markings (which remained perfectly visible to humans) in order to deceive the Autopilot and leave it unable to detect the lanes.

The right ‘noise’ design was first computed in a virtual simulation and only then applied in the real world.
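
As a rough illustration of what computing such ‘noise’ in simulation can look like, the sketch below optimizes a perturbation restricted to the lane-marking pixels so that a lane-detection network’s confidence drops. Everything here (`lane_model`, `road_image`, `lane_mask`) is an assumed stand-in, not the setup described in the report.

```python
# Hedged sketch: searching for lane-marking 'noise' in simulation.
# `lane_model`, `road_image`, and `lane_mask` are assumed stand-ins.
import torch

def optimize_lane_noise(lane_model, road_image, lane_mask,
                        steps=200, lr=0.01, budget=0.1):
    noise = torch.zeros_like(road_image, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        # Apply the noise only on the lane markings so it stays subtle.
        perturbed = (road_image + noise * lane_mask).clamp(0, 1)
        confidence = lane_model(perturbed).mean()  # lane-detection score
        opt.zero_grad()
        confidence.backward()  # gradient descent lowers the confidence
        opt.step()
        with torch.no_grad():
            noise.clamp_(-budget, budget)  # keep the pattern barely visible
    return noise.detach()
```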

The next step, however, was to use stickers which, once applied to the road, tricked the Autopilot into steering the vehicle into the opposite lane. Again, the virtual trial was the first step before the stickers were used on a real road.
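
A sticker-style attack can be sketched in the same spirit: fix the positions of a few small patches and optimize their colors so that a (hypothetical) lane model predicts lane geometry drifting toward the attacker’s target. Again, `lane_model`, `patch_boxes`, and `target_lane` are illustrative assumptions, not the procedure from the report.

```python
# Speculative sketch of a sticker/patch attack; all names are assumptions.
import torch

def optimize_patches(lane_model, road_image, patch_boxes, target_lane,
                     steps=300, lr=0.05):
    # One small learnable RGB patch per sticker location (y, x, h, w).
    patches = [torch.rand(3, h, w, requires_grad=True)
               for (_, _, h, w) in patch_boxes]
    opt = torch.optim.Adam(patches, lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(steps):
        img = road_image.clone()
        # Paste each sticker at its fixed position on the road image.
        for p, (y, x, h, w) in zip(patches, patch_boxes):
            img[:, y:y + h, x:x + w] = p.clamp(0, 1)
        # Pull the predicted lane toward the attacker's chosen geometry.
        loss = loss_fn(lane_model(img.unsqueeze(0)), target_lane)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return [p.detach().clamp(0, 1) for p in patches]
```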

“Misleading the Autopilot into error with a few patches placed by a malicious attacker can sometimes be more dangerous than making it fail to recognize the lane altogether. It is enough to paint three small, barely visible marks in the image captured by the camera (…) for the vehicle to treat the left lane as a continuation of the one on the right.”

Although for the moment these attacks do not pose any real danger, the problem is that they demonstrate how fragile artificial intelligence systems can be.

With information from Xataka
