Using hundreds of tiny sensors and cameras, backed by a powerful artificial intelligence computer, self-driving cars can drive themselves on public roads. To do this safely, they have to be very clever.
When the conditions are right, artificial intelligence systems are extremely clever. But when they encounter an unfamiliar situation, they can be surprisingly unintelligent.
Which is how a self-driving car costing tens of thousands of dollars can be ‘broken’ using some sticky tape costing a few cents.
Learning what to expect
Like a child at school, artificial intelligence systems need to be trained. Researchers feed millions of pieces of data into the artificial intelligence engine so it can learn how to drive the car. This includes millions of photographs and videos so that the car will recognise other vehicles, road markings, signs and general hazards it will encounter during a journey.
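To make that training step concrete, here is a minimal sketch in PyTorch. The tiny network, the ten sign classes and the random stand-in images are all illustrative assumptions; a real driving system trains far larger models on millions of labelled road photographs.

```python
import torch
import torch.nn as nn

NUM_SIGN_CLASSES = 10  # hypothetical: speed limits, stop, yield, etc.

# A deliberately tiny image classifier, standing in for the real thing.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, NUM_SIGN_CLASSES),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Stand-in batch: 32 random 32x32 RGB "photos" with random labels.
    # In reality these would be labelled pictures of real road signs.
    images = torch.rand(32, 3, 32, 32)
    labels = torch.randint(0, NUM_SIGN_CLASSES, (32,))

    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # penalise wrong predictions
    loss.backward()                        # compute gradients
    optimizer.step()                       # nudge weights toward the data
```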
The artificial intelligence engine becomes excellent at recognising features that match the data it has been trained on. When the car sees a speed limit sign, it can brake or accelerate accordingly.
But if the car encounters something that it has not been trained for, the artificial intelligence system makes a best-estimate calculation – and this is where problems can arise.
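One way to see why that best estimate is risky: a classifier's output layer has to spread 100% of its "belief" across the classes it knows, so even a completely unfamiliar input gets a confident-looking answer. A minimal sketch, using an untrained stand-in model as an assumption:

```python
import torch
import torch.nn.functional as F

# Stand-in "sign classifier" (an assumption for illustration only).
model = torch.nn.Linear(3 * 32 * 32, 10)

# An input unlike anything in the training data.
unfamiliar = torch.rand(1, 3 * 32 * 32)

probs = F.softmax(model(unfamiliar), dim=1)
print(f"best guess: class {probs.argmax().item()}, "
      f"confidence {probs.max().item():.0%}")
# Softmax forces the probabilities to sum to 1, so the model must commit
# to *some* class; there is no built-in "I don't recognise this" answer.
```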
The difference between human and artificial intelligence
Sometimes the differences that confuse an artificial intelligence can be incredibly subtle. Take the sign below: a 35 mph speed limit sign with a small strip of tape stuck across the middle of the "3".
Obviously there is something wrong with the sign, but a human driver can still recognise what the speed limit should be: 35. Artificial intelligence systems are not quite so clever. The tape confuses the car's cameras; instead of "35", the car reads "85" and accelerates far beyond the legal limit.
Other experiments have produced similar effects. With a few pieces of tape, the meaning of a street sign can be changed completely. When this happens, the self-driving car becomes unpredictable and dangerous.
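The tape attack is physical, but the underlying idea can be demonstrated digitally with the well-known Fast Gradient Sign Method (FGSM): nudge every pixel slightly in whichever direction most increases the model's error. The linear stand-in classifier below is an assumption for illustration, not a real driving system.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)  # stand-in classifier

image = torch.rand(1, 3 * 32 * 32, requires_grad=True)
label = torch.tensor([3])  # pretend class 3 means "35 mph"

# Ask the model how each pixel should change to make it *more* wrong.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Apply a barely visible nudge (epsilon) in that worst-case direction.
epsilon = 0.05
adversarial = image + epsilon * image.grad.sign()

with torch.no_grad():
    print("original prediction:   ", model(image).argmax().item())
    print("adversarial prediction:", model(adversarial).argmax().item())
```

The change to each pixel is capped at epsilon, so the doctored image looks essentially identical to a human, just as the taped sign still reads "35" to a human driver.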
The good news is that data scientists are aware of these problems and are actively working to prevent them. But just like any other aspect of computer security, there is a constant battle as specialists try to identify and patch exploits as quickly as criminals and hackers develop them.
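One defensive idea, which the comments below also touch on, is to avoid trusting any single input: cross-check the camera's reading against a second source, such as map data, before acting on it. The function name and threshold below are hypothetical, sketched purely to illustrate the principle:

```python
def plausible_speed_limit(camera_mph: int, map_mph: int,
                          max_disagreement_mph: int = 20) -> bool:
    """Hypothetical sanity check: reject camera readings that
    disagree wildly with the speed limit recorded in map data."""
    return abs(camera_mph - map_mph) <= max_disagreement_mph

# The doctored sign reads 85, but the map says this road is 35 mph.
if not plausible_speed_limit(camera_mph=85, map_mph=35):
    print("Camera reading rejected; keeping the mapped limit of 35 mph.")
```

The trade-off, of course, is that map data can be stale, as one commenter points out below, so neither source can simply override the other.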
A problem that is not going away any time soon
The reality is that artificial intelligence is incredibly good at focused tasks with predictable inputs. These systems struggle when they encounter something that falls outside their training, even if the change is as tiny as a piece of tape.
As artificial intelligence becomes an even more important part of modern life, we can expect to see similar 'hacks' occurring, and not just on smart cars. Almost any AI-driven system can be fooled or broken, and as the sticky tape example shows, these attacks can be incredibly cheap and effective.
3 comments
The GPS system in my car does *not* read the speed limit sign to know the speed limit. It obtains that info from the GIS database…
A GPS is not the same thing as a "smart" car. The author was referring to the emerging self-driving "smart" cars.
So at temporary roadworks or an incident where there are signs slowing you to say 20, your car ignores this and goes on old, stale data?
Fail!