Automated vehicles have the potential to revolutionise our day-to-day lives, but these kinds of cyber-physical systems are vulnerable to attack by criminals. Horizon spoke with Dr Alexander Kröller, a research manager at Dutch navigation company TomTom, to explore the risks that hacking and viruses pose to self-driving cars.
What systems does a self-driving car require to work?
‘An autopilot taking a car on the road is fairly easy to achieve: all you need is a few sensors. It could just be a camera that sees what’s directly in front of you, follows your lane and tries not to bump into other cars. For the next step, telling the car, “I want to go to my favourite restaurant,” a lot of data needs to be provided. The car needs an up-to-date map to figure out where to go, and that map has to be absolutely accurate. This is provided through online services.
‘On a more local scale is the domain of vehicle-to-vehicle communication and real-time services, where a car gets live and frequent updates on what is happening in front of and behind it, and around the corner. There are sensors to measure where exactly the car is, and cameras and radar or LIDAR (Light Detection and Ranging) sensors to identify other vehicles, obstacles or pedestrians crossing the road. These combine to give the car a consciousness of its immediate surroundings and tell it where it should and should not drive.’
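The combining of sensor readings Kröller describes can be illustrated with a minimal, hypothetical sketch: several sensors report detected objects, and the car only treats the lane ahead as blocked when more than one independent sensor agrees. All names and thresholds here are illustrative assumptions, not TomTom's or any manufacturer's actual design.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single object detection from one sensor (hypothetical format)."""
    sensor: str        # e.g. "camera", "lidar", "radar"
    distance_m: float  # distance ahead of the vehicle, in metres

def path_is_clear(detections: list[Detection],
                  braking_distance_m: float = 30.0,
                  min_agreeing_sensors: int = 2) -> bool:
    """Simple voting-style fusion: the lane ahead counts as blocked only
    when at least `min_agreeing_sensors` independent sensors report an
    obstacle inside the braking distance."""
    close = {d.sensor for d in detections if d.distance_m < braking_distance_m}
    return len(close) < min_agreeing_sensors

# A pedestrian seen by both camera and LIDAR at about 12 m: not clear.
readings = [Detection("camera", 12.0), Detection("lidar", 11.4),
            Detection("radar", 80.0)]
print(path_is_clear(readings))  # False
```

Requiring agreement between sensors is one reason cameras, radar and LIDAR are used together: a single faulty or spoofed sensor is less likely to trigger a wrong decision on its own.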
Is there a cybersecurity risk with automated vehicles?
‘Even with today’s connected vehicles, there are cars taking in information from the outside over wireless connections and then making decisions on behalf of the driver. That raises several issues, and the most important one is security.
‘A very simple example with automated vehicles would be stealing one remotely. The appeal to a hacker can simply be profit, or gaining leverage over the owner or the manufacturer of the vehicle. But this scenario isn’t the most likely risk we should be worried about. Even just disabling a car, or making it drive through the wrong neighbourhood, is something that could be exploited. Hackers could also use the car to obstruct traffic, or create roads that are completely devoid of traffic.’
What about viruses?
‘Every computer system is still just a computer system. If people spend enough money and energy on developing a virus or Trojan horse (a malicious computer program), then it is entirely possible to have one for automated vehicles. The big difference is that the usual ways for a virus to enter a system are not available, because users are not randomly installing applications or looking at internet content on safety-critical systems in the car. The system is much more controlled, but car systems are becoming more interconnected as features grow more advanced.
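One way such a controlled system keeps malware out is by refusing to install anything that is not cryptographically signed by the manufacturer. The sketch below is a simplified assumption of how that gate might look; real vehicles would use asymmetric signatures (e.g. Ed25519) and hardware key storage, not the shared demo key shown here.

```python
import hmac
import hashlib

# Hypothetical manufacturer signing key -- for illustration only.
MANUFACTURER_KEY = b"demo-key-not-a-real-secret"

def sign(package: bytes) -> str:
    """Produce the manufacturer's signature over a software package."""
    return hmac.new(MANUFACTURER_KEY, package, hashlib.sha256).hexdigest()

def install(package: bytes, signature: str) -> bool:
    """Install only when the signature verifies; a worm injected into the
    update channel, lacking a valid signature, would fail this check."""
    if not hmac.compare_digest(sign(package), signature):
        return False  # reject tampered or unsigned code
    # ... flash the verified package to the target unit here ...
    return True

update = b"navigation-maps-v42"
print(install(update, sign(update)))         # True: genuine update
print(install(update + b"x", sign(update)))  # False: tampered payload
```

The point of the gate is that even a single flipped byte in the payload invalidates the signature, so malicious code cannot ride in on a legitimate update channel without the signing key.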
‘Imagine the scenario that someone planted a worm (a computer program that replicates itself and spreads to other computers) in one brand of vehicle; the hacker could then start asking for money, just as with ransomware Trojans, where the victim suddenly finds their hard drive encrypted and is asked to hand over a ransom to fix it. In an automated vehicle case, hackers could extract money from the drivers to release their cars, or from the manufacturer to give back control of their fleet.’
Hackers often just want to remove restrictions imposed by the manufacturer and install third-party applications, a process known as jailbreaking. What’s the risk of someone jailbreaking an automated car?
‘Jailbreaking your car or tampering with the systems and then driving around in it in the hope that the car still functions properly is something you shouldn’t do. However, some people are very willing to take that risk. In that sense, we have to make sure that safety-critical components (elements that ensure safe performance) keep the amount of tampering people can do with a car to a minimum. Somebody willing to risk his own life by tampering with the software system in an automated car is also risking the lives of others on the road.’
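One common way safety-critical components detect the tampering Kröller warns about is a boot-time integrity check: the firmware actually installed is hashed and compared against known-good values fixed at the factory. The component names and firmware contents below are invented for illustration.

```python
import hashlib

# Hypothetical allowlist of known-good hashes for safety-critical
# firmware, stored in tamper-resistant memory at the factory.
KNOWN_GOOD = {
    "brake_controller": hashlib.sha256(b"brake-fw-1.0").hexdigest(),
    "steering_controller": hashlib.sha256(b"steer-fw-1.0").hexdigest(),
}

def verify_at_boot(installed: dict[str, bytes]) -> list[str]:
    """Return the names of components whose firmware no longer matches
    the factory hash -- i.e. evidence of jailbreaking or tampering."""
    return [name for name, blob in installed.items()
            if hashlib.sha256(blob).hexdigest() != KNOWN_GOOD.get(name)]

tampered = {"brake_controller": b"brake-fw-1.0",
            "steering_controller": b"steer-fw-modded"}
print(verify_at_boot(tampered))  # ['steering_controller']
```

A real vehicle would typically refuse to enter automated driving mode, or fall back to a limp-home state, when any safety-critical component fails this check.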
How far along are we in developing appropriate security measures for automated cars?
‘The complexity in cars, both in software and hardware, is constantly growing, meaning there is an increasing number of potential attacks, while the knowledge among hackers is also increasing. It’s a cat-and-mouse game. As far as I can see with organisations in this industry, the level of security is progressing in line with the level of complexity we are putting into automated cars. Everybody is taking great care not to release anything to the public that is not secure, but at the same time the threats and demands are increasing, so we have to prepare for the next steps.’
You are involved in SAFERtec, a recently launched EU project looking to advance security levels for connected vehicle systems. What will you be working on?
‘We will examine and prepare for the next threats that are going to come when connected vehicles become more and more widespread. We are going to develop a connected vehicle system and then, through appropriate modelling, determine the necessary protection profiles (specific security requirements) for identified risks that may impact human safety.
‘Several partners in the project have experience in attack modelling, penetration testing (ways of assessing vulnerabilities in an IT system) as well as auditing software and security systems. When we have all that together we are going to distil the security requirements for the future of connected vehicles and then analyse the existing assurance frameworks. We want to end this project with an efficient way to reach a high level of security in tomorrow’s world of connected vehicles.’
The technology behind automated vehicles is moving incredibly fast, where should our focus be to safely introduce automated vehicles?
‘Public discussion about automated vehicles is dominated by a few questions that are not really central – is it going to be safe, or will these cars kill us all? But that’s not how it works. What we are going to see is gradually more automated vehicles driving next to classic human-driven vehicles, on the roads we have today. The public fears the havoc these automated vehicles will cause on the roads, but if you turn it around, an automated vehicle is going to be a well-behaved, passive and very careful driver. It’s more about how we protect automated vehicles from reckless human drivers than the other way around.’