Curated by UCL

The roadblock between AI and self-driving cars

Achieving autonomy has historically been one of the most important goals of technological development. From the invention of algebra to the conception of the computer, we are constantly searching for ways to make our lives easier and more efficient. One of the largest emerging sectors in this vein is the autonomous vehicle industry, which is projected to be worth $87 billion by 2030 [1]. For that to happen, consumers and businesses alike will have to accept and adopt the new technology, and at the moment the key barrier to acceptance is safety.

The artificial intelligence (AI) currently used in autonomous vehicles is not programmed directly by engineers. Instead, the system teaches itself: the computer watches a human driver and tries to work out which aspects of the environment most influence their decisions. This is an example of ‘deep learning’, and Nvidia are one of the pioneers in this market [2]. The problem with this approach is that the designers keep their algorithms hidden in a “black box” so that others cannot copy them, which makes it incredibly difficult for governments and external organisations to judge how safe the algorithms are. Without transparency and proper cooperation around these emerging technologies, regulators such as the NHTSA (National Highway Traffic Safety Administration) in America could deem autonomous vehicles too dangerous and significantly slow their adoption and development. Indeed, this is already the case in America, where autonomous vehicle testing is not yet sophisticated enough to be considered safe [3].
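To make the idea concrete, the sketch below shows a minimal example of this ‘learning by watching’ approach, sometimes called end-to-end imitation learning: a small neural network maps camera images directly to steering angles and is trained to copy whatever angle the human driver chose. The network architecture, input size and hyperparameters are illustrative assumptions on my part, not Nvidia’s actual system.

    # Minimal sketch of end-to-end imitation learning for steering (illustrative only).
    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        """Maps a front-camera image directly to a single steering angle."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # pool to 1x1 so any input size works
                nn.Flatten(),
            )
            self.head = nn.Linear(48, 1)   # single output: the steering angle

        def forward(self, image):
            return self.head(self.features(image))

    model = SteeringNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def training_step(images, human_angles):
        """One step of copying the human: images (N, 3, H, W), human_angles (N, 1).
        The network is never told *why* the human steered, only *what* they did."""
        optimiser.zero_grad()
        loss = loss_fn(model(images), human_angles)
        loss.backward()
        optimiser.step()
        return loss.item()

    # Example: one training step on a dummy batch of 8 front-camera frames.
    frames = torch.randn(8, 3, 66, 200)   # camera images (illustrative size)
    angles = torch.randn(8, 1)            # the angles the human driver chose
    print(training_step(frames, angles))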

Currently, the main methodology for testing these vehicles is to drive them through preset courses until they make a mistake. To diagnose the fault, the exact conditions that caused it must be recreated, which can take hundreds of attempts [4]. To correct the algorithm, a human driver must then demonstrate the right response to the computer. Because of this, covering every situation imaginable would take an unrealistic amount of time. To put this in perspective, Google’s fleet of 55 vehicles covered only 1.3 million miles between 2009 and 2015 [5], far from enough to build a comprehensive record of safe behaviour for autonomous vehicles.
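A rough calculation shows just how short 1.3 million miles falls. The sketch below rests on two assumptions of mine, not figures from the article: a human fatality rate of roughly one per 100 million miles, and the statistical ‘rule of three’, which says that after n trials with zero observed events the 95% upper confidence bound on the event rate is about 3/n. It estimates how many failure-free miles would be needed before an autonomous fleet could credibly be called at least as safe as human drivers, which is the essence of the RAND argument cited above.

    # Rough estimate of the test mileage needed to bound a failure rate statistically.
    HUMAN_FATALITY_RATE = 1 / 100_000_000   # fatal crashes per mile (rough assumption)

    # Rule of three: with zero failures in n miles, the 95% upper bound on the
    # failure rate is about 3 / n. Requiring that bound to sit below the human
    # rate gives the failure-free mileage needed.
    miles_needed = 3 / HUMAN_FATALITY_RATE

    print(f"Failure-free miles required: {miles_needed:,.0f}")   # 300,000,000
    print(f"Google fleet, 2009-2015:     {1_300_000:,} miles")   # for comparison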

To make matters worse, the engineers of these systems do not always know precisely which stimuli their algorithms are responding to. When a hazard comes into view of the cameras – a stationary animal, for example – the human driver takes an action to avoid it. From this action the computer may register only the animal’s specific colour, and may learn to avoid objects of that colour. The problem would only become apparent when the vehicle encounters an animal of a different colour. To understand what exactly the computer is learning, systems engineers have developed analytical tools that show how the algorithms decide which action to take [10].
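One such tool, shown as a minimal sketch below, is a gradient-based saliency map: it highlights which pixels of the input image most strongly affect the predicted steering angle, so engineers can check whether the network is reacting to the hazard itself or to an irrelevant cue such as its colour. It reuses the illustrative SteeringNet from the earlier sketch, and it is my own simplified example rather than the specific visualisation method Nvidia published.

    import torch

    def saliency_map(model, image):
        """Per-pixel influence of `image` (shape N x 3 x H x W) on the steering output."""
        image = image.clone().requires_grad_(True)   # track gradients w.r.t. the pixels
        model(image).sum().backward()                # scalar output so backprop is defined
        # Large absolute gradients mark the pixels the decision is most sensitive to.
        return image.grad.abs().max(dim=1).values    # collapse the colour channels

    # Example, reusing the illustrative SteeringNet from the earlier sketch:
    #   heatmap = saliency_map(model, torch.randn(1, 3, 66, 200))
    # Bright regions of the 66 x 200 heatmap show what the network is 'looking at'.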

Because self-driving technology is still in its infancy, there is currently no ideal way to ensure safety. With the aim of guaranteeing safety, virtual testing is now being used to widen the range of scenarios that autonomous vehicles encounter. The development of such AI safety tests could create a whole new sector, perhaps one that even rivals the creation of the AI itself [6]. I believe a better solution would be to combine open-source software, such as Udacity’s self-driving car project, with these tests. This brings several benefits. Firstly, it makes the algorithms completely transparent, so their safety can be analysed in detail by anyone with a concern. Secondly, it encourages rapid progress in the software and creates a level playing field for competition. This stops one company dominating the market: everyone would have access to a competent self-driving AI, so companies would have to innovate to separate themselves from the crowd [7].
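To give a flavour of what such virtual testing can look like, the sketch below randomises the parameters of a simulated situation and records every configuration in which the driving policy fails, so that rare edge cases can be found and replayed without real-world mileage. The simulator, the policy and the scenario parameters are all illustrative assumptions of mine; a real test bench would use a full vehicle-dynamics simulation.

    import random

    def run_simulation(policy, scenario):
        """Stand-in for a physics simulator: returns True when the crash is avoided.
        The toy rule below exists only so the example runs; `policy` is unused here,
        but in a real simulator it would drive the virtual vehicle."""
        braking_margin = scenario["obstacle_distance_m"] - 0.3 * scenario["vehicle_speed_kmh"]
        return braking_margin > 0

    def fuzz_scenarios(policy, trials=10_000):
        """Randomise scenario parameters and collect every configuration that fails."""
        failures = []
        for _ in range(trials):
            scenario = {
                "obstacle_distance_m": random.uniform(5, 100),
                "vehicle_speed_kmh": random.uniform(20, 120),
                "visibility": random.choice(["clear", "rain", "fog", "night"]),
            }
            if not run_simulation(policy, scenario):
                failures.append(scenario)   # keep the exact conditions for later replay
        return failures

    edge_cases = fuzz_scenarios(policy=None)
    print(f"{len(edge_cases)} failing scenarios found out of 10,000 simulated runs")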

On the other hand, there is much debate over whether open-source software is more secure than proprietary software. With open-source software everything is laid bare – all security flaws and bugs are public – which some argue is insecure, because anyone with malicious intent can exploit that information. However, it also encourages extremely rapid fixes to the code. Proprietary software, in contrast, can carry security holes for years before anyone notices. Personally, I feel that an open approach forces innovation and progress at a faster rate than proprietary systems, which can breed complacency and negligence [8]. What is clear, though, is that security cannot be an afterthought: semi-autonomous vehicles have already been hacked remotely, posing yet another barrier to the rise of the self-driving car [9].

Whichever solution is chosen will greatly affect how quickly autonomous vehicles become part of our everyday lives. It is not a question of if, but when. I personally believe that for the most seamless and rapid integration, open-source software is the way forward: not necessarily because it is the most secure or the most technically capable, but because it is transparent. Society fears change and the unknown. If autonomous vehicles are to succeed, we must keep their development open and understandable to all.

  1. Lux Research, “Self-driving Cars an $87 Billion Opportunity in 2030, Though None Reach Full Autonomy,” Lux Research, Boston, 2014.
  2. M. Bojarski, B. Firner, B. Flepp, L. Jackel, U. Muller and K. Zieba, “End-to-End Deep Learning for Self-Driving Cars,” 17 August 2016. [Online]. Available: https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/.
  3. NHTSA, “Preliminary Statement of Policy,” NHTSA, Washington, D.C., 2016.
  4. D. Sabin, “The Paradox of Safety Testing Autonomous Cars,” 1 September 2017. [Online]. Available: https://www.inverse.com/article/35697-self-driving-car-tests-safe.
  5. N. Kalra, “Why It’s Nearly Impossible to Prove Self-Driving Cars’ Safety Without a New Approach,” 15 January 2016. [Online]. Available: https://www.rand.org/blog/2016/05/why-its-nearly-impossible-to-prove-self-driving-cars.html.
  6. B. Kim, Y. Kashiba and S. Dai, “Testing Autonomous Vehicle Software in the Virtual Prototyping Environment,” IEEE Embedded Systems Letters, pp. 5–8, 2017.
  7. O. Cameron, “We’re Building an Open Source Self-Driving Car,” 29 September 2016. [Online]. Available: https://medium.com/udacity/were-building-an-open-source-self-driving-car-ac3e973cd163.
  8. R. Clarke, D. Dorwin and R. Nash, “Is Open Source Software More Secure?,” University of Washington, Washington, 2009.
  9. Keen Security Lab of Tencent, “Car Hacking Research: Remote Attack Tesla Motors,” 19 September 2016. [Online]. Available: http://keenlab.tencent.com/en/2016/09/19/Keen-Security-Lab-of-Tencent-Car-Hacking-Research-Remote-Attack-to-Tesla-Cars/.
  10. U. Muller, K. Choromanski, B. Firner, L. Jackel, A. Choromanska, P. Yeres and M. Bojarski, “Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car,” Nvidia, ArXiv Computer Vision and Pattern Recognition, 2017.

About the Author

UCL

I am currently in my first year of studying Mechanical Engineering at UCL. I enjoy designing and building projects, and I have plenty of experience with computing as well.
