Original Link: https://www.anandtech.com/show/11915/gtc-europe-live-blog-how-to-get-regulatory-approval-for-an-aibased-autonomous-car
GTC Europe Live Blog: How to Get Regulatory Approval for an AI-based Autonomous Car
by Ian Cutress on October 10, 2017 10:31 AM EST - Posted in
- Trade Shows
- NVIDIA
- Live Blog
- Automotive
- Autonomous
- GTC
10:33AM EDT - Most of the autonomous driving problems will be solved by AI
10:34AM EDT - Homologation in the car industry involves several parties: car manufacturers, suppliers, engineering services, and laws/regulations
10:35AM EDT - The Vienna convention still states that the driver has to be in control of the car at all times
10:35AM EDT - The ISO 26262 standard is the current baseline for safety in the automotive industry for guaranteed safety functions
10:35AM EDT - All the parties need to come together to develop the right safety measures to benefit all road users
10:36AM EDT - Safety is usually measured in numbers of crashes
10:36AM EDT - From small touches to metal bending to full-on collisions with injuries and fatalities
10:36AM EDT - Correlating with common human errors that cause collisions - the aim is to remove the human element and increase safety
10:37AM EDT - Determining who is at fault: when it is physically impossible to avoid a collision
10:37AM EDT - The goal of the autonomous car should be to avoid 'at fault' accidents
10:37AM EDT - E.g. avoid someone else jumping a red light
10:38AM EDT - Also focusing on driving performance: smoothness, comfort, speed, bandwidth required, if it needs a remote control center to intervene, if it can access certain areas
10:39AM EDT - The faster you go, the shorter available time to react, even for autonomous vehicles
10:39AM EDT - It's easier to make an autonomous car safer by driving slower, but everyone wants to get from A to B as quickly as possible
10:39AM EDT - Also balancing with fuel efficiency
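To put rough numbers on the reaction-time point above, here is a minimal sketch of how stopping distance grows with speed - the reaction latency and deceleration values are assumed for illustration, not figures from the talk:

```python
# Rough sketch: stopping distance vs speed. The 1.0 s reaction latency and
# 7 m/s^2 braking deceleration are assumed illustrative values, not figures
# from the talk.

REACTION_TIME_S = 1.0    # assumed sense/plan/act latency
DECEL_MS2 = 7.0          # assumed braking deceleration on dry asphalt

def stopping_distance_m(speed_kmh: float) -> float:
    """Distance covered during the reaction time plus the braking distance."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    return v * REACTION_TIME_S + v ** 2 / (2 * DECEL_MS2)

for kmh in (30, 50, 100, 130):
    print(f"{kmh:>3} km/h -> {stopping_distance_m(kmh):6.1f} m to stop")
```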
10:40AM EDT - Safety is number one, but performance is still needed to get a product
10:40AM EDT - It's worth talking about the acceptable goals now before the technology is ready
10:41AM EDT - 1) Rational AI systems, 2) Road Safety, 3) Normally the trip to the airport is more dangerous than the flight itself - time to change that
10:41AM EDT - 4) Battling Man vs Machines, 5) Assume a 'perfect enemy' for safety, 6) Investment
10:43AM EDT - The upside to all of this is the enthusiasm - OEMs and startups want to achieve the self-driving car goal
10:43AM EDT - Some misconceptions on autonomous driving
10:44AM EDT - 1) The moral dilemma - run over the granny or the child. A) If that's the only problem we have left, then autonomous driving is almost perfect as this problem doesn't happen very often
10:45AM EDT - 2) Needing orders of magnitude more testing than is currently done - A) Measure the accident rate per distance: autonomous driving shows several orders of magnitude fewer accidents per mile/km than regular drivers.
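As a quick illustration of measuring safety as an accident rate per distance rather than an absolute count, here is a small sketch - the counts and mileages are placeholders, not data from the talk:

```python
# Sketch: comparing accident rates per distance driven.
# The counts and mileages below are made-up placeholders, not data from the talk.

def accidents_per_million_km(accidents: int, km_driven: float) -> float:
    return accidents / (km_driven / 1e6)

human_rate = accidents_per_million_km(accidents=4, km_driven=2_000_000)
av_rate    = accidents_per_million_km(accidents=1, km_driven=10_000_000)

print(f"human fleet: {human_rate:.2f} accidents per million km")
print(f"AV fleet:    {av_rate:.2f} accidents per million km")
print(f"ratio:       {human_rate / av_rate:.1f}x")
```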
10:46AM EDT - 3) Human intuition is bad about how often rare events occur - A) Use a more data-driven approach
10:47AM EDT - 4) The rules of the road are for humans, not AI, such as following distance, e.g. a 3 second gap behind the car ahead will be quite large and cars will merge into the path of autonomous cars - A) Research
10:48AM EDT - 5) The best way to solve the problem is not always to do it like a human - A) Train AI with more real-world data
10:48AM EDT - 6) Auditing the software for quality - A) industry standards and regulatory bodies
10:48AM EDT - Now on Redundancy
10:49AM EDT - Redundancy is not a goal in itself; the goal is a failure rate target, measured in failures per billion hours
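A rough sketch of what a failures-per-billion-hours (FIT) target means and how a redundant pair changes it, assuming independent failures (which real systems do not guarantee) and made-up per-channel rates:

```python
# Sketch: what a 'failures per billion hours' target means, and how
# redundancy helps under an independence assumption. The 1000 FIT per
# channel and the 1-hour common-exposure window are made-up numbers.

FIT = 1e-9                      # 1 FIT = 1 failure per 10^9 device-hours

channel_fit = 1000              # assumed failure rate of one channel
lam = channel_fit * FIT         # failures per hour for one channel

# Probability that both independent channels fail within the same
# one-hour exposure window (small-rate approximation).
exposure_h = 1.0
p_both = (lam * exposure_h) ** 2

pair_fit = (p_both / exposure_h) / FIT   # back to failures per 10^9 hours

print(f"single channel : {channel_fit} FIT")
print(f"redundant pair : {pair_fit:.3g} FIT (independence assumed)")
```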
10:49AM EDT - Balancing precision vs recall
10:50AM EDT - Trading false positives/false negatives
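A minimal sketch of the precision/recall and false positive/false negative trade-off just mentioned, using made-up detector scores and two thresholds:

```python
# Sketch: how moving a detection threshold trades false positives
# against false negatives. Scores and labels are made up for illustration.

scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0   ]  # 1 = real obstacle

def precision_recall(threshold: float):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

for t in (0.5, 0.85):
    p, r = precision_recall(t)
    print(f"threshold {t:.2f}: precision {p:.2f}, recall {r:.2f}")
```

Raising the threshold cuts false positives (fewer phantom braking events) at the cost of recall (more missed obstacles), which is exactly the balance being described.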
10:50AM EDT - Monitoring is a potential limitation, e.g. obscured cameras - can the system detect and manage that failure?
10:51AM EDT - The hard part is detection: it is well understood for hardware failures (e.g. ECC), but not for algorithm failures.
10:51AM EDT - The system needs a fallback, but when and how should it transition, and how do you know the fallback is better?
10:52AM EDT - Sometimes backup systems are less tested than primary systems
10:52AM EDT - Driving is all about defensive safety
10:52AM EDT - A secondary system's tracking of risk and uncertainty might not be up to scratch if not implemented properly
10:53AM EDT - New systems should be data driven and validated through big data sets
10:53AM EDT - Limit expert assessment, use real-world data for simulations and variations therein
10:53AM EDT - Testing the AI with subclass tests
10:54AM EDT - Or validation through fundamental neural network analysis - are the neurons weighted correctly and how do we test
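One way to read the 'subclass tests' idea above is to require minimum performance on each scenario subclass rather than one aggregate score - a hypothetical sketch, not NVIDIA's actual validation pipeline:

```python
# Sketch: validating a perception stack per scenario subclass instead of
# on one aggregate number. Subclasses, scores, and thresholds are hypothetical.

subclass_recall = {
    "daytime_pedestrian": 0.999,
    "night_pedestrian":   0.991,
    "heavy_rain":         0.987,
    "tunnel_exit_glare":  0.995,
}

required_recall = {
    "daytime_pedestrian": 0.998,
    "night_pedestrian":   0.995,
    "heavy_rain":         0.990,
    "tunnel_exit_glare":  0.990,
}

failures = [name for name, score in subclass_recall.items()
            if score < required_recall[name]]

if failures:
    print("validation FAILED for subclasses:", ", ".join(failures))
else:
    print("all subclass thresholds met")
```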
10:55AM EDT - All this means common goals and common targets - not just for industry, but it also relies on data, lawmakers, and solving societal worries
10:55AM EDT - Now for Q&A
10:57AM EDT - Q: With level 4 and level 5, the problem is urban driving. How would you test in places like Munich where autonomous driving is not allowed legally in urban areas? A) When testing a level 4 or level 5 system, you have a driver in the car, so it becomes essentially a level 2 system because the driver can override and stay alert. So you can train level 4 and level 5 in a 'level 2' environment and there is no additional risk that falls foul of legal issues
10:58AM EDT - Q: ISO 26262. Drivers are not ASIL-D compliant, but these systems will need it. So far AI is not certifiable, so how will this start to happen? Adjustment in ISO standards? How can you be sure that the process in training the network will be aligned with the standard?
11:00AM EDT - A: The ISO 26262 standard will evolve to support other validation methods. Right now it is moving forward, but as a community we can develop methods around brute-force validation and beyond, for simulation and re-simulation in different environments. It has to be standardized at a high level (you run a set of data, almost like a driving test, for each AI)