GTC Europe Live Blog: How to Get Regulatory Approval for an AI-based Autonomous Car
by Ian Cutress on October 10, 2017 10:31 AM EST - Posted in
- GTC
- Trade Shows
- NVIDIA
- Live Blog
- Automotive
- Autonomous
10:33AM EDT - Most of the autonomous driving problems will be solved by AI
10:34AM EDT - Homologation is required in the car industry: car manufacturers, suppliers, engineering services, laws/regulations
10:35AM EDT - The Vienna convention still states that the driver has to be in control of the car at all times
10:35AM EDT - ISO 26262 is the current standard governing safety in the automotive industry for guaranteed safety functions
10:35AM EDT - All the parties need to come together to develop the right safety approach to benefit all road users
10:36AM EDT - Safety is usually measured in numbers of crashes
10:36AM EDT - From small touches to metal bending to full-on collisions with injuries and fatalities
10:36AM EDT - Correlation to common human issues that create collisions - trying to remove the human element and increase safety
10:37AM EDT - Determining who is at fault: when it is physically impossible to avoid a collision
10:37AM EDT - The goal of the autonomous car should be to avoid 'at fault' accidents
10:37AM EDT - E.g. avoid someone else jumping a red light
10:38AM EDT - Also focusing on driving performance: smoothness, comfort, speed, bandwidth required, if it needs a remote control center to intervene, if it can access certain areas
10:39AM EDT - The faster you go, the shorter available time to react, even for autonomous vehicles
10:39AM EDT - It's easier to make an autonomous car safer by driving slower, but everyone wants to get from A to B as quickly as possible
10:39AM EDT - Also balancing with fuel efficiency
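As a rough back-of-envelope on the speed point above (illustrative reaction time and deceleration values, not figures from the talk): total stopping distance is reaction distance plus braking distance, and the braking term grows with the square of speed.

```python
# Back-of-envelope stopping distance; illustrative reaction time and
# deceleration values, not figures from the presentation.
def stopping_distance_m(speed_kph, reaction_s=1.5, decel_mps2=7.0):
    """Reaction distance + braking distance for a given speed."""
    v = speed_kph / 3.6                  # km/h -> m/s
    reaction = v * reaction_s            # distance covered before braking starts
    braking = v ** 2 / (2 * decel_mps2)  # grows quadratically with speed
    return reaction + braking

for kph in (30, 50, 130):
    print(f"{kph} km/h -> ~{stopping_distance_m(kph):.0f} m to stop")
# 30 km/h -> ~17 m, 50 km/h -> ~35 m, 130 km/h -> ~147 m
```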
10:40AM EDT - Safety is number one, but performance is still needed to get a product
10:40AM EDT - It's worth talking about the acceptable goals now before the technology is ready
10:41AM EDT - 1) Rational AI systems, 2) Road Safety, 3) Normally the trip to the airport is more dangerous than the flight itself - time to change that
10:41AM EDT - 4) Battling Man vs Machines, 5) Assume a 'perfect enemy' for safety, 6) Investment
10:43AM EDT - The upside to all of this is the enthusiasm - OEMs and startups want to achieve the self-driving car goal
10:43AM EDT - Some misconceptions on autonomous driving
10:44AM EDT - 1) The moral dilemma - run over the granny or the child. A) If that's the only problem we have left, then autonomous driving is almost perfect as this problem doesn't happen very often
10:45AM EDT - 2) Needing orders of magnitude more testing than currently done - A) Measure the accident rate per distance. Autonomous driving has several orders of magnitude fewer accidents per mile/per km than regular drivers.
10:46AM EDT - 3) Human intuition is bad about how often rare events occur - A) Use a more data-driven approach
10:47AM EDT - 4) The rules of the road are for humans, not AI, such as following distance, e.g. a 3 second gap behind the car ahead will be quite large and other cars will merge into the path of autonomous cars - A) Research (a quick gap calculation is sketched after this list)
10:48AM EDT - 5) The best way to solve the problem is not always to do it like a human - A) Train AI with more real-world data
10:48AM EDT - 6) Auditing the software for quality - A) industry standards and regulatory bodies
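The answers to points 2 and 4 above lean on simple arithmetic; here is a minimal sketch with purely hypothetical numbers (none of these figures come from the talk):

```python
# Purely hypothetical numbers to illustrate the arithmetic behind points 2
# and 4 above; none of these figures come from the talk.
def accidents_per_million_km(accidents, km_driven):
    """Normalise safety by distance driven rather than by raw accident count."""
    return accidents / km_driven * 1e6

human_fleet = accidents_per_million_km(accidents=4_500, km_driven=1_000_000_000)
av_fleet = accidents_per_million_km(accidents=12, km_driven=10_000_000)
print(f"human: {human_fleet:.2f} vs AV: {av_fleet:.2f} accidents per million km")

def gap_distance_m(speed_kph, gap_s=3.0):
    """Distance a fixed time gap corresponds to at a given speed."""
    return speed_kph / 3.6 * gap_s

print(f"3 s gap at 120 km/h is ~{gap_distance_m(120):.0f} m")  # ~100 m
```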
10:48AM EDT - Now on Redundancy
10:49AM EDT - Redundancy is not a goal, it's a means. The goal is a low failure rate, measured in failures per billion hours (FIT)
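To put the failures-per-billion-hours framing in concrete terms, a minimal sketch (the FIT values are invented, and it assumes fully independent channels, which common-cause failures would break in practice):

```python
# Illustrative FIT arithmetic; the rates are made up, and the independence
# assumption ignores common-cause failures that dominate in practice.
FIT = 1e-9  # 1 FIT = 1 failure per 10^9 operating hours

def prob_failure(fit_rate, hours):
    """Approximate failure probability over a short mission time."""
    return fit_rate * FIT * hours

single_channel = prob_failure(fit_rate=100, hours=1.0)  # one 1-hour trip
redundant_pair = single_channel ** 2                    # both independent channels fail
print(f"single channel: {single_channel:.0e}, redundant pair: {redundant_pair:.0e}")
# single channel: 1e-07, redundant pair: 1e-14
```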
10:49AM EDT - Balancing precision vs recall
10:50AM EDT - Trading false positives/false negatives
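For an obstacle detector, the precision/recall trade-off amounts to trading phantom alarms against missed objects; a minimal sketch with toy counts (not from any real system): raising the detection threshold cuts false positives but adds false negatives.

```python
# Toy confusion counts for an obstacle detector at two thresholds; the numbers
# are invented purely to show the precision/recall trade-off.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)  # of everything flagged, how much was real
    recall = tp / (tp + fn)     # of everything real, how much was flagged
    return precision, recall

low_threshold = precision_recall(tp=95, fp=40, fn=5)   # sensitive: few misses, many false alarms
high_threshold = precision_recall(tp=80, fp=5, fn=20)  # strict: few false alarms, more misses
print("low threshold:  precision=%.2f recall=%.2f" % low_threshold)
print("high threshold: precision=%.2f recall=%.2f" % high_threshold)
```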
10:50AM EDT - Monitoring is a potential limitation, e.g. obscured cameras, can you detect and manage it
10:51AM EDT - The hard part is detection: it makes sense for hardware failures (e.g. ECC), but not for algorithm failures.
10:51AM EDT - The system needs a fallback, but when and how should it transition, and how do you know the fallback is better?
10:52AM EDT - Sometimes backup systems are less tested than primary systems
10:52AM EDT - Driving is all about defensive safety
10:52AM EDT - Tracking of risk and uncertainty in a secondary system might not be up to scratch if it is not done properly
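One way to picture the "when and how to transition" question from a few entries above is a supervisor that switches to the fallback only on clear evidence that the primary is degraded; this is just an illustrative sketch, not any vendor's actual design:

```python
# Illustrative supervisor logic for switching from a primary planner to a
# fallback (e.g. a minimal-risk stop); not any vendor's actual design.
from dataclasses import dataclass

@dataclass
class HealthReport:
    sensors_ok: bool   # e.g. no obscured cameras detected
    confidence: float  # planner's self-estimated confidence, 0..1

class Supervisor:
    def __init__(self, min_confidence=0.8):
        self.min_confidence = min_confidence
        self.mode = "PRIMARY"

    def step(self, report: HealthReport) -> str:
        # Transition only on clear evidence that the primary is degraded; the
        # fallback itself must be validated at least as rigorously.
        if not report.sensors_ok or report.confidence < self.min_confidence:
            self.mode = "FALLBACK"
        return self.mode

sup = Supervisor()
print(sup.step(HealthReport(sensors_ok=True, confidence=0.95)))   # PRIMARY
print(sup.step(HealthReport(sensors_ok=False, confidence=0.95)))  # FALLBACK
```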
10:53AM EDT - New systems should be data driven and validated through big data sets
10:53AM EDT - Limit expert assessment, use real-world data for simulations and variations therein
10:53AM EDT - Testing the AI with subclass tests
10:54AM EDT - Or validation through fundamental neural network analysis - are the neurons weighted correctly, and how do we test that?
10:55AM EDT - All this means common goals, common targets. Not just industry but relies on data, lawmakers, and solving societal worries
10:55AM EDT - Now for Q&A
10:57AM EDT - Q: With level 4 and level 5, the problem is urban driving. How would you test in places like Munich where autonomous driving is not legally allowed in urban areas? A) When testing a level 4 or level 5, you have a driver in the car, so it becomes essentially a level 2 because the driver can override and stay alert. So you can train level 4 and level 5 in a 'level 2' environment and there is no additional risk that falls foul of legal issues
10:58AM EDT - Q: ISO 26262. Drivers are not ASIL-D compliant, but these systems will need it. So far AI is not certifiable, so how will this start to happen? Adjustment in ISO standards? How can you be sure that the process in training the network will be aligned with the standard?
11:00AM EDT - A: The ISO 26262 standard will evolve to support other validation methods. Work on this is moving forward now, and as a community we can develop methods around brute-force validation and beyond, for simulation and resimulation in different environments. It has to be standardized at a high level (you run a set of data, almost like a driving test, for each AI)
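The "driving test for each AI" idea in that answer could look roughly like a fixed regression suite of recorded scenarios that every software release must pass; a minimal sketch, with scenario names and pass criteria invented for illustration:

```python
# Sketch of a "driving test" style validation harness: replay a fixed set of
# scenarios and require zero at-fault collisions. Scenario names and criteria
# are invented for illustration; no real standard is being quoted here.
from typing import Callable, Dict

def run_driving_test(planner: Callable[[str], dict], scenarios: Dict[str, dict]) -> bool:
    for name, criteria in scenarios.items():
        result = planner(name)  # replay one recorded scenario against the software under test
        if result["at_fault_collisions"] > criteria["max_at_fault_collisions"]:
            print(f"FAIL: {name}")
            return False
        print(f"pass: {name}")
    return True

scenarios = {
    "red_light_runner":  {"max_at_fault_collisions": 0},
    "cut_in_on_highway": {"max_at_fault_collisions": 0},
}

def dummy_planner(scenario_name: str) -> dict:
    return {"at_fault_collisions": 0}  # stand-in for a real resimulation run

print("certified" if run_driving_test(dummy_planner, scenarios) else "not certified")
```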
9 Comments
vortmax2 - Tuesday, October 10, 2017
Interesting stuff. Seems Lvl 4 and 5 is a long way off. I'm really liking tech like Subaru Eyesight and equiv...every car should be equipped with that type of system.

Yojimbo - Wednesday, October 11, 2017
To bootstrap level 4 and 5 autonomous driving will probably require training in simulated (virtual) environments. I would think that then the system could be demonstrated under level 3 circumstances in the real world. Each time a human has to take over it would be considered a failure. If the system proves itself under extensive testing of this sort, then I assume regulators would be willing to allow that system to operate as a level 4 or level 5 vehicle. So it's definitely several years off, but I don't think it has to be a long way off, depending on what you meant by "a long way".

vortmax2 - Wednesday, October 11, 2017
Thinking 7-10 yrs. Hoping sooner as this tech could save many lives - with that said, there should be some studies around the social and legal implications of fully autonomous driving. I'd bet there will be big ramifications in both those spaces that aren't comprehended (e.g., smartphone tech and its impact on our younger generations). Today, I'd like to see a new, mandatory safety standard for all cars sold in the US to include both the lane and brake assist tech. These alone would save many lives and the investment costs would easily be covered by fewer insurance claims. Have the insurance companies subsidize some of the cost. Just a thought.

Yojimbo - Wednesday, October 11, 2017
I think 7 to 10 years sounds like a fair estimate. Probably no less than 5.

Yojimbo - Wednesday, October 11, 2017
Just a small comment. I think when the presenter wrote "Redundancy is not a goal, it's a mean" he meant to instead write "Redundancy is not a goal, it's a means", as in, it's a way to achieve a goal. The goal is to have a low failure in time (FIT) rate and redundancy is a means to achieve that goal.

vortmax2 - Wednesday, October 11, 2017
I'm surprised there aren't any more comments on this...tech that will impact more lives (no pun intended) than any SSD, DDR, CPU, or GPU...lol

zodiacfml - Sunday, October 15, 2017
I'm beginning to feel that max speed will be restricted for autonomous cars to less than around 100 kph. The argument would be to minimize damage if it fails. The slower speed would be easily favored as it is more economical, has less damage potential, and few people would care as they will be able to do other things.

acochrane - Monday, November 6, 2017
The mean time between failures will be way lower than it is for Windows OS. My opinion is that drivers who don't join the flow of traffic cause more uncertainty for everyone else on the road. Likewise, limiting the speed of an autonomous car below the average rate of traffic would make it a headache for the rest of traffic and result in more complicated driving patterns, increasing the chance for an accident to occur. I think eventually the safest and most efficient traffic will occur when all the cars are autonomous.