Safety Force Field
Rationale-aware Autonomous Driving Policy utilizing Safety Force Field implemented on CARLA Simulator
Ho Suk, Taewoo Kim, Hyungbin Park, Pamul Yadav, Junyong Lee, Shiho Kim
Despite the rapid improvement of autonomous driving technology in recent years, automotive manufacturers must resolve liability issues before they can commercialize autonomous passenger cars of SAE J3016 Level 3 or higher. To cope with product liability law, manufacturers develop autonomous driving systems in compliance with international safety standards such as ISO 26262 and ISO 21448. Concerning the safety of the intended functionality (SOTIF) requirement in ISO 21448, the driving policy is recommended to provide an explicit rational basis for its maneuver decisions. In this case, mathematical models such as Safety Force Field (SFF) and Responsibility-Sensitive Safety (RSS), whose decisions are interpretable, may be suitable. In this work, we implement SFF from scratch as a substitute for NVIDIA's undisclosed source code and integrate it with the open-source CARLA simulator. Using SFF and CARLA, we present a predictor for the claimed sets of vehicles and, based on the predictor, propose an integrated driving policy that operates consistently regardless of the safety conditions it encounters while passing through dynamic traffic. The policy does not maintain a separate plan for each condition; instead, using the safety potential, it aims for human-like driving that blends into the traffic flow.
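The "claimed set" idea the abstract relies on can be sketched in a few lines: each vehicle claims the stretch of road it would sweep while executing its safety procedure (hold speed during a reaction time, then brake maximally), and a safety-potential violation occurs when two claimed sets overlap. This is a minimal one-dimensional sketch under those assumptions; the class and function names, parameters, and the point-interval model are all illustrative, not the paper's or NVIDIA's actual interface.

```python
# Hedged 1-D sketch of a claimed-set predictor in the spirit of
# Safety Force Field. All names and parameters are illustrative
# assumptions, not the paper's implementation.

from dataclasses import dataclass


@dataclass
class Vehicle:
    position: float    # front-bumper position along the lane (m)
    velocity: float    # longitudinal speed (m/s), >= 0
    length: float      # vehicle length (m)
    max_brake: float   # maximal braking deceleration (m/s^2), > 0
    react_time: float  # delay before braking begins (s)


def claimed_set(v: Vehicle) -> tuple[float, float]:
    """Interval of lane positions swept while executing the safety
    procedure: hold speed for react_time, then brake at max_brake."""
    stop_dist = v.velocity * v.react_time + v.velocity ** 2 / (2.0 * v.max_brake)
    return (v.position - v.length, v.position + stop_dist)


def claimed_sets_intersect(a: Vehicle, b: Vehicle) -> bool:
    """True when the two claimed sets overlap, i.e. the safety
    potential is violated and evasive action must begin."""
    a_lo, a_hi = claimed_set(a)
    b_lo, b_hi = claimed_set(b)
    return a_hi > b_lo and b_hi > a_lo
```

For example, a follower at 20 m/s only 30 m behind a slow leader claims an interval that reaches past the leader's rear bound, so the sets intersect; with an 80 m gap they do not.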
DRIVE Labs: Eliminating Collisions with Safety Force Field
Editor's note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Safety Force Field (SFF) vehicle software is designed specifically for collision avoidance. It acts as an independent supervisor on the actions of the vehicle's primary planning and control system, which could be either human-driven or autonomous. Specifically, SFF performs real-time double-checks of the controls that were chosen by the primary system. If SFF deems the controls to be unsafe, it will veto and correct the primary system's decision.
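The supervise, veto, and correct loop described above can be sketched as follows. This is a toy illustration only: the control representation, the gap-based safety test, and every name here are assumptions for exposition, not NVIDIA's DRIVE interface.

```python
# Hedged sketch of an SFF-style independent supervisor. The names,
# control representation, and safety test are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Control:
    throttle: float  # in [0, 1]
    brake: float     # in [0, 1]


def is_unsafe(control: Control, gap_m: float, speed_mps: float,
              max_brake: float = 8.0) -> bool:
    """Toy safety double-check: the chosen control is unsafe if the
    vehicle keeps accelerating while its stopping distance under
    maximal braking already exceeds the gap to the obstacle ahead."""
    accelerating = control.throttle > 0.0 and control.brake == 0.0
    stopping_dist = speed_mps ** 2 / (2.0 * max_brake)
    return accelerating and stopping_dist >= gap_m


def supervise(primary: Control, gap_m: float, speed_mps: float) -> Control:
    """Double-check the primary system's control; veto and correct it
    with the safety procedure (maximal braking) if deemed unsafe."""
    if is_unsafe(primary, gap_m, speed_mps):
        return Control(throttle=0.0, brake=1.0)  # veto: engage safety procedure
    return primary  # safe: pass the primary decision through unchanged
```

Note that the supervisor never plans a trajectory of its own; it only passes through or overrides the primary system's decision, which is what makes it an independent double-check rather than a second planner.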
The limitations of AI safety tools
In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain "safety constraints." At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning. Since then, Safety Gym has been used to measure the performance of proposed algorithms from OpenAI as well as from researchers at the University of California, Berkeley and the University of Toronto. But some experts question whether AI "safety tools" are as effective as their creators purport them to be -- or whether they make AI systems safer in any sense. "OpenAI's Safety Gym doesn't feel like 'ethics washing' so much as maybe wishful thinking," Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email.
DRIVE Labs: Eliminating Collisions with Safety Force Field – NVIDIA Developer News Center
SFF is provably safe, in the sense that, if all road participants comply with SFF and the perception and vehicle controls are within expected design margins, then it can be mathematically proven that no collisions can occur.