Breakthrough in safety-critical machine learning could be just the beginning
Safety is the central focus of driverless vehicle systems development. Artificial intelligence (AI) is coming at us fast. It's being used in the apps and services we plug into daily without us really noticing, whether it's a personalized ad on Facebook or Google suggesting how to sign off your email. If these applications fail, the result is, at worst, some irritation for the user. But we are increasingly entrusting AI and machine learning with safety-critical applications, where system failure means far more than a minor UX issue.
AI researchers devise failure detection method for safety-critical machine learning
Researchers from MIT, Stanford University, and the University of Pennsylvania have devised a method for evaluating safety-critical machine learning systems: predicting their rare failures and efficiently estimating how often those failures occur. Safety-critical machine learning systems make decisions for automated technology like self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and planes. Unlike AI that helps you write an email or recommends a song, failures in safety-critical systems can result in serious injury or death. Problems with such machine learning systems can also cause financially costly events, like SpaceX missing its landing pad.

The researchers say their neural bridge sampling method gives regulators, academics, and industry experts a common reference for discussing the risks of deploying complex machine learning systems in safety-critical environments. In a paper titled "Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems," recently published on arXiv, the authors argue their approach can satisfy both the public's right to know that a system has been rigorously tested and an organization's desire to treat AI models as trade secrets.
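To give a sense of why efficiently estimating failure rates matters, the sketch below shows the naive alternative: plain Monte Carlo testing of a hypothetical simulated system. This is not the authors' neural bridge sampling method or code; the simulated safety margin and the failure threshold are stand-ins invented for illustration. The point is simply that when failures are rare, brute-force rollouts need enormous sample counts before the estimate is even non-zero, which is the cost adaptive evaluation schemes aim to cut.

```python
import numpy as np

# Illustrative sketch only: a naive Monte Carlo baseline, NOT the paper's
# neural bridge sampling method. The "system" below is a hypothetical
# stand-in for a real safety-critical policy under test.

rng = np.random.default_rng(0)

def simulated_system_margin(scenario):
    # Hypothetical safety margin: positive = safe, negative = failure.
    # A real evaluation would roll out the actual autonomous system
    # in a simulated or real test scenario.
    return 4.0 + scenario  # fails only when scenario < -4 (rare)

def naive_failure_rate(num_rollouts):
    scenarios = rng.standard_normal(num_rollouts)  # random test conditions
    margins = simulated_system_margin(scenarios)
    return np.mean(margins < 0.0)                  # observed failure fraction

# The true failure probability here is P[N(0,1) < -4], roughly 3e-5,
# so tens of thousands of rollouts will often report zero failures.
for n in (10_000, 1_000_000):
    print(f"{n:>9} rollouts -> estimated failure rate {naive_failure_rate(n):.2e}")
```

Run as-is, the small sample typically reports a failure rate of 0.00e+00 while the large one lands near 3e-05, illustrating the gap that more sample-efficient evaluation methods are designed to close.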