Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS)
Hawkins, Richard, Paterson, Colin, Picardi, Chiara, Jia, Yan, Calinescu, Radu, Habli, Ibrahim
arXiv.org Artificial Intelligence
Machine Learning (ML) is now used in a range of systems with results that are reported to exceed, under certain conditions, human performance. Many of these systems, in domains such as healthcare, automotive and manufacturing, exhibit high degrees of autonomy and are safety-critical. Establishing justified confidence in ML forms a core part of the safety case for these systems. In this document we introduce a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). AMLAS comprises a set of safety case patterns and a process for (1) systematically integrating safety assurance into the development of ML components and (2) generating the evidence base for explicitly justifying the acceptable safety of these components when integrated into autonomous system applications. The material in this document is provided as guidance only. No responsibility for loss occasioned to any person acting or refraining from action as a result of this material or any comments made can be accepted by the authors or The University of York.
Feb-2-2021
- Country:
- Europe > United Kingdom (0.28)
- Genre:
- Research Report > Experimental Study (1.00)
- Industry:
- Aerospace & Defense > Aircraft (0.67)
- Automobiles & Trucks (1.00)
- Health & Medicine
- Diagnostic Medicine (0.93)
- Therapeutic Area (1.00)
- Information Technology > Robotics & Automation (0.67)
- Transportation > Ground
- Road (1.00)
- Technology: