Practical Machine Learning Safety: A Survey and Primer
Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, Jay Yadawa
–arXiv.org Artificial Intelligence
Among different ML models, Deep Neural Networks (DNNs) [130] are well known and widely used for their powerful representation learning from high-dimensional data such as images, text, and speech. However, as ML algorithms enter sensitive real-world domains with trustworthiness, safety, and fairness prerequisites, the need for corresponding techniques and metrics in these high-stakes domains is more pressing than ever. Hence, researchers in different fields have proposed guidelines for Trustworthy AI [208], Safe AI [5], and Explainable AI [155] as stepping stones toward the next generation of Responsible AI [6, 247]. Furthermore, government reports and regulations on AI accountability [75], trustworthiness [216], and safety [31] are gradually producing binding laws that protect citizens' data privacy, ensure fair data processing, and uphold safety for AI-based products. The development and deployment of ML algorithms for open-world tasks come with reliability and dependability limitations rooted in shortcomings of model performance, robustness, and uncertainty estimation [156]. Unlike traditional code-based software, ML models have fundamental safety drawbacks, including performance limited to their training distribution and limited run-time robustness in their operational domain.
Jun-9-2021