A Case Against Mission-Critical Applications of Machine Learning

Communications of the ACM 

"How can we trust the networks?" They answered: "We know that a network is quite reliable when its inputs come from its training set. But these critical systems will have inputs corresponding to new, often unanticipated situations. There are numerous examples where a network gives poor responses for untrained inputs." David Lorge Parnas followed up on this discussion in his Letter to the Editor (Feb. …).

We wish to point out that machine-learning-based systems, including commercial ones performing safety-critical tasks, can fail not only under "unanticipated situations" (noted by Lewis and Denning) or "when it encounters data radically different from its training set" (noted by Parnas), but also under normal situations, even on data that is extremely similar to the training set. The Apollo self-driving team confirmed "it might happen" because the system was "deep learning trained." Now, after further investigation, we have found that in 24 of these 27 failed tests, the 10 random points ...
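The failure mode described above, a model answering differently on two nearly indistinguishable inputs, is easy to see in miniature. The following is an illustrative toy sketch, not the systems or tests discussed in the article: a fixed linear classifier whose decision boundary passes between two inputs that differ by only 0.0001 per coordinate, so the "extremely similar" inputs receive different labels.

```python
def classify(x, w=(1.0, -1.0), b=0.0):
    """Toy linear classifier: label 1 if w.x + b >= 0, else 0.

    The weights here are hand-picked for illustration; real learned
    models have far more complex boundaries, but the same geometry
    applies near any boundary.
    """
    score = w[0] * x[0] + w[1] * x[1] + b
    return 1 if score >= 0 else 0

# Two inputs differing by 0.0001 in each coordinate, straddling the boundary.
a = (0.5000, 0.5001)   # score = -0.0001 -> label 0
b_in = (0.5001, 0.5000)  # score = +0.0001 -> label 1

print(classify(a), classify(b_in))  # prints: 0 1
```

The point of the sketch is that closeness in input space guarantees nothing about agreement in output space: wherever a learned decision boundary lies, inputs arbitrarily close to it, and therefore arbitrarily close to each other, can be assigned opposite labels.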
