A Case Against Mission-Critical Applications of Machine Learning
Lewis and Denning asked: "How can we trust the networks?" They answered: "We know that a network is quite reliable when its inputs come from its training set. But these critical systems will have inputs corresponding to new, often unanticipated situations. There are numerous examples where a network gives poor responses for untrained inputs." David Lorge Parnas followed up on this discussion in his Letter to the Editor (Feb. ...). We wish to point out that machine learning-based systems, including commercial ones performing safety-critical tasks, can fail not only under "unanticipated situations" (noted by Lewis and Denning) or "when it encounters data radically different from its training set" (noted by Parnas), but also under normal situations, even on data that is extremely similar to the training set. The Apollo self-driving team confirmed that "it might happen" because the system was "deep learning trained." Now, after further investigation, we have found that in 24 of these 27 failed tests, the 10 random points ...
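The failure mode described above, errors on inputs extremely close to the training data, can be probed with a simple consistency check: re-predict on small random perturbations of each training point and count disagreements with the training label. The sketch below is purely illustrative (a toy nearest-centroid "model" and hypothetical helper names), not the actual Apollo experiment:

```python
import random

def train_nearest_centroid(samples):
    """Fit a toy nearest-centroid classifier.
    samples: dict mapping label -> list of (x, y) training points."""
    centroids = {}
    for label, pts in samples.items():
        n = len(pts)
        centroids[label] = (sum(p[0] for p in pts) / n,
                            sum(p[1] for p in pts) / n)
    return centroids

def predict(centroids, point):
    """Return the label whose centroid is nearest to `point`."""
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - point[0]) ** 2 +
                               (centroids[lab][1] - point[1]) ** 2)

def consistency_check(centroids, samples, eps=1e-3, trials=10, seed=0):
    """For each training point, sample `trials` random points within
    `eps` of it and count predictions that differ from the training
    label.  Returns (failures, total_probes)."""
    rng = random.Random(seed)
    failures, total = 0, 0
    for label, pts in samples.items():
        for (x, y) in pts:
            for _ in range(trials):
                probe = (x + rng.uniform(-eps, eps),
                         y + rng.uniform(-eps, eps))
                total += 1
                if predict(centroids, probe) != label:
                    failures += 1
    return failures, total
```

On well-separated toy data this check reports zero failures; the paper's point is that for real deep-learning systems it need not, even though every probe sits within a tiny ball around a training input.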
Jul-25-2019, 16:49:40 GMT