We're Making Progress in Explainable AI, but Major Pitfalls Remain
Machine learning algorithms are starting to exceed human performance in many narrow domains, such as image recognition and certain types of medical diagnosis. We increasingly rely on them to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets hired. But these algorithms typically cannot explain the decisions they make. How can we justify putting such systems in charge of decisions that affect people's lives if we don't understand how they arrive at them? The desire to get more than raw numbers out of machine learning has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and also tell you the reasons behind it.
Nov-18-2019, 16:37:55 GMT