Does this artificial intelligence think like a human?
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo. While tools exist to help experts make sense of a model's reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior.
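The aggregate-sort-rank idea described above can be illustrated with a minimal sketch. This is not the researchers' actual method; it assumes only that some per-example explainer (e.g. a saliency or SHAP-style tool) has already produced attribution scores for each prediction. The feature names and scores below are hypothetical. Averaging absolute attributions across many predictions and ranking the result can surface a spurious cue, like the unrelated blip on a clinical photo, that no single-example review would reliably catch.

```python
# Hypothetical per-example explanations: each dict maps a feature to its
# attribution score for one prediction, as emitted by any per-example
# explainer. Names and values here are illustrative only.
from collections import defaultdict

explanations = [
    {"lesion_texture": 0.62, "image_border_blip": 0.55, "lesion_color": 0.31},
    {"lesion_texture": 0.58, "image_border_blip": 0.49, "lesion_color": 0.40},
    {"lesion_texture": 0.70, "image_border_blip": 0.61, "lesion_color": 0.25},
]

def rank_features(explanations):
    """Aggregate per-example attributions and rank features by their
    mean absolute attribution across all predictions."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for exp in explanations:
        for feature, score in exp.items():
            totals[feature] += abs(score)
            counts[feature] += 1
    means = {f: totals[f] / counts[f] for f in totals}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for feature, mean_score in rank_features(explanations):
    print(f"{feature}: {mean_score:.3f}")
```

In this toy data, `image_border_blip` ranks second overall, a pattern that only becomes visible when explanations are pooled across predictions rather than inspected one at a time.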
April 6, 2022