When should someone trust an AI assistant's predictions?

#artificialintelligence

Researchers have created a method to help workers collaborate with artificial intelligence systems. In a busy hospital, a radiologist uses an artificial intelligence system to help her diagnose medical conditions based on patients' X-ray images. Using the AI system can help her make faster diagnoses, but how does she know when to trust the AI's predictions? There is no easy answer. Instead, she may rely on her expertise, on a confidence level reported by the system itself, or on an explanation of how the algorithm made its prediction -- which may look convincing but still be wrong. To help people better understand when to trust an AI "teammate," MIT researchers created an onboarding technique that guides humans to a more accurate understanding of the situations in which the machine makes correct predictions and those in which it makes incorrect ones. By showing people how the AI complements their abilities, the training technique could help them make better decisions, or reach conclusions faster, when working with AI agents. The research is supported by the U.S. National Science Foundation.
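To make the trust question concrete, here is a minimal sketch in Python of the naive policy the article warns against: accepting a prediction whenever the model's self-reported confidence is high, while tracking whether that confidence is actually borne out. This is not the MIT team's onboarding method; all names and the 0.9 cutoff are illustrative assumptions.

```python
# A minimal sketch, NOT the MIT onboarding method: trust the AI's prediction
# whenever its self-reported confidence clears a threshold, and track whether
# that confidence holds up once ground truth arrives. All names and the 0.9
# cutoff are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TrustTracker:
    threshold: float = 0.9     # assumed cutoff for "accept the AI's answer"
    accepted: int = 0          # predictions accepted on confidence alone
    accepted_correct: int = 0  # ...that later proved correct

    def should_trust(self, confidence: float) -> bool:
        """Naive policy: defer to the AI only when it sounds sure of itself."""
        return confidence >= self.threshold

    def record_outcome(self, confidence: float, was_correct: bool) -> None:
        """Once the true diagnosis is known, log how a trusted prediction fared."""
        if self.should_trust(confidence):
            self.accepted += 1
            self.accepted_correct += int(was_correct)

    def observed_accuracy(self) -> float:
        """Accuracy among trusted predictions. A value far below `threshold`
        means the model's confidence is miscalibrated and should not be
        taken at face value."""
        if self.accepted == 0:
            return float("nan")
        return self.accepted_correct / self.accepted
```

The point of the sketch is its failure mode: if accuracy among trusted cases drifts well below the threshold, the confidence score alone is misleading, which is the gap an onboarding process that shows users where the model succeeds and fails is meant to close.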


An automated health care system that understands when to step in

#artificialintelligence

In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using a combination of automated filtering and human moderators. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists better detect different forms of cancer. What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn't always merely a question of who does a task "better"; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.
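As a rough illustration of that trade-off, the sketch below (plain NumPy; the budget, names, and numbers are assumptions, not the systems described above) defers only the least confident predictions to the human, capped at the fraction of cases the human has time to review.

```python
# A minimal sketch, assuming a classifier that exposes class probabilities,
# of a bandwidth-capped "ask the human" rule: escalate only the least
# confident cases, and only as many as the human can review. The budget and
# all names are illustrative assumptions.

import numpy as np


def defer_mask(probs: np.ndarray, human_budget: float = 0.1) -> np.ndarray:
    """probs: (n_samples, n_classes) predicted probabilities per case.
    Returns a boolean mask where True means "send this case to the human"."""
    confidence = probs.max(axis=1)            # model's top-class confidence
    n_defer = int(human_budget * len(probs))  # cases the human can absorb
    mask = np.zeros(len(probs), dtype=bool)
    if n_defer > 0:
        # Escalate the n_defer least confident predictions; automate the rest.
        mask[np.argsort(confidence)[:n_defer]] = True
    return mask


# With a 50% budget on four cases, only the two shakiest reach the human.
probs = np.array([[0.98, 0.02], [0.55, 0.45], [0.80, 0.20], [0.51, 0.49]])
print(defer_mask(probs, human_budget=0.5))  # [False  True False  True]
```

Training the deferral rule jointly with the classifier, rather than thresholding after the fact, is the harder version of this problem.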

