
Collaborating Authors

Ghassemi


AI may issue harsher punishments, more severe judgments than humans: Study

FOX News

Chris Winfield, founder of Understanding A.I., tells 'Fox & Friends Weekend' host Will Cain about a study showing patients preferred medical answers from artificial intelligence over doctors. Artificial intelligence fails to match humans in judgment calls and is more prone to issue harsher penalties and punishments for rule breakers, according to a new study from MIT researchers. The finding could have real-world implications if AI systems are used to predict the likelihood of a criminal reoffending, which could lead to longer jail sentences or bail set at higher amounts, the study said. Researchers at the Massachusetts university, as well as Canadian universities and nonprofits, studied machine-learning models and found that when AI is not trained properly, it makes more severe judgment calls than humans. In the study, human participants labeled photos or text, and their responses were used to train AI systems.


Researchers analyse if AI is working the way it was meant to - The EE

#artificialintelligence

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer. These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain. As the field of machine learning has grown, artificial neural networks have grown along with it. Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don't fully understand how they work.
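
To make the phrase "layers of interconnected nodes" concrete, here is a minimal illustrative sketch (not code from any of the studies mentioned): a tiny feedforward network in Python and NumPy whose layer sizes and activation functions are arbitrary choices made for the example.

```python
# A minimal, illustrative sketch: a tiny feedforward network whose layers of
# interconnected nodes map an input vector to class probabilities.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# 4 input features -> two hidden layers of 8 nodes each -> 3 output classes.
sizes = [4, 8, 8, 3]
weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """One forward pass: each layer's nodes combine the previous layer's outputs."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)                       # hidden layers
    return softmax(a @ weights[-1] + biases[-1])  # output layer: class probabilities

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))   # three probabilities summing to 1
```

Real deep-learning models differ mainly in scale: the same layered structure, but with millions or billions of learned weights rather than a handful of random ones.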


How to tell if artificial intelligence is working the way we want it to

#artificialintelligence

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer. These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain. As the field of machine learning has grown, artificial neural networks have grown along with it. Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data.


Explained: How to tell if artificial intelligence is working the way we want it to

#artificialintelligence

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer. These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain. As the field of machine learning has grown, artificial neural networks have grown along with it. Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don't fully understand how they work.


AI Machine-Learning: In Bias We Trust?

#artificialintelligence

MIT researchers find that the explanation methods designed to help users determine whether to trust a machine-learning model's predictions can perpetuate biases and lead to worse outcomes for people from disadvantaged groups. According to a new study, explanation methods that help users determine whether to trust machine-learning model predictions can be less accurate for disadvantaged subgroups. Machine-learning algorithms are sometimes employed to assist human decision-makers when the stakes are high. For example, a model may predict which law school candidates are most likely to pass the bar exam, assisting admissions officers in deciding which students to admit. Because these models are so complex, often with millions of parameters, it is nearly impossible even for AI researchers to fully understand how they make predictions.
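
One way to make the finding concrete is "fidelity": how often a simple explanation model agrees with the black-box model it is supposed to explain, computed separately for each subgroup. The sketch below is a hypothetical illustration on synthetic data; the surrogate-tree setup and the group labels are assumptions for the example, not the researchers' actual method.

```python
# Hypothetical sketch (synthetic data): measure how often a simple surrogate
# "explanation" model agrees with a black-box model, reported per subgroup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 6))
group = (rng.random(n) < 0.2).astype(int)   # 1 marks an (assumed) minority subgroup
# The outcome depends on different features for the smaller group, purely for illustration.
y = ((np.where(group == 1, X[:, 3] ** 2 - 1.0, X[:, 1]) + X[:, 0]) > 0.5).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow tree stands in for a simple, human-readable explanation of the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))

for g in (0, 1):
    mask = group == g
    fidelity = (surrogate.predict(X[mask]) == black_box.predict(X[mask])).mean()
    print(f"group {g}: surrogate matches black box on {fidelity:.1%} of cases")
```

If the surrogate tracks the black box noticeably less well for one group, users relying on the explanation are being misled more often for exactly that group, which is the failure mode the study highlights.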


Exploring emerging topics in artificial intelligence policy

#artificialintelligence

Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies. The virtual event, hosted by the AI Policy Forum (AIPF) -- an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing -- brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility. In the last year, there have been substantial changes in the regulatory and policy landscape around AI in several countries -- most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own. Each of these developments represents a different approach to legislating AI, but what makes a good AI law?


'It's not going to work': Keeping race out of machine learning isn't enough to avoid bias

#artificialintelligence

As more machine learning tools reach patients, developers are starting to get smart about the potential for bias to seep in. But a growing body of research aims to emphasize that even carefully trained models -- ones built to ignore race -- can breed inequity in care. Researchers at the Massachusetts Institute of Technology and IBM Research recently showed that algorithms based on clinical notes -- the free-form text providers jot down during patient visits -- could predict the self-identified race of a patient, even when the data had been stripped of explicit mentions of race. It's a clear sign of a big problem: Race is so deeply embedded in clinical information that straightforward approaches like race redaction won't cut it when it comes to making sure algorithms aren't biased. "People have this misconception that if they just include race as a variable or don't include race as a variable, it's enough to deem a model to be fair or unfair," said Suchi Saria, director of the machine learning and health care lab at Johns Hopkins University and CEO of Bayesian Health.
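
As a rough illustration of the kind of experiment described (not the MIT and IBM teams' actual pipeline), the sketch below removes explicit race terms from free-text notes and then checks whether a plain bag-of-words classifier can still predict self-identified race. The term list, model choice, and data handling are placeholders; the note corpus is assumed to exist elsewhere.

```python
# Hypothetical sketch: redact explicit race terms from clinical notes, then test
# whether a simple text classifier can still recover self-identified race.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder term list; a real redaction pass would be far more thorough.
RACE_TERMS = re.compile(
    r"\b(black|white|african[- ]american|caucasian|hispanic|latino|asian)\b",
    re.IGNORECASE,
)

def redact(note: str) -> str:
    """Remove explicit mentions of race from a free-text note."""
    return RACE_TERMS.sub("[REDACTED]", note)

def race_recoverable_auc(notes, self_identified_race):
    """Cross-validated AUC for predicting self-identified race (0/1) from redacted notes.

    An AUC well above 0.5 would echo the study's point: the race signal survives redaction.
    """
    redacted = [redact(n) for n in notes]
    probe = make_pipeline(TfidfVectorizer(min_df=5), LogisticRegression(max_iter=1000))
    return cross_val_score(probe, redacted, self_identified_race,
                           cv=5, scoring="roc_auc").mean()
```

The point of such a probe is diagnostic: if even a simple model can recover race from "race-blind" notes, any downstream model trained on those notes can implicitly use race too.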


MIT Researcher Explores The Downside Of Machine Learning In Healthcare - Liwaiwai

#artificialintelligence

While working toward her dissertation in Computer Science, Marzyeh Ghassemi PhD '17 wrote some papers on how machine learning techniques from AI could be applied to clinical data in order to predict patient outcomes. "It wasn't until the end of my PhD work that one of my committee members asked: 'Did you ever check to see how well your model worked across different groups of people?'" That question was eye-opening for Ghassemi, who had previously assessed the performance of models in aggregate, across all patients. Upon a closer look, she saw that models often worked differently, specifically worse, for minority groups such as Black women--a revelation that took her by surprise. "I hadn't made the connection beforehand that health disparities would translate directly to model disparities," she says. "And given that I am a visible minority woman-identifying computer scientist at MIT, I am reasonably certain that many others weren't aware of this either."
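
The committee member's question amounts to a simple disaggregated evaluation: compute the same metric per group instead of only in aggregate. A minimal sketch, assuming you already have model scores, true outcomes, and a group label for each patient (the function and variable names are illustrative, not from the papers):

```python
# Hypothetical sketch: report one metric overall and again within each patient group,
# so a model that "looks fine on average" can still be flagged for a specific group.
import numpy as np
from sklearn.metrics import roc_auc_score

def performance_by_group(y_true, y_score, group):
    """Return overall AUC plus AUC computed separately within each group label.

    Assumes each group contains both outcome classes; otherwise its AUC is undefined.
    """
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    results = {"overall": roc_auc_score(y_true, y_score)}
    for g in np.unique(group):
        mask = group == g
        results[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
    return results

# Usage: y_true = observed outcomes, y_score = model risk scores,
# group = e.g. self-reported race or sex for each patient.
```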


Hidden biases in medical data could compromise AI approaches to healthcare

#artificialintelligence

While working toward her dissertation in computer science at MIT, Marzyeh Ghassemi wrote several papers on how machine-learning techniques from artificial intelligence could be applied to clinical data in order to predict patient outcomes. "It wasn't until the end of my Ph.D. work that one of my committee members asked: 'Did you ever check to see how well your model worked across different groups of people?'" That question was eye-opening for Ghassemi, who had previously assessed the performance of models in aggregate, across all patients. Upon a closer look, she saw that models often worked differently--specifically worse--for populations including Black women, a revelation that took her by surprise. "I hadn't made the connection beforehand that health disparities would translate directly to model disparities," she says. "And given that I am a visible minority woman-identifying computer scientist at MIT, I am reasonably certain that many others weren't aware of this either." In a paper published Jan. 14 in the journal Patterns, Ghassemi--who earned her doctorate in 2017 and is now an assistant professor in the Department of Electrical Engineering and Computer Science and the MIT Institute for Medical Engineering and Science (IMES)--and her coauthor, Elaine Okanyene Nsoesie of Boston University, offer a cautionary note about the prospects for AI in medicine. "If used carefully, this technology could improve performance in health care and potentially reduce inequities," Ghassemi says. "But if we're not actually careful, technology could worsen care." It all comes down to data, given that the AI tools in question train themselves by processing and analyzing vast quantities of data. But the data they are given are produced by humans, who are fallible and whose judgments may be clouded by the fact that they interact differently with patients depending on their age, gender, and race, without even knowing it. Furthermore, there is still great uncertainty about medical conditions themselves. "Doctors trained at the same medical school for 10 years can, and often do, disagree about a patient's diagnosis," Ghassemi says. That's different from the applications where existing machine-learning algorithms excel--like object-recognition tasks--because practically everyone in the world will agree that a dog is, in fact, a dog. Machine-learning algorithms have also fared well in mastering games like chess and Go, where both the rules and the "win conditions" are clearly defined. Physicians, however, don't always concur on the rules for treating patients, and even the win condition of being "healthy" is not widely agreed upon. "Doctors know what it means to be sick," Ghassemi explains, "and we have the most data for people when they are sickest."


The downside of machine learning in health care

#artificialintelligence

While working toward her dissertation in computer science at MIT, Marzyeh Ghassemi wrote several papers on how machine-learning techniques from artificial intelligence could be applied to clinical data in order to predict patient outcomes. "It wasn't until the end of my PhD work that one of my committee members asked: 'Did you ever check to see how well your model worked across different groups of people?'" That question was eye-opening for Ghassemi, who had previously assessed the performance of models in aggregate, across all patients. Upon a closer look, she saw that models often worked differently -- specifically worse -- for populations including Black women, a revelation that took her by surprise. "I hadn't made the connection beforehand that health disparities would translate directly to model disparities," she says. "And given that I am a visible minority woman-identifying computer scientist at MIT, I am reasonably certain that many others weren't aware of this either."