Data can be a 'force for evil,' AI and machine learning experts say
The COVID-19 pandemic has highlighted and exacerbated existing disparities in the healthcare system, including the consequences of bias for racialized or marginalized groups. Some of the ways racial bias in the healthcare system emerges are more obvious, such as horror stories of Black people being turned away at emergency departments. Others, experts said during the HIMSS Machine Learning and AI for Healthcare Digital Summit this week, are less visible – but can still be incredibly harmful. "There are other ways this bias manifests structurally that are not as potentially sort of obvious," said Kadija Ferryman, industry assistant professor of ethics and engineering at the NYU Tandon School of Engineering, at a panel on Tuesday. "That is through informatics and data."
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.53)
- Health & Medicine > Therapeutic Area > Immunology (0.53)
- Health & Medicine > Epidemiology (0.53)
Ethical Machine Learning in Health Care
Chen, Irene Y., Pierson, Emma, Rose, Sherri, Joshi, Shalmali, Ferryman, Kadija, Ghassemi, Marzyeh
The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > New York > Kings County > New York City (0.14)
- (9 more...)
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (0.93)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
A fairer way forward for AI in health care
When data scientists in Chicago, Illinois, set out to test whether a machine-learning algorithm could predict how long people would stay in hospital, they thought that they were doing everyone a favour. Keeping people in hospital is expensive, and if managers knew which patients were most likely to be eligible for discharge, they could move them to the top of doctors' priority lists to avoid unnecessary delays. It would be a win–win situation: the hospital would save money and people could leave as soon as possible. Starting their work at the end of 2017, the scientists trained their algorithm on patient data from the University of Chicago academic hospital system. Taking data from the previous three years, they crunched the numbers to see what combination of factors best predicted length of stay.
- North America > United States > Illinois > Cook County > Chicago (0.47)
- Asia > India (0.05)
- North America > United States > New York (0.04)
- (7 more...)
What if AI in health care is the next asbestos? - STAT
Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible. But what if it turns out to be poison? Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. "I think of machine learning kind of as asbestos," he said. "It turns out that it's all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it's already too hard to get it all out."
- North America > United States > North Carolina (0.05)
- North America > United States > New York (0.05)
- Asia > Afghanistan (0.05)
- Health & Medicine > Therapeutic Area (1.00)
- Education > Educational Setting > Higher Education (0.55)
- Education > Curriculum > Subject-Specific Education (0.55)