Use of artificial intelligence to tackle coronavirus must go through ethical checks, say experts


Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.

So argue researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (CFI) in two articles, published today in the British Medical Journal, cautioning against blinkered use of AI for data-gathering and medical decision-making as we fight to regain some normalcy in 2021.

"Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic," the researchers write. "The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology."

In a further paper, co-authored by CFI's Dr Alexa Hagerty, researchers highlight the potential consequences of AI now making clinical choices at scale - predicting deterioration rates of patients who might need ventilation, for example - if it does so based on biased data. Datasets used to "train" and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of "lower socioeconomic status".

"COVID-19 has already had a disproportionate impact on vulnerable communities.
