Why it's a problem that pulse oximeters don't work as well on patients of color

#artificialintelligence

Pulse oximetry is a noninvasive test that measures the oxygen saturation level in a patient's blood, and it has become an important tool for monitoring many patients, including those with Covid-19. But new research links faulty readings from pulse oximeters with racial disparities in health outcomes, potentially leading to higher rates of death and complications, such as organ dysfunction, in patients with darker skin. It is well known that non-white intensive care unit (ICU) patients receive less-accurate readings of their oxygen levels from pulse oximeters -- the common devices clamped on patients' fingers. Now, a paper co-authored by MIT scientists reveals that inaccurate pulse oximeter readings can lead to critically ill patients of color receiving less supplemental oxygen during ICU stays. The paper, "Assessment of Racial and Ethnic Differences in Oxygen Supplementation Among Patients in the Intensive Care Unit," published in JAMA Internal Medicine, asked whether differences in supplemental oxygen administration among patients of different races and ethnicities were associated with discrepancies in pulse oximeter performance.


New Study uses Federated Learning to Predict Covid-19 Outcomes

#artificialintelligence

Many ethical and legal challenges surround COVID-19 data analysis, including data ownership, data security, and privacy issues. As a result, healthcare providers have typically preferred models validated on their own data. However, this limits the scope of analysis that can be performed, often resulting in AI models that lack diversity, suffer from overfitting, and generalize poorly. One recent study, "Federated learning for predicting clinical outcomes in patients with COVID-19," published in the September 15 issue of Nature Medicine [1], offered a solution to these problems: federated learning (FL). FL is a privacy-preserving approach in which a shared model is trained across heterogeneous, distributed networks without pooling the underlying data [2].
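To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical FL training loop: each client trains on its own private data, and a central server averages the resulting model weights. The three simulated "hospitals," the linear model, and all parameter values below are illustrative assumptions, not the actual setup used in the Nature Medicine study.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(clients, w, rounds=20):
    """Each round: clients train locally, the server averages the weights.

    Only model weights cross the network; raw patient data never leaves
    the client.
    """
    n_total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        # Weighted average by client dataset size (the FedAvg rule).
        w = sum(len(y) / n_total * u for (_, y), u in zip(clients, updates))
    return w

# Simulate three hospitals, each holding a private slice of the data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = fed_avg(clients, np.zeros(2))
```

After a few rounds the globally averaged weights approach the solution that centralized training would find, which is the appeal of FL for multi-institution clinical studies: each site keeps its data while contributing to one shared model.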


Researchers call for bias-free artificial intelligence

#artificialintelligence

Clinicians and surgeons are increasingly using medical devices based on artificial intelligence. These AI devices, which rely on data-driven algorithms to inform health care decisions, presently aid in diagnosing cancers, heart conditions and diseases of the eye, with many more applications on the way. In a new study, Stanford faculty discuss sex, gender and race bias in medical technologies. Pulse oximeters, for example, are more likely to incorrectly report blood oxygen levels in dark-skinned individuals and in women. Given this surge in AI, two Stanford University faculty members are calling for efforts to ensure that this technology does not exacerbate existing health care disparities.


Facebook claims its AI can predict four days in advance if a coronavirus patient's condition will deteriorate

Daily Mail - Science & tech

Facebook claims to have designed software capable of predicting whether a coronavirus patient's health will deteriorate, or whether they will need oxygen, just by scanning their chest X-rays. Working with New York University (NYU), the social media firm says the system can forecast such developments four days in advance. Together they have built three machine-learning models to help doctors better prepare as cases around the world continue to rise. One model is designed to predict deterioration from a single chest X-ray, another does the same from a series of X-rays, and the third uses an X-ray to determine if, and how much, supplemental oxygen a patient may need.