The artificial intelligence model showed great promise in predicting which patients treated in U.S. Veterans Affairs hospitals would experience a sudden decline in kidney function. But it also came with a crucial caveat: Women represented only about 6% of the patients whose data were used to train the algorithm, and it performed worse when tested on women.

The shortcomings of that high-profile algorithm, built by the Google sister company DeepMind, highlight a problem that machine learning researchers working in medicine are increasingly worried about. And it's an issue that may be more pervasive -- and more insidious -- than experts previously realized, new research suggests.

The study, led by researchers in Argentina and published Monday in the journal PNAS, found that when female patients were excluded from or significantly underrepresented in the training data used to develop a machine learning model, the algorithm performed worse in diagnosing them when tested across a wide range of medical conditions affecting the chest area.
Jun-2-2020, 02:20:36 GMT