Health-related artificial intelligence needs rigorous evaluation and guardrails
Algorithms can augment human decision-making by integrating and analyzing more data, and more kinds of data, than a human can comprehend. But to realize the full potential of artificial intelligence (AI) and machine learning (ML) for patients, researchers must foster greater confidence in the accuracy, fairness, and usefulness of clinical AI algorithms.

Getting there will require guardrails -- along with a commitment from AI developers to use them -- that ensure consistency and adherence to the highest standards when creating and using clinical AI tools. Such guardrails would not only improve the quality of clinical AI but would also instill confidence among patients and clinicians that the tools deployed are reliable and trustworthy.

STAT, along with researchers from MIT, recently demonstrated that even "subtle shifts in data fed into popular health care algorithms -- used to warn caregivers of impending medical crises -- can cause their accuracy to plummet over time."
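The drift phenomenon described above can be illustrated with a minimal, hypothetical sketch (not the actual algorithms STAT and MIT examined): a model trained on one data distribution silently loses accuracy when an upstream change -- say, a new assay or unit recalibration -- shifts the values it receives, with no retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: a single lab value separating
# "stable" patients (class 0) from "at risk" patients (class 1).
x0_train = rng.normal(1.0, 0.5, 1000)  # stable
x1_train = rng.normal(3.0, 0.5, 1000)  # at risk

# Toy classifier: threshold at the midpoint of the class means.
threshold = (x0_train.mean() + x1_train.mean()) / 2

def accuracy(x0, x1):
    # Fraction of both classes placed on the correct side of the threshold.
    correct = (x0 < threshold).sum() + (x1 >= threshold).sum()
    return correct / (len(x0) + len(x1))

# Fresh test data drawn from the same distributions as training.
x0_test = rng.normal(1.0, 0.5, 500)
x1_test = rng.normal(3.0, 0.5, 500)
acc_before = accuracy(x0_test, x1_test)

# Subtle drift: an upstream change shifts every incoming value by +1.0.
# The deployed model is unchanged, so its fixed threshold is now wrong.
drift = 1.0
acc_after = accuracy(x0_test + drift, x1_test + drift)

print(f"accuracy before drift: {acc_before:.2f}")
print(f"accuracy after drift:  {acc_after:.2f}")
```

Nothing in the code errors or alerts when the shift occurs -- which is precisely why the guardrails argued for above include ongoing monitoring of deployed models, not just pre-deployment validation.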
Mar-21-2022, 04:10:15 GMT