How Should The FDA Go About Regulating Adaptive AI? - AI Summary
Picture this: As a Covid-19 patient fights for her life on a ventilator, software powered by artificial intelligence analyzes her vital signs and sends her care providers drug-dosing recommendations -- even as the same software simultaneously analyzes, in real time, the vital signs of thousands of other ventilated patients across the country to learn how dosage affects their care, and automatically implements improvements to its drug-dosing algorithm.

When an algorithm encounters a real-world clinical setting, adaptive AI can allow it to learn from these new data and incorporate clinician feedback to optimize its performance. Rather than unleashing the algorithm entirely, an approach called artificial self-control lets a manufacturer put adaptive AI on a longer leash, allowing the algorithm to explore within a defined space to find its optimal operating point.

When the algorithm is ready to incorporate what it has learned from real-world data about how drug-dosing information has affected other ventilated patients, it first goes through a controlled revalidation process: it automatically tests its performance on a random sample drawn from a large test dataset in the cloud, one the manufacturer has carefully curated to be representative of the overall population and to contain high-quality information about drug dosing and patient outcomes. Each test is logged, and the data points used are carefully controlled to ensure that the algorithm is not simply getting better at predicting the answers in one small test set (a common problem in machine learning called overfitting) but is truly improving its performance.
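The controlled revalidation described above can be illustrated with a minimal sketch. Everything here is hypothetical: the function name `revalidation_gate`, the accuracy-style score, and the toy data are illustrative assumptions, not the actual system's design. The key ideas it demonstrates are (1) scoring an updated algorithm on a fresh random sample of a curated held-out pool, so the update cannot overfit one fixed small test set, (2) logging the test, and (3) approving the update only if it beats the current baseline.

```python
import random
from datetime import datetime, timezone

def revalidation_gate(candidate_model, baseline_score, test_pool,
                      sample_size=200, seed=0):
    """Hypothetical controlled-revalidation step.

    Scores `candidate_model` on a random sample of the curated,
    representative `test_pool` and returns a logged result. Drawing a
    different random subset per release (via `seed`) guards against
    the model merely memorizing one small test set (overfitting).
    """
    rng = random.Random(seed)
    sample = rng.sample(test_pool, min(sample_size, len(test_pool)))

    # Score = fraction of cases where the candidate's dosing
    # recommendation matches the recorded outcome label.
    hits = sum(1 for case in sample
               if candidate_model(case["vitals"]) == case["label"])
    score = hits / len(sample)

    # Every revalidation run is logged, pass or fail.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sample_size": len(sample),
        "score": score,
        "baseline": baseline_score,
        "approved": score > baseline_score,  # deploy only on improvement
    }

# Toy usage with synthetic cases: a model that matches the labels is
# approved against an 0.80 baseline; one that ignores them is not.
pool = [{"vitals": i, "label": i % 2} for i in range(1000)]
good = lambda v: v % 2   # always matches the label
bad = lambda v: 0        # matches only about half the cases
assert revalidation_gate(good, 0.80, pool)["approved"]
assert not revalidation_gate(bad, 0.80, pool)["approved"]
```

A real gate would of course use clinically validated outcome metrics and an audited logging pipeline; the point of the sketch is only the sample-score-log-approve loop.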
Nov-20-2022, 00:50:06 GMT