AI Doesn't Ask Why -- But Physicians And Drug Developers Want To Know
At long last, we seem to be on the threshold of departing the earliest phase of AI, defined by the always tedious "will AI replace doctors/drug developers/occupation X?" discussion, and are poised to enter the more considered conversation of "Where will AI be useful?"

As I've watched this evolution in both drug discovery and medicine, I've come to appreciate that in addition to the many technical barriers often considered, there's a critical conceptual barrier as well: the threat some AI-based approaches pose to our "explanatory models" (a construct developed by physician-anthropologist Arthur Kleinman, and nicely explained by Dr. Namratha Kandula here) -- our need to ground so much of our thinking in models that mechanistically connect tangible observation and outcome. AI, in contrast, often relates imperceptible observations to outcome in a fashion that's unapologetically oblivious to mechanism, which challenges physicians and drug developers by explicitly severing utility from foundational scientific understanding.

A physician examines her patient and tries to integrate her observations -- what she sees, feels, hears, and is told -- with what she learns from laboratory and radiological tests -- sodium level, CT scans -- to formulate an understanding of what's wrong with her patient, and to fashion a treatment approach. The idea is that this process of understanding what's wrong and developing a therapeutic plan is fundamentally rooted in science.
Feb-25-2019, 16:04:20 GMT