Medical Decision



A new law in this state bans automated insurance claim denials

FOX News

'Ask Dr. Drew' host Dr. Drew Pinsky breaks down key takeaways from the MAHA Commission's chronic disease report on 'The Ingraham Angle.' As some health insurance companies have come under fire for allegedly using computer systems to shoot down claims, an Arizona law will soon make the practice illegal in the Grand Canyon State. Republican Arizona House Majority Whip Rep. Julie Willoughby sponsored the legislation, and it was recently signed into law by Democratic Gov. Katie Hobbs. House Bill 2175 requires a physician licensed in the state to conduct an "individual review" and use "independent medical judgment" to determine whether a claim should actually be denied. It also requires a similar review of "a direct denial of a prior authorization of a service" that a provider requested and that "involves medical necessity."


MedDec: A Dataset for Extracting Medical Decisions from Discharge Summaries

Elgaar, Mohamed, Cheng, Jiali, Vakil, Nidhi, Amiri, Hadi, Celi, Leo Anthony

arXiv.org Artificial Intelligence

Medical decisions directly impact individuals' health and well-being. Extracting decision spans from clinical notes plays a crucial role in understanding medical decision-making processes. In this paper, we develop a new dataset called "MedDec", which contains clinical notes of eleven different phenotypes (diseases) annotated by ten types of medical decisions. We introduce the task of medical decision extraction, aiming to jointly extract and classify different types of medical decisions within clinical notes. We provide a comprehensive analysis of the dataset, develop a span detection model as a baseline for this task, evaluate recent span detection approaches, and employ a few metrics to measure the complexity of data samples. Our findings shed light on the complexities inherent in clinical decision extraction and enable future work in this area of research. The dataset and code are available through https://github.com/CLU-UML/MedDec.
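The span extraction task the abstract introduces can be pictured as token-level tagging followed by span decoding. The sketch below is a hypothetical illustration: the label names (`DRUG`, `TEST`) and the example sentence are invented for clarity and are not MedDec's actual ten decision categories or its baseline model.

```python
# Minimal sketch of decision span extraction as BIO tag decoding.
# Labels and example are illustrative, not MedDec's actual schema.

def extract_spans(tokens, tags):
    """Collect (decision_type, text) spans from BIO tags like B-DRUG / I-DRUG / O."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:                      # close any span in progress
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]  # start a new span
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)              # continue the current span
        else:
            if current:                      # O tag (or mismatch) ends the span
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:                              # flush a span that ends the sentence
        spans.append((ctype, " ".join(current)))
    return spans

tokens = "Started patient on metformin and ordered a chest X-ray".split()
tags   = ["O", "O", "O", "B-DRUG", "O", "O", "O", "B-TEST", "I-TEST"]
print(extract_spans(tokens, tags))  # [('DRUG', 'metformin'), ('TEST', 'chest X-ray')]
```

In practice the per-token tags would come from a trained sequence labeler; the decoding step above is what turns its token predictions into the jointly extracted and classified decision spans.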


Can We Trust ChatGPT and Artificial Intelligence to Do Humans' Work?

#artificialintelligence

An AI-generated image created using the prompt: "artificial intelligence in the future." Image created by Shutterstock's AI image generator This article was not written by ChatGPT. The artificial intelligence–powered chatbot--which can generate essays and articles with a simple prompt, have natural-sounding conversations, debug computer code, write songs, and even draft Congressional floor speeches--has quickly become a phenomenon. Developed by the Microsoft-backed OpenAI, the computer program reportedly hit 100 million users in January alone and has been called an AI breakthrough. Its apparent prowess--in one study, it fooled respected scientists into believing its fake research paper abstracts--has left professional writers feeling nervous and spooked Google into urgently ramping up its own AI efforts.


Here's How An Algorithm Guides A Medical Decision - AI Summary

#artificialintelligence

Artificial intelligence tools are complicated computer programs that suck in vast amounts of data, search for patterns or trajectories, and make a prediction or recommendation to help guide a decision. Patients don't need to understand these algorithms at a data-scientist level, but it's still useful for people to have a general idea of how AI-based healthcare tools work, says Suresh Balu, program director at the Duke Institute for Health Innovation. Some patients can get a little jumpy when they hear algorithms are being used in their care, says Mark Sendak, a data scientist at the Duke Institute for Health Innovation. We picked an algorithm that flags patients in the early stages of sepsis -- a life-threatening complication of infection that causes widespread inflammation throughout the body. The algorithm we're looking at underpins a program called Sepsis Watch, which Sendak and Balu helped develop at Duke University.


Here's how an algorithm guides a medical decision

#artificialintelligence

Artificial intelligence algorithms are everywhere in healthcare. They sort through patients' data to predict who will develop medical conditions like heart disease or diabetes, they help doctors figure out which people in an emergency room are the sickest, and they screen medical images to find evidence of diseases. But even as AI algorithms become more important to medicine, they're often invisible to people receiving care. Artificial intelligence tools are complicated computer programs that suck in vast amounts of data, search for patterns or trajectories, and make a prediction or recommendation to help guide a decision. Sometimes, the way algorithms process all of the information they're taking in is a black box -- inscrutable even to the people who designed the program.


The First AI Breast Cancer Sleuth That Shows Its Work

#artificialintelligence

Computer engineers and radiologists at Duke University have developed an artificial intelligence platform to analyze potentially cancerous lesions in mammography scans to determine if a patient should receive an invasive biopsy. But unlike its many predecessors, this algorithm is interpretable, meaning it shows physicians exactly how it came to its conclusions. The researchers trained the AI to locate and evaluate lesions just like an actual radiologist would be trained, rather than allowing it to freely develop its own procedures, giving it several advantages over its "black box" counterparts. It could make for a useful training platform to teach students how to read mammography images. It could also help physicians in sparsely populated regions around the world who do not regularly read mammography scans make better health care decisions.


For Patients to Trust Medical AI, They Need to Understand It

#artificialintelligence

Artificial intelligence-enabled health applications for diagnostic care are becoming widely available to consumers; some can even be accessed via smartphones. Google, for instance, recently announced its entry into this market with an AI-based tool that helps people identify skin, hair, and nail conditions. A major barrier to the adoption of these technologies, however, is that consumers tend to trust medical AI less than human health care providers. They believe that medical AI fails to cater to their unique needs and performs worse than comparable human providers, and they feel that they cannot hold AI accountable for mistakes in the same way they could a human. This resistance to AI in the medical domain poses a challenge to policymakers who wish to improve health care and to companies selling innovative health services.


The Ethical Implications of Shared Medical Decision Making without Providing Adequate Computational Support to the Care Provider and to the Patient

Shahar, Yuval

arXiv.org Artificial Intelligence

There is a clear need to involve patients in medical decisions. However, cognitive psychological research has highlighted the cognitive limitations of humans with respect to 1. probabilistic assessment of the patient state and of the potential outcomes of various decisions, 2. elicitation of the patient utility function, and 3. integration of the probabilistic knowledge and of patient preferences to determine the optimal strategy. Therefore, without adequate computational support, current shared decision models have severe ethical deficiencies. An informed consent model unfairly transfers responsibility to a patient who has neither the necessary knowledge nor the integration capability. A paternalistic model endows a physician with exaggerated power, although that physician might be unaware of the patient's preferences, is prone to multiple cognitive biases, and has bounded computational integration capability. Recent progress in Artificial Intelligence suggests adding a third agent, a computer, to all deliberative medical decisions: non-emergency medical decisions in which more than one alternative exists, the patient preferences can be elicited, the therapeutic alternatives might be influenced by these preferences, medical knowledge exists regarding the likelihood of the decision outcomes, and there is sufficient decision time. Ethical physicians should exploit computational decision support technologies, neither making the decisions solely on their own, nor shirking their duty and shifting the responsibility to patients in the name of informed consent. The resulting three-way (patient, care provider, computer) human-machine model that we suggest emphasizes the patient's preferences, the physician's knowledge, and the computational integration of both; it does not diminish the physician's role, but rather brings out the best in human and machine.
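The integration step the abstract describes, combining outcome probabilities (the physician's knowledge) with an elicited patient utility function, can be sketched as expected-utility maximization. The actions, outcomes, and numbers below are hypothetical illustrations, not taken from the paper:

```python
# Illustrative sketch: computational integration of outcome probabilities
# and patient utilities via expected-utility maximization.

def best_decision(p_outcomes, utilities):
    """Pick the action with the highest expected utility.

    p_outcomes: {action: {outcome: probability}}  -- medical knowledge
    utilities:  {outcome: utility in [0, 1]}      -- elicited patient preferences
    """
    def expected_utility(action):
        return sum(p * utilities[o] for o, p in p_outcomes[action].items())
    return max(p_outcomes, key=expected_utility)

# Hypothetical numbers: surgery is riskier but offers a more valued outcome.
p_outcomes = {
    "surgery":    {"cured": 0.7, "complication": 0.3},
    "medication": {"improved": 0.9, "no_change": 0.1},
}
utilities = {"cured": 1.0, "complication": 0.1, "improved": 0.6, "no_change": 0.3}

print(best_decision(p_outcomes, utilities))  # EU: surgery 0.73 vs medication 0.57
```

The point of the three-way model is precisely this division of labor: the physician supplies `p_outcomes`, the patient supplies `utilities`, and the computer performs the integration that neither party can reliably do unaided.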