Doctors are increasingly using AI to transcribe, read, analyze, and make predictions from notes and conversations between physicians and their patients. This opens up new possibilities for care, along with new concerns about privacy, according to a recent account from Axios. A big and largely invisible contribution AI can make is capturing a physician's written or spoken notes automatically. Spending hours entering data manually into electronic health records (EHRs) does not help medical professionals who are close to burning out. A recent study from researchers at the University of New Mexico, outlined in EHR Intelligence, found that 13% of the stress and burnout self-reported by physicians was directly correlated with EHR use.
See article by Horng et al in this issue. William F. Auffermann, MD, PhD, is an associate professor of radiology and imaging sciences at the University of Utah School of Medicine. Dr Auffermann is a cardiothoracic radiologist and is ABPM board certified in clinical informatics. His research interests include imaging informatics, clinical informatics, applications of AI in radiology, medical image perception, and perceptual training. Recent research projects include image annotation for AI using eye tracking, human factors engineering, and developing simulation-based perceptual training methods to facilitate radiology education.
Many professionals in healthcare information technology might not yet be familiar with radiomics, but the emerging technology could have a big impact on the future of cancer care. Radiomics might be most easily explained as "imaging analytics." The technology uses AI and machine learning to extract high-dimensional data from standard medical images such as CT, MRI and PET scans, yielding more than 1,500 data points that deliver insights about the tumors or lesions in those images that cannot be obtained via traditional approaches. Healthcare IT News asked Rose Higgins, CEO of radiomics pioneer HealthMyne, for a primer on radiomics and to discuss its place in the realm of health IT. Q: Please explain what radiomics is, and how it works.
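To make the idea of "imaging analytics" concrete, here is a minimal, hypothetical sketch of the kind of first-order feature extraction radiomics pipelines begin with. The function, feature names, and synthetic scan below are illustrative assumptions, not HealthMyne's implementation; production radiomics tools extract hundreds of additional shape, texture, and filter-based features.

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Compute a handful of first-order radiomic features from the
    voxels inside a tumor/lesion mask. Real radiomics pipelines add
    shape, texture, and wavelet features on top of these."""
    voxels = image[mask.astype(bool)]
    # Intensity histogram entropy: a rough measure of heterogeneity.
    hist, _ = np.histogram(voxels, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "min": float(voxels.min()),
        "max": float(voxels.max()),
        "entropy": float(entropy),
    }

# Toy example: a synthetic 2D "scan" with a brighter lesion region.
rng = np.random.default_rng(0)
scan = rng.normal(100, 10, size=(64, 64))
lesion_mask = np.zeros((64, 64), dtype=bool)
lesion_mask[20:40, 20:40] = True
scan[lesion_mask] += 50

features = first_order_features(scan, lesion_mask)
print(features)
```

Each feature is one "data point" in the sense used above; scaling this idea up across many feature families is what produces the 1,500-plus measurements per lesion.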
Dr Venkat Reddy, consultant neurodevelopmental paediatrician, senior clinical adviser and AI lead at Future Perfect, discusses how AI-enabled analysis of healthcare data can both help clinicians and encourage patients to be more engaged in their own care. In general, and as a clinician myself, I believe clinicians largely lack trust in AI. Aside from the few clinicians with an interest in clinical informatics and digital health, views are still largely shaped by newspaper headlines about killer robots. Unfortunately, recent events have raised concern over the use of algorithms, not to mention the negative press about the use, or misuse, of AI by social media giants to gather information and 'snoop on people'.
Amazon Web Services (AWS) launched Amazon HealthLake--a new HIPAA-eligible platform that lets healthcare organizations seamlessly store, transform, and analyze data in the cloud. The platform standardizes unstructured clinical data (like clinical notes or imaging info) in a way that makes it easily accessible and unlocks meaningful insights--an otherwise complex and error-prone process. For example, Amazon HealthLake can match patients to clinical trials, analyze population health trends, improve clinical decision-making, and optimize hospital operations. Amazon already has links in different parts of the healthcare ecosystem--now that it's taking on healthcare AI, smaller players like Nuance and Notable Health should be worried. Amazon has inroads in everything from pharmacy to care delivery: Amazon Pharmacy was built upon its partnerships with payers like Blue Cross Blue Shield and Horizon Healthcare Services, Amazon Care was expanded to all Amazon employees in Washington state this September, and it launched its Amazon Halo wearable in August.
A UVA Health data science team is one of seven finalists in a national competition to improve healthcare with the help of artificial intelligence. UVA's proposal was selected as a finalist from among more than 300 applicants in the first-ever Centers for Medicare & Medicaid Services (CMS) Artificial Intelligence Health Outcomes Challenge. UVA's project predicts which patients are at risk for adverse outcomes and then suggests a personalized plan to ensure appropriate healthcare delivery and avoid unnecessary hospitalizations. CMS selected the seven finalists after reviewing the accuracy of their artificial intelligence models and evaluating how well healthcare providers could use visual displays created by each project team to improve outcomes and patient care. Each team of finalists received $60,000 and will compete for a grand prize of up to $1 million.
AI is undoubtedly changing the healthcare industry, making it more efficient and driving better outcomes for patients. COVID-19 has accelerated adoption, catapulting the industry forward and pushing it to take advantage of the best technology has to offer. Barriers to adoption persist, however, as many applications of AI in healthcare remain uncharted territory. The vast majority of the world's health systems are not using their data and AI to make helpful predictions that inform decision making, leaving a tremendous untapped opportunity for more insightful healthcare decisions. But the challenge is in finding common, replicable use cases. To start, healthcare providers are looking to understand how the disparate clinical data they gather can be better organised into an efficient pipeline that taps into accurate, predictive data intelligence.
Vital sign (VS) monitoring disruptions for hospitalized patients during overnight hours have been linked to cognitive impairment, hypertension, increased stress and even mortality. For the first time, a team at The Feinstein Institutes for Medical Research has developed a deep-learning predictive clinical tool to identify which patients do not need to be woken up overnight – allowing them to rest, recover and discharge faster. The study's results, based on 24.3 million vital sign measurements, were published today in npj Digital Medicine. A team led by Theodoros Zanos, PhD, in close collaboration with Jamie Hirsch, MD, collected and analyzed data from multiple Northwell Health hospitals between 2012 and 2019, comprising 2.13 million patient visits. They used this vast body of clinical data – respiratory rate, heart rate, systolic blood pressure, body temperature, patient age, etc. – to develop an algorithm that predicts a hospitalized patient's overnight stability and, with it, whether the patient can be left to sleep uninterrupted.
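As an illustration only, the decision such a tool makes can be sketched as a score over the same vital-sign features the study lists. The logistic model, weights, and threshold below are entirely made up for demonstration; the actual published model is a deep neural network trained on 24.3 million measurements.

```python
import numpy as np

FEATURES = ["resp_rate", "heart_rate", "systolic_bp", "temp_c", "age"]

def stability_score(vitals: dict, weights: np.ndarray, bias: float) -> float:
    """Return an illustrative P(stable overnight) via a logistic function."""
    x = np.array([vitals[f] for f in FEATURES], dtype=float)
    return float(1.0 / (1.0 + np.exp(-(weights @ x + bias))))

def can_skip_overnight_checks(vitals: dict, threshold: float = 0.9) -> bool:
    # Made-up weights: elevated respiratory rate, heart rate, temperature,
    # and age pull the score down; higher systolic pressure pulls it up.
    weights = np.array([-0.1, -0.05, 0.02, -0.3, -0.01])
    bias = 17.0
    # A high threshold keeps the decision conservative: only patients
    # predicted stable with high confidence are left to sleep.
    return stability_score(vitals, weights, bias) >= threshold
```

With roughly normal vitals (e.g. respiratory rate 16, heart rate 70, systolic 120 mmHg, 37.0 °C, age 60) this toy model clears the threshold, while markedly abnormal vitals do not; the real system learns such boundaries from data rather than hand-set weights.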
Mikhail has a background in different fields, including data science/machine learning, robotics, product development and marketing. For the last few years he has worked on data science projects as a developer and team lead. His interests cover topics in Data Science & Machine Learning project development processes, automated pipelines, reproducibility, and experiment and model management. These topics are applied to use cases in recommendation systems, computer vision and NLP applications.
The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. Detecting and mitigating biased models is a delicate task that should be tackled with care, with domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system. In this scenario, healthcare facility experts can use FairLens on their own historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to attributes such as age, ethnicity, gender and insurance; it then assesses the model's performance on these subgroups of patients, identifying those in need of expert evaluation. Finally, building on recent state-of-the-art XAI (eXplainable Artificial Intelligence) techniques, FairLens explains which elements of patients' clinical histories drive the model's error in the selected subgroup. FairLens thus allows experts to investigate whether to trust the model and to spotlight group-specific biases that might constitute potential fairness issues.
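A minimal sketch of the stratify-and-assess step described above might look like the following. The function name, flagging rule, and tolerance parameter are assumptions for illustration, not FairLens's actual interface; FairLens adds clinically informed stratification and XAI-based explanations on top of this basic idea.

```python
import numpy as np

def audit_subgroups(attributes, y_true, y_pred, tolerance=0.10):
    """Stratify predictions by a patient attribute and flag subgroups
    whose error rate exceeds the overall error rate by more than
    `tolerance`. Returns {group: error_rate} for flagged groups."""
    attributes = np.asarray(attributes)
    errors = np.asarray(y_true) != np.asarray(y_pred)
    overall = errors.mean()
    flagged = {}
    for group in np.unique(attributes):
        group_err = errors[attributes == group].mean()
        if group_err > overall + tolerance:
            flagged[str(group)] = float(group_err)
    return flagged

# Toy audit: a model that performs much worse on patients over 70.
ages = ["<40"] * 10 + ["40-70"] * 10 + [">70"] * 10
y_true = [1] * 30
y_pred = [1] * 10 + [1] * 9 + [0] + [1] * 4 + [0] * 6

print(audit_subgroups(ages, y_true, y_pred))  # → {'>70': 0.6}
```

In a FairLens-style workflow, the flagged subgroups would then be handed to XAI techniques to explain which elements of those patients' clinical histories drive the errors.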