Clinical Informatics


How to build trust with Trusts on artificial intelligence

#artificialintelligence

Dr Venkat Reddy, consultant neurodevelopmental paediatrician, senior clinical adviser and AI lead at Future Perfect, discusses how AI-enabled analysis of healthcare data can both help clinicians and encourage patients to be more engaged in their own care. In general, and as a clinician myself, I believe there is a lack of trust among clinicians in the use of AI. Aside from the few clinicians with an interest in clinical informatics and digital health, views are still largely shaped by newspaper headlines about killer robots. Unfortunately, recent events have added to the concern over the use of algorithms, not to mention the negative press about the use, or misuse, of AI by social media giants to gather information and 'snoop on people'.


Amazon is cozying up in all corners of the healthcare ecosystem--AI is its next frontier

#artificialintelligence

Amazon Web Services (AWS) launched Amazon HealthLake--a new HIPAA-eligible platform that lets healthcare organizations seamlessly store, transform, and analyze health data in the cloud. The platform standardizes unstructured clinical data (like clinical notes or imaging information) in a way that makes it easily accessible and unlocks meaningful insights--an otherwise complex and error-prone process. For example, Amazon HealthLake can match patients to clinical trials, analyze population health trends, improve clinical decision-making, and optimize hospital operations. Amazon already has links in different parts of the healthcare ecosystem--now that it's taking on healthcare AI, smaller players like Nuance and Notable Health should be worried. Amazon has inroads in everything from pharmacy to care delivery: Amazon Pharmacy was built upon its partnerships with payers like Blue Cross Blue Shield and Horizon Healthcare Services, Amazon Care was expanded to all Amazon employees in Washington state this September, and it launched its Amazon Halo wearable in August.
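For a concrete sense of the workflow the article describes, the sketch below uses the AWS boto3 SDK to create a HealthLake FHIR R4 data store and check its status. This is an illustrative sketch rather than a full ingestion pipeline; the data store name and region are hypothetical placeholders.

```python
import boto3

# Minimal HealthLake sketch (assumes AWS credentials with HealthLake
# permissions are already configured; the name and region are placeholders).
client = boto3.client("healthlake", region_name="us-east-1")

# Create a FHIR R4 data store to hold standardized clinical records.
response = client.create_fhir_datastore(
    DatastoreName="example-clinical-datastore",
    DatastoreTypeVersion="R4",
)
datastore_id = response["DatastoreId"]

# Check provisioning status; once ACTIVE, FHIR resources (e.g. exported
# clinical notes) can be bulk-loaded from S3 with start_fhir_import_job
# and queried through the data store's FHIR REST endpoint.
status = client.describe_fhir_datastore(DatastoreId=datastore_id)
print(status["DatastoreProperties"]["DatastoreStatus"])
```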


UVA Artificial Intelligence Project Among 7 Finalists for $1 Million Prize

#artificialintelligence

A UVA Health data science team is one of seven finalists in a national competition to improve healthcare with the help of artificial intelligence. UVA's proposal was selected as a finalist from among more than 300 applicants in the first-ever Centers for Medicare & Medicaid Services (CMS) Artificial Intelligence Health Outcomes Challenge. UVA's project predicts which patients are at risk for adverse outcomes and then suggests a personalized plan to ensure appropriate healthcare delivery and avoid unnecessary hospitalizations. CMS selected the seven finalists after reviewing the accuracy of their artificial intelligence models and evaluating how well healthcare providers could use visual displays created by each project team to improve outcomes and patient care. Each team of finalists received $60,000 and will compete for a grand prize of up to $1 million.


AI in healthcare: navigating uncharted territory

#artificialintelligence

AI is undoubtedly changing the healthcare industry, making it more efficient and driving better outcomes for patients. COVID-19 has accelerated adoption, acting as a catalyst that has pushed the industry to take advantage of the best technology has to offer. Barriers to adoption persist, however, as many applications of AI in healthcare remain uncharted territory. The vast majority of the world's health systems are not yet using their data and AI to make predictions that inform decision-making, which leaves a tremendous opportunity for more insightful, data-driven healthcare decisions. But the challenge lies in finding common, replicable use cases. To start, healthcare providers are looking to understand how the disparate clinical data they gather can be organised into an efficient pipeline that yields accurate, predictive intelligence.


First clinical AI tool to let patients sleep/recover developed

#artificialintelligence

Vital sign (VS) monitoring disruptions for hospitalized patients during overnight hours have been linked to cognitive impairment, hypertension, increased stress and even mortality. For the first time, a team at The Feinstein Institutes for Medical Research has developed a deep-learning predictive clinical tool to identify which patients do not need to be woken up overnight, allowing them to rest, recover and be discharged faster. The study's results, based on 24.3 million vital sign measurements, were published today in npj Digital Medicine. A team led by Theodoros Zanos, PhD, in close collaboration with Jamie Hirsch, MD, collected and analyzed data from multiple Northwell Health hospitals between 2012 and 2019, comprising 2.13 million patient visits. They used this vast body of clinical data from the patient visits (respiratory rate, heart rate, systolic blood pressure, body temperature, patient age, etc.) to develop an algorithm that predicts a hospitalized patient's overnight stability and whether they can be left to sleep uninterrupted.
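The published model is a deep-learning system trained on tens of millions of real measurements. Purely as an illustration of the idea (not the authors' architecture or data), the sketch below trains a small neural network on synthetic versions of the same feature types to output a probability that a patient will remain stable overnight.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: one row per patient-night with the feature types
# mentioned in the article (respiratory rate, heart rate, systolic BP,
# body temperature, age). Labels mark whether the night was stable (1) or not (0).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(16, 3, n),      # respiratory rate (breaths/min)
    rng.normal(80, 12, n),     # heart rate (beats/min)
    rng.normal(120, 15, n),    # systolic blood pressure (mmHg)
    rng.normal(36.8, 0.4, n),  # body temperature (deg C)
    rng.integers(18, 95, n),   # age (years)
])
# Toy labelling rule purely for demonstration purposes.
y = ((np.abs(X[:, 0] - 16) < 4) & (np.abs(X[:, 1] - 80) < 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network; the published model is considerably more sophisticated.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)

# Predicted probability that each held-out patient can safely be left to sleep.
stability_prob = model.predict_proba(X_test)[:, 1]
print("Mean predicted overnight-stability probability:", round(stability_prob.mean(), 3))
```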


Healthdata Bootcamp – Workshops and practical lectures from Startups that work with health data

#artificialintelligence

Mikhail has a background in several fields, including data science/machine learning, robotics, product development and marketing. For the last few years he has worked on data science projects as a developer and team lead. His interests cover Data Science & Machine Learning project development processes, automated pipelines, reproducibility, experiments and model management, applied to use cases in recommendation systems, computer vision and NLP.


FairLens: Auditing Black-box Clinical Decision Support Systems

arXiv.org Artificial Intelligence

The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. Detecting and mitigating biased models is a delicate task that should be tackled with care and with domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system. In this scenario, healthcare facility experts can use FairLens on their own historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to attributes such as age, ethnicity, gender and insurance; it then assesses the model's performance on these subgroups of patients, identifying those in need of expert evaluation. Finally, building on recent state-of-the-art XAI (eXplainable Artificial Intelligence) techniques, FairLens explains which elements in patients' clinical histories drive the model's errors in the selected subgroup. FairLens therefore allows experts to investigate whether to trust the model and to spotlight group-specific biases that might constitute potential fairness issues.
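The audit loop the abstract describes (stratify patients by attributes such as age, ethnicity, gender and insurance, score the black-box model on each subgroup, and flag the subgroups that most need expert review) can be illustrated with a short pandas sketch. This is not the authors' implementation; the column names, the `predict_fn` interface and the error metric are assumptions for the sake of the example.

```python
import pandas as pd

def audit_subgroups(df, predict_fn, attributes, label_col, min_size=50):
    """Stratify patients by the given attributes, score a black-box model on
    each subgroup, and return subgroups ranked by error rate.

    Simplified illustration of the workflow described in the FairLens
    abstract, not the authors' code; predict_fn, column names and the
    error metric are assumptions."""
    df = df.copy()
    df["prediction"] = predict_fn(df.drop(columns=[label_col]))
    df["error"] = (df["prediction"] != df[label_col]).astype(float)

    report = (
        df.groupby(attributes)["error"]
          .agg(error_rate="mean", n="size")
          .reset_index()
    )
    # Ignore tiny subgroups, then rank the rest by error rate so the groups
    # most in need of expert review appear first.
    report = report[report["n"] >= min_size]
    return report.sort_values("error_rate", ascending=False)

# Example call (columns and model are hypothetical):
# report = audit_subgroups(patients, model.predict,
#                          ["age_band", "ethnicity", "gender", "insurance"],
#                          label_col="outcome")
```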


Health care needs ethics-based governance of artificial intelligence - STAT

#artificialintelligence

Artificial intelligence has the potential to transform health care. It can enable health care professionals to analyze health data quickly and precisely, and lead to better detection, treatment, and prevention of a multitude of physical and mental health issues. Artificial intelligence integrated with virtual care interventions -- telemedicine and digital health -- is playing a vital role in responding to Covid-19. Penn Medicine, for example, has designed a Covid-19 chatbot to stratify patients and facilitate triage. Penn is also using machine learning to identify patients at risk for sepsis.


Microsoft Cloud for Healthcare: Unlocking the power of health data for better care

#artificialintelligence

Healthcare providers around the world have faced unprecedented workloads, individually and institutionally, and the pandemic response continues to cause seismic shifts in how, where, and when care is provided. Longer term, it has revealed the need for fundamental shifts across the care continuum. As a physician, I have seen first-hand the challenges of not having the right data, at the right time, in the right format to make informed, shared decisions with my patients. These challenges amplify the urgency for trusted partners and solutions to help solve emergent health challenges. Today we're taking a big step forward to address these challenges with the general availability of Microsoft Cloud for Healthcare.


AIMed UK 2020: Considerations to have in deploying healthcare AI at scale

#artificialintelligence

The AIMed UK 2020 virtual summit took place recently. In the opening keynote session on the deployment of artificial intelligence (AI) in the UK and across the world, Professor Neil Sebire, Chief Research Information Officer at the Great Ormond Street Hospital for Children National Health Service (NHS) Foundation Trust, talked about some of the considerations healthcare organizations need to take into account as they plan to deploy AI tools at scale. Professor Sebire said healthcare organizations ought to think about what infrastructure is required when it comes to dealing with healthcare data. Talks often focus on electronic health records (EHRs), but these data warehouses do not, by themselves, facilitate the use of the data they hold. What the healthcare system needs is a platform that not only stores all the data but also supports algorithm development, planning for the deployment and scaling of AI, and everything else.