An ounce of prevention is worth a pound of cure, as the old saying goes. Until recently, that simply meant living a healthy lifestyle, getting regular checkups, and hoping that signs of anything serious were caught early. But today, doctors are using artificial intelligence (AI) and machine learning systems to make preventative care, diagnosis, and treatment more accurate and effective than ever. "Machine learning involves adaptive learning and as such, can identify patterns over time as new data is aggregated and analyzed," explains Melissa Manice, co-founder of healthcare startup Cohero Health. "Therefore, machine learning and AI allows doctors to detect abnormal behaviors and predictive insights with the application of clinical thresholds to machine learning algorithms," she continues.
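Manice's point about adaptive baselines and clinical thresholds can be illustrated with a minimal sketch. The data, the z-score cutoff, and the function name below are all hypothetical (Cohero Health's actual algorithms are not described in the article): each patient's own reading history forms an adaptive baseline, and a new reading is flagged when it drifts past a threshold.

```python
from statistics import mean, stdev

def flag_abnormal(readings, new_value, z_cutoff=2.0):
    """Flag a new reading that deviates from the patient's own
    adaptive baseline (mean +/- z_cutoff standard deviations).
    Toy illustration of threshold-based anomaly flagging."""
    if len(readings) < 3:
        return False  # not enough history to form a baseline yet
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_cutoff

# A stable history of (hypothetical) respiratory readings:
history = [410, 405, 415, 400, 412, 408]
print(flag_abnormal(history, 409))  # False: within the baseline
print(flag_abnormal(history, 310))  # True: sharp drop, flag for review
```

As new readings are appended to `history`, the baseline itself shifts, which is the "adaptive learning" aspect Manice describes.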
Robert Wachter, a former member of Google's healthcare advisory board, remembers when the company first set its sights on the healthcare industry more than a decade ago. "They said: We're Google, we'll solve it," says Wachter, chair of the Department of Medicine at the University of California, San Francisco. At the time, Google was trying to create individual accounts where users could store their electronic medical records. So when then-chief executive Eric Schmidt later abandoned the effort with an admission that Google had underestimated the challenge, it came as a shock. "They conquer industry after industry, it doesn't seem like this would be very different," Wachter says.
Researchers from Monash University have undertaken a study using artificial intelligence (AI) to understand the reasons behind hospital readmissions. According to the university, the study used AI to examine 10 years' worth of patient records -- 14,000 patient medical records and details of over 327,000 hospital readmissions. Project lead Wray Buntine, a professor of data science at Monash University's Faculty of Information Technology, said the research was motivated by the need to lower hospital costs and improve the quality of care at hospitals. "This study utilised a rich source of clinical patient data to infer medical risk predictions and improve the quality of patient healthcare," he said.
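The article does not detail the Monash team's methods, but "inferring medical risk predictions" from patient records is commonly framed as a classification problem. A minimal sketch, assuming nothing about the actual study: a tiny hand-rolled logistic regression trained on made-up patient features (normalized age and prior-admission count) to score readmission risk.

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Tiny logistic-regression trainer (gradient descent on log-loss).
    Features and labels here are entirely synthetic."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def risk(w, b, x):
    """Predicted readmission probability for one patient."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy records: [age / 100, prior admissions / 10]; label = readmitted.
X = [[0.80, 0.5], [0.75, 0.4], [0.30, 0.0],
     [0.25, 0.1], [0.85, 0.6], [0.35, 0.0]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logreg(X, y)
print(risk(w, b, [0.82, 0.5]) > risk(w, b, [0.28, 0.0]))  # True
```

A real study over 327,000 readmissions would use far richer features and more capable models, but the underlying framing -- features in, risk score out -- is the same.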
The AI in healthcare market is projected to expand from its current $2.1 billion to $36.1 billion by 2025, representing a staggering compound annual growth rate (CAGR) of 50.2 percent. That's according to new research from ReportLinker, which notes that the rapid increase in value will be driven largely by North American investment, with the United States at the forefront of innovation and spending. Hospitals and physician providers will be the major investors in machine learning and artificial intelligence solutions and services, the report predicts. "A few major factors responsible for the high share of the hospitals and providers segment include a large number of applications of AI solutions across provider settings; ability of AI systems to improve care delivery, patient experience, and bring down costs; and growing adoption of electronic health records by healthcare organizations," noted the summary of the report. "Moreover, AI-based tools, such as voice recognition software and clinical decision support systems, help streamline workflow processes in hospitals, lower cost, improve care delivery, and enhance patient experience."
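The quoted figures can be sanity-checked with CAGR arithmetic: solving for the number of years it takes $2.1 billion to reach $36.1 billion at 50.2 percent annual growth gives the implied forecast window.

```python
import math

# years = log(end / start) / log(1 + CAGR)
years = math.log(36.1 / 2.1) / math.log(1 + 0.502)
print(round(years))  # 7 -- a roughly seven-year forecast window to 2025
```

So the three numbers in the report are internally consistent with a forecast window of about seven years.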
As more physicians are taking their practices online, software companies have also had to adjust their services. One example: Saykara, a startup developing an AI voice assistant to automatically fill health records, had to shift its platform to Zoom. In early March, Saykara celebrated a milestone when its AI voice assistant was able to operate autonomously, meaning for some specialties, it could automatically update patient records and notes without any clicks or voice commands. But a few weeks later, the Seattle-based startup had to quickly adjust to a new world where most appointments are being conducted online. "Things were growing every day until we had the hiccup of Covid thrown in there," said Dr. Graham Hughes, president and COO of Saykara.
Machine learning experts at Google Health, working with the University of California, San Francisco's (UCSF) computational health sciences department, have published a new study describing a machine learning model that can anticipate normal physician drug-prescribing patterns, using a patient's electronic health records (EHR) as input. That's useful because around 2% of hospitalized patients are affected by preventable medication errors, some of which can even lead to death. The researchers liken the system to the automated, machine learning-based fraud detection tools commonly used by credit card companies: those build a baseline of normal consumer behavior from past transactions, then alert the bank's fraud department or freeze access when they detect activity out of line with an individual's baseline. Similarly, the model trained by Google and UCSF works by identifying any prescriptions that "looked abnormal for the patient and their current situation." That's a much harder problem for prescription drugs than for consumer activity, because courses of medication, their interactions with one another, and the specific needs, sensitivities, and conditions of any given patient all present an incredibly complex web to untangle.
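The fraud-detection analogy can be made concrete with a toy baseline model. To be clear about what is assumed: the class name, the frequency threshold, and the drug/diagnosis data below are all invented for illustration; the actual Google/UCSF system is a neural model over full EHR histories, not a frequency table.

```python
from collections import Counter, defaultdict

class PrescriptionBaseline:
    """Toy analogue of fraud-style anomaly detection: learn how often
    each drug has been prescribed for a diagnosis, then flag
    prescriptions whose historical frequency falls below a threshold."""

    def __init__(self, min_freq=0.05):
        self.min_freq = min_freq
        self.counts = defaultdict(Counter)

    def fit(self, history):
        # history: iterable of (diagnosis, drug) pairs
        for diagnosis, drug in history:
            self.counts[diagnosis][drug] += 1

    def is_abnormal(self, diagnosis, drug):
        seen = self.counts[diagnosis]
        total = sum(seen.values())
        if total == 0:
            return True  # no baseline for this diagnosis at all
        return seen[drug] / total < self.min_freq

history = ([("hypertension", "lisinopril")] * 40
           + [("hypertension", "amlodipine")] * 10
           + [("type2_diabetes", "metformin")] * 30)
model = PrescriptionBaseline()
model.fit(history)
print(model.is_abnormal("hypertension", "lisinopril"))  # False: common
print(model.is_abnormal("hypertension", "warfarin"))    # True: never seen
```

The hard part the article describes -- drug interactions and patient-specific context -- is exactly what this frequency table cannot capture, which is why the real system conditions on the whole record.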
The vast collection of data used to train this new system -- the records of nearly 600,000 Chinese patients who visited a pediatric hospital over an 18-month period -- highlights an advantage for China in the worldwide race toward artificial intelligence. Because its population is so large -- and because its privacy norms put fewer restrictions on the sharing of digital data -- it may be easier for Chinese companies and researchers to build and train the "deep learning" systems that are rapidly changing the trajectory of health care. On Monday, President Trump signed an executive order meant to spur the development of A.I. across government, academia and industry in the United States. As part of this "American A.I. Initiative," the administration will encourage federal agencies and universities to share data that can drive the development of automated systems. Pooling health care data is a particularly difficult endeavor.
Healthcare databases are growing exponentially, and text analytics and natural language processing (NLP) systems turn this data into value. Healthcare providers, pharmaceutical companies and biotechnology firms all use text analytics and NLP to improve patient outcomes, streamline operations, and manage regulatory compliance. This article explores some new and emerging applications of text analytics and NLP in healthcare; each demonstrates how healthcare providers and others use NLP to mine unstructured text-based healthcare data and then do something with the results. Patient health records, order entries, and physician notes aren't the only sources of data in healthcare.
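Mining unstructured clinical text can be sketched at its simplest level. Production clinical NLP uses trained named-entity-recognition models, but the core idea -- pulling structured facts out of free-text notes -- shows up even in a toy extractor; the medication lexicon, the note, and the dosage pattern below are all invented for illustration.

```python
import re

# Hypothetical medication lexicon and a simple dosage pattern.
MEDICATIONS = {"metformin", "lisinopril", "atorvastatin"}
DOSE = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|mcg|g)\b", re.IGNORECASE)

def extract(note):
    """Return (medication mentions, dosage matches) from a free-text note."""
    meds = [token for token in re.findall(r"[a-z]+", note.lower())
            if token in MEDICATIONS]
    doses = DOSE.findall(note)
    return meds, doses

note = "Pt continues Metformin 500 mg BID; started Lisinopril 10mg daily."
meds, doses = extract(note)
print(meds)   # ['metformin', 'lisinopril']
print(doses)  # [('500', 'mg'), ('10', 'mg')]
```

Real systems must additionally handle negation ("denies taking metformin"), abbreviations, and misspellings, which is where learned models earn their keep over lexicons.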
Electronic Health Records (EHRs) provide vital contextual information to radiologists and other physicians when making a diagnosis. Unfortunately, because a given patient's record may contain hundreds of notes and reports, identifying relevant information within them in the short time typically allotted to a case is very difficult. We propose and evaluate models that extract relevant text snippets from patient records to provide a rough case summary intended to aid physicians considering one or more diagnoses. This is hard because direct supervision (i.e., physician annotations of snippets relevant to specific diagnoses in medical records) is prohibitively expensive to collect at scale. We propose a distantly supervised strategy in which we use groups of International Classification of Diseases (ICD) codes observed in 'future' records as noisy proxies for 'downstream' diagnoses. Using this, we train a transformer-based neural model to perform extractive summarization conditioned on potential diagnoses. The model defines an attention mechanism conditioned on potential diagnoses (queries) provided by the diagnosing physician. We train (via distant supervision) and evaluate variants of this model on EHR data from a local hospital and on MIMIC-III (the latter to facilitate reproducibility). Evaluations performed by radiologists demonstrate that these distantly supervised models yield better extractive summaries than do unsupervised approaches. Such models may aid diagnosis by identifying sentences in past patient reports that are clinically relevant to a potential diagnosis.
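The query-conditioned extraction idea in the abstract can be sketched without any learned components. This is a deliberately crude stand-in: where the paper uses a transformer with a learned attention mechanism (trained via distant supervision from ICD codes), the sketch below scores sentences against a diagnosis query with bag-of-words overlap and softmax-normalizes the scores into attention-style weights. Everything here -- the scoring function, the example record -- is illustrative, not the paper's method.

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words counts over lowercase alphabetic tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def score(query, sentence):
    # Dot product of bag-of-words vectors: a crude stand-in for
    # learned, diagnosis-conditioned attention scoring.
    q, s = bow(query), bow(sentence)
    return sum(q[w] * s[w] for w in q)

def attend(query, sentences):
    """Softmax-normalize scores into attention-style weights and
    return (weight, sentence) pairs ranked by weight."""
    exps = [math.exp(score(query, s)) for s in sentences]
    total = sum(exps)
    return sorted(((e / total, s) for e, s in zip(exps, sentences)),
                  reverse=True)

record = [
    "Patient reports chronic cough and shortness of breath.",
    "Family history of colon cancer noted.",
    "Chest imaging shows mild interstitial changes.",
]
ranked = attend("shortness of breath cough", record)
print(ranked[0][1])  # the respiratory-symptom sentence ranks first
```

The top-weighted sentences would form the extractive summary shown to the physician; the paper's contribution is learning this scoring from noisy ICD-code supervision rather than hand-coding it.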