More than ever, medicine now aims to tailor, adjust, and personalize healthcare to the specific characteristics and needs of individuals and populations--predictively, preventively, participatorily, and dynamically--while continuously learning and improving from data both "big" and "small." Today, these data are increasingly captured from sources both old (such as electronic medical records, EMRs) and new (including smartphones, sensors, and smart devices). By combining artificial intelligence (AI) with augmented human intelligence, these new analytical approaches enable "deep learning health systems" that reach far beyond the clinic, extending research, education, and even care into the built environment and people's homes. The volume of biomedical research is increasing rapidly; some of it is driven by the availability and analysis of big data--the focus of this collection.
From automated eye scans to analysing the cries of newborn babies, and from faster drug development to personalised medicine, artificial intelligence (AI) promises huge advances in healthcare. At the recent AI for Good Summit in Geneva, Switzerland, we were told how AI could speed up the development of new drugs, lead to personalised medicine informed by our genomes, and help diagnose diseases in countries suffering from underdeveloped health services and a chronic shortage of doctors. But two main obstacles block the path to this utopian destination. One is that the AI being applied to the world's health problems isn't quite good enough yet. The other, related, issue is the lack of good-quality digital data: the WHO estimates that less than 20% of the world's medical data is available in a form that machine learning algorithms can ingest and learn from.
SONAL SHAH: It's also about how do we make data more useful for people to use and to solve problems in their communities?
TANYA OTT: Okay, that is a big job. Who is this superhuman who fills it? We'll tell you, in a moment. But first, let me say, you're listening to the Press Room, where we talk about some of the biggest issues facing businesses today. I'm Tanya Ott, and joining me today are Bill Eggers …
SONAL SHAH: I am the executive director and a professor of practice at Georgetown University's Beeck Center.
TANYA OTT: Bill and Sonal are coauthors of The CDO Playbook – a guide for chief data officers. For the last decade, government has been focused on making data more open and easily [accessible] to the public.
The UK's National Health Service (NHS) has announced what it claims is a world first: a partnership with Amazon's Alexa to offer health advice from the NHS website. Britons who ask Alexa basic health questions like "Alexa, how do I treat a migraine?" will now hear answers drawn from NHS website content. The partnership does not add significantly to Alexa's skill-set, but it is an interesting step for the NHS. The UK's Department of Health (DoH) says it hopes the move will reduce pressure on health professionals in the country by giving people a new way to access reliable medical advice. It will also benefit individuals with disabilities, such as sight impairments, who may find it difficult to use computers or smartphones to find the same information.
[Figure caption: Artificial intelligence could one day be used to tailor education to the needs of each individual child.]
People produce more than 2.5 quintillion bytes of data each day. Businesses are harnessing these riches using artificial intelligence (AI) to add trillions of dollars in value to goods and services each year. Amazon dispatches items it anticipates customers will buy to regional hubs before they are purchased. Thanks to the vast extractive might of Google and Facebook, every bakery and bicycle shop is the beneficiary of personalized targeted advertising. But governments have been slow to apply AI to hone their policies and services.
In an experiment, the AUDREY system improved communication between paramedics in the field and emergency-room physicians at Kingston General Hospital. Hastings-Quinte Paramedic Chief Doug Socha tells Quinte News that in the experiment AUDREY was used for a simulated male patient complaining of chest pains, for analyzing video to assist in search-and-rescue operations, and for linking to CCTV cameras to help detect people in a disaster situation. Socha says the AUDREY system has also been shown to be quicker than a drone at finding a lost person. "Defence Research and Development Canada (DRDC) is excited to bring first responders together with technology like AUDREY to improve decision-making on the front line in our communities," said Gerry Doucette, from DRDC CSS. "Artificial intelligence and other advanced decision supports are positioned to improve patient outcomes for paramedic calls for service."
One of the biggest problems health plans face is dirty data, says Jordan Bazinsky, executive vice president and administrative officer at Atlanta-based Cotiviti. The healthcare solutions and analytics company reports having "several hundred" health insurance companies among its clientele, including 24 of the top 25 plans, he says. "Dirty data is one of the key problems that blocks health plans from finding insights from data," Bazinsky says. "You might be able to push data in real time, but if you can't trust the underlying kernel of data, all the other things can't be trusted." According to Bazinsky, Cotiviti uses data analytics to help payers achieve financial health through payment accuracy that is appropriate to the care delivered.
To take advantage of emerging software tools that incorporate artificial intelligence, healthcare organizations first need to overcome a variety of challenges. Some leading-edge organizations are beginning to do just that, focusing on machine learning, a subset of artificial intelligence (AI) encompassing statistical methods in which computer systems recognize patterns or correlations by ingesting large sets of training data. These systems improve their performance, or "learn," over time as they incorporate new data, revising their approach as needed without human programmers updating the rules. In the healthcare industry, most machine learning applications are still in the research stage. "There is not a ton of clinical use," according to Brian Edwards, an independent validation consultant for AI vendors.
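The idea of a system that revises its own rules as new data arrive can be sketched in a few lines. The toy perceptron below is a hypothetical illustration (not drawn from any system described in this collection): its weights are updated from each labelled example it sees, so the decision rule emerges from the data rather than from hand-written logic.

```python
# Minimal sketch of "learning without reprogramming": a perceptron
# that nudges its weights whenever a prediction is wrong.

def train_step(weights, bias, features, label, lr=0.1):
    """Update the model from one labelled example; returns (weights, bias)."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if activation > 0 else 0
    error = label - prediction  # zero when correct, so no update happens
    weights = [w + lr * error * x for w, x in zip(weights, features)]
    bias = bias + lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy training data: flag (1) only when both readings are elevated.
data = [([1, 1], 1), ([0, 1], 0), ([1, 0], 0), ([0, 0], 0)] * 20
w, b = [0.0, 0.0], 0.0
for x, y in data:  # the rule is never hand-coded; it emerges from examples
    w, b = train_step(w, b, x, y)

print(predict(w, b, [1, 1]))  # both readings elevated -> prints 1
```

Feeding the same loop additional examples later would keep refining `w` and `b`, which is the incremental, rules-free updating the paragraph above describes.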
Augmented intelligence (AI) and related branches--such as machine learning and natural language processing--offer much promise for health care, but how can physicians and other health professionals distinguish clinically safe and useful innovations from hot air? That question is at the heart of a recent JAMA Pediatrics editorial on machine learning, a branch of AI, which outlines some rules of thumb to help doctors tell the difference between hype and reliable research on machine learning in medicine. New health care AI policy adopted at the 2019 AMA Annual Meeting provides that AI should advance the quadruple aim--meaning that it "should enhance the patient experience of care and outcomes, improve population health, reduce overall costs for the health care system while increasing value, and support the professional satisfaction of physicians and the health care team." The AMA House of Delegates also adopted policy on the use of AI in medical education and physician training. This built on the foundation of the AMA's initial AI policies, adopted the previous year, which emphasized that the perspective of physicians needs to be heard as the technology continues to develop.