Lord Drayson seems a man on the move. As an amateur racing driver, it is perhaps an innate characteristic, and in the current debate around health data his foot is very much on the gas. I meet the former Labour government science and defence minister shortly after he lays out a bold new vision for the NHS at FutureScot's recent Digital Health & Care conference in Glasgow. Urbane and well-connected, Drayson is also a keen student of policy and of how the arguments around big tech and health data are shaping up. For background, there is an intensifying argument that the NHS needs to make much more use of a still largely untapped goldmine of data, which could herald a new dawn in the way we diagnose, treat and manage disease – not to mention save billions of pounds annually.
We think of AI as an arbiter of neutrality, but when fed biased data it churns out biased results. At the beginning of 2017, Amazon's machine learning division shuttered an artificial intelligence (AI) project it had been working on for the previous three years. The team had been building computer programmes designed to review job applicants' resumes, giving them star ratings from one to five – not unlike the way shoppers can rate products purchased from Amazon online. However, within a year of the project beginning, the company realised its system was biased against female applicants. The software was trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period, the majority of which – due to the male dominance of the tech industry – came from men.
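The failure mode described above can be illustrated with a toy scoring model. This is a hypothetical sketch, not Amazon's actual system: a scorer that rates a resume by the historical hire rate of its tokens will penalise any token that, in a skewed training pool, happens to correlate with the under-represented group, even though gender itself never appears as a feature.

```python
from collections import defaultdict

# Hypothetical training data: (resume tokens, hired?) pairs.
# Men dominate the historical pool, so tokens appearing mainly in the
# minority group's resumes accumulate little positive signal.
history = [
    ({"python", "aws", "chess"}, 1),
    ({"java", "aws"}, 1),
    ({"python", "chess"}, 1),
    ({"java", "linux"}, 1),
    ({"python", "womens_club"}, 0),  # few such resumes, fewer hires
    ({"aws", "womens_club"}, 0),
]

hired = defaultdict(int)
seen = defaultdict(int)
for tokens, label in history:
    for t in tokens:
        seen[t] += 1
        hired[t] += label

def score(tokens):
    """Average historical hire rate of a resume's known tokens."""
    rates = [hired[t] / seen[t] for t in tokens if t in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two otherwise-identical resumes diverge on one gender-correlated token.
print(score({"python", "aws"}))                 # ≈ 0.667
print(score({"python", "aws", "womens_club"}))  # ≈ 0.444 – penalised
```

The model never sees the applicant's gender; the bias rides in entirely on correlated vocabulary, which is why simply deleting the gender field does not fix such systems.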
Do you trust Amazon with your life? You might have to, because the big tech companies of Silicon Valley are looking to do for medicine what they've already done for retail, publishing, finance and other sectors of modern life: they want to bring on another digital revolution. Ever since the federal government began encouraging health care providers to adopt electronic health records a decade ago, Apple, Google and a slew of Silicon Valley startups have sought to bring about their own vision of telemedicine--turbocharged by data from wearable health-monitoring devices, artificial intelligence and smartphone apps. Apple's bio-monitoring watches and Fitbit, the wearable exercise monitor recently bought by Google, are two prominent examples of products on the market now. Other companies are readying artificial-intelligence products that could augment or replace advice from medical professionals.
The next time you get sick, your care may involve a form of the technology people use to navigate road trips or pick the right vacuum cleaner online. Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients. It already detects an eye disease tied to diabetes and does other behind-the-scenes work like helping doctors interpret MRI scans and other imaging tests for some forms of cancer. Now, parts of the health system are starting to use it directly with patients. During some clinic and telemedicine appointments, AI-powered software asks patients initial questions about their symptoms that physicians or nurses normally pose.
A fundamental aspect of today's artificial intelligence (AI) applications is the strategic leverage they give users. While all business sectors will benefit from AI, the healthcare industry will see widespread adoption as administrators and CEOs realize its potential. This is an emerging technology, and, as such, healthcare businesses that begin using AI will gain a competitive advantage. The following five key steps will help define how to leverage AI in healthcare. First, users must understand what AI is and what it does.
What's in the news: Google and the 2,600-hospital Ascension health system are collaborating on an effort--dubbed Project Nightingale--that puts identifiable patient data in the hands of the tech giant's engineers for use in projects on machine learning (ML) and augmented intelligence (AI), often called artificial intelligence. The AMA is spearheading initiatives that put physicians at the center of digital health innovation. See how you can get involved. Google and Ascension say the activities, first reported by Rob Copeland of The Wall Street Journal, are covered by a business associate agreement, a long-standing and legal way for health care providers to share identifiable data with third parties under the Health Insurance Portability and Accountability Act (HIPAA). The third parties may only use the data for certain purposes and must protect it as HIPAA requires.
Thanks to approvals from the Food and Drug Administration (FDA) for applications such as primary disease diagnosis, digital pathology is rapidly becoming the new standard of care. However, this advancement creates challenges that artificial intelligence could help solve. Digital pathology uses a specialized scanner to capture pathology information, such as whole slide images (WSIs), so that it can be worked with digitally. Acquiring, studying and managing data in this way allows it to be shared between parties on a computer or mobile device. According to experts, the global digital pathology market was worth $689.2 million in 2018.
Recent scrutiny of artificial intelligence (AI)–based facial recognition software has renewed concerns about the unintended effects of AI on social bias and inequity. Academic and government officials have raised concerns over racial and gender bias in several AI-based technologies, including internet search engines and algorithms to predict risk of criminal behavior. Companies like IBM and Microsoft have made public commitments to "de-bias" their technologies, whereas Amazon mounted a public campaign criticizing such research. As AI applications gain traction in medicine, clinicians and health system leaders have raised similar concerns over automating and propagating existing biases [1]. But is AI the problem?
Royal Cornwall Hospital has deployed an artificial intelligence (AI) tool that allows clinicians to view case videos safely and securely. Touch Surgery Enterprise enables automatic processing and viewing of surgical videos for clinicians and their teams without compromising sensitive patient data. These videos can be accessed via mobile app or web shortly after the operation to encourage self-reflection and peer review and to improve preoperative preparation. In a usual hospital setting, surgical videos are often left unused due to the vast amounts of sensitive patient data they contain, according to Digital Surgery, the company behind Touch Surgery Enterprise. But the company's hardware, the DS1 computer, plugs into existing recording equipment in a theatre and runs real-time AI algorithms to redact all frames containing potentially identifiable patient and clinician information.
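The redaction step can be sketched in miniature. This is a hypothetical illustration only; Digital Surgery's actual detection models and the DS1's internals are proprietary and not described in the source. The pattern is simply: run a detector over each frame and blank out whatever it flags before the video is stored or shared.

```python
# Frames are modelled as 2D lists of pixel intensities to keep the
# sketch dependency-free; a real pipeline would operate on video frames.

def detect_identifiable_regions(frame):
    """Stand-in for an AI detector: flag bright pixels as 'identifiable'
    (e.g. an on-screen patient-information banner)."""
    return [(r, c) for r, row in enumerate(frame)
                   for c, px in enumerate(row) if px > 200]

def redact(frame, regions, fill=0):
    """Blank out every flagged pixel, leaving the rest untouched."""
    out = [row[:] for row in frame]
    for r, c in regions:
        out[r][c] = fill
    return out

def process_stream(frames):
    # Redaction runs frame by frame, before anything is stored or
    # uploaded, so raw identifiable footage never leaves the theatre.
    return [redact(f, detect_identifiable_regions(f)) for f in frames]

frame = [[10, 250], [30, 40]]
print(process_stream([frame]))  # → [[[10, 0], [30, 40]]]
```

The key design point mirrored here is that detection and redaction happen in-line with capture, rather than as a later clean-up pass over an archive of unredacted video.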