
AutoDiscern: Rating the Quality of Online Health Information with Hierarchical Encoder Attention-based Neural Networks

Patients increasingly turn to search engines and online content before, or in place of, talking with a health professional. Low-quality health information, which is common on the internet, presents risks to patients in the form of misinformation and a potentially poorer relationship with their physician. To address this, the DISCERN criteria (developed at the University of Oxford) are used to evaluate the quality of online health information. However, patients are unlikely to take the time to apply these criteria to the health websites they visit. We built an automated implementation of the DISCERN instrument (Brief version) using machine learning models. We compared a traditional model (Random Forest) with a hierarchical encoder attention-based neural network (HEA) model using two language embeddings, based on BERT and BioBERT. The HEA BERT and BioBERT models achieved F1-macro scores averaging 0.75 and 0.74, respectively, across all criteria, outperforming the Random Forest model (F1-macro = 0.69). Similarly, HEA BERT and BioBERT scored on average 0.80 and 0.81 (F1-micro), vs. 0.76 for the Random Forest model. Overall, the neural network-based models achieved 81% and 86% average accuracy at 100% and 80% coverage, respectively, compared with 94% manual rating accuracy. The attention mechanism implemented in the HEA architectures provided 'model explainability' by identifying reasonable supporting sentences for the documents fulfilling the Brief DISCERN criteria. Our research suggests that it is feasible to automate online health information quality assessment, an important step towards empowering patients to become informed partners in the healthcare process.
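The paper's actual architecture is not reproduced here, but the document-level attention pooling the abstract describes, scoring each sentence and using the weights both to form a document vector and to surface "supporting sentences", can be sketched as follows. This is a minimal NumPy illustration: the `attention_pool` helper, the weight shapes, and the random inputs are all invented for the example, and in the real system the sentence vectors would come from BERT or BioBERT encoders.

```python
import numpy as np

def attention_pool(sentence_embs, w, b, u):
    """Score each sentence, softmax-normalise the scores, and return the
    attention-weighted document vector plus the per-sentence weights."""
    h = np.tanh(sentence_embs @ w + b)       # (n_sents, hidden) projection
    scores = h @ u                           # (n_sents,) raw attention scores
    scores = scores - scores.max()           # shift for numerical stability
    weights = np.exp(scores)
    alpha = weights / weights.sum()          # softmax attention weights
    doc_vec = alpha @ sentence_embs          # (emb_dim,) document vector
    return doc_vec, alpha

# Hypothetical inputs: 5 sentences with 8-dim embeddings.
rng = np.random.default_rng(0)
sents = rng.normal(size=(5, 8))
w = rng.normal(size=(8, 4))
b = np.zeros(4)
u = rng.normal(size=4)

doc_vec, alpha = attention_pool(sents, w, b, u)
top_sentence = int(np.argmax(alpha))  # the most "supporting" sentence
```

The attention weights `alpha` are what makes the model explainable: the highest-weighted sentences are the ones the classifier leaned on for a given DISCERN criterion.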

Reinforcement Learning in Healthcare: A Survey

As a subfield of machine learning, reinforcement learning (RL) aims to improve behavioural decision making by learning from interaction with the world and from evaluative feedback. Unlike traditional supervised learning methods, which usually rely on one-shot, exhaustive and supervised reward signals, RL tackles sequential decision-making problems with sampled, evaluative and delayed feedback. These distinctive features make RL a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnostic decisions or treatment regimes are usually characterized by a prolonged and sequential procedure. This survey discusses the broad applications of RL techniques in healthcare, in order to provide the research community with a systematic understanding of the theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. After briefly examining the theoretical foundations and key techniques of RL research from the efficiency and representation perspectives, we provide an overview of RL applications across healthcare domains, ranging from dynamic treatment regimes in chronic disease and critical care, to automated medical diagnosis from both unstructured and structured clinical data, to the many other control and scheduling problems that permeate a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out potential solutions and directions for future work.
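None of the survey's methods are reproduced here, but the "sampled, evaluative and delayed feedback" setting it contrasts with supervised learning can be made concrete with a minimal tabular Q-learning sketch. The toy two-stage "treatment" chain below (`toy_step`, its states, and all hyperparameters) is invented purely for illustration and is not drawn from the survey: the agent never sees the correct action, only a delayed reward when the terminal "recovered" state is reached.

```python
import random

def q_learning(step, n_states, n_actions, episodes=2000,
               lr=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: learn action values from sampled,
    delayed rewards rather than one-shot supervised labels."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda x: q[s][x]))
            s2, r, done = step(s, a, rng)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += lr * (target - q[s][a])
            s = s2
    return q

# Hypothetical two-stage chain: action 1 ("treat") advances the patient
# state toward the terminal "recovered" state 2; action 0 does nothing.
def toy_step(s, a, rng):
    s2 = min(s + 1, 2) if a == 1 else s
    done = s2 == 2 or rng.random() < 0.05   # recovery, or random dropout
    r = 1.0 if s2 == 2 else 0.0             # reward only on recovery
    return s2, r, done

q = q_learning(toy_step, n_states=3, n_actions=2)
```

After training, the learned values favour "treat" in both non-terminal states, even though no individual step was ever labelled correct, which is the essence of learning from delayed, evaluative feedback.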

Artificial intelligence and the Pharma industry. What should CIOs be doing?


Artificial intelligence has the potential to transform healthcare and the pharma industry, making health services more predictive, more effective and more equitable. But can AI's potential be realised, and what can chief information officers and clinical chief information officers do to get the most out of the new technology? The Healthcare Information and Management Systems Society (HIMSS) Executive Leadership Summit last month, attended by NHS CIOs and CCIOs, leaders from NHS England and NHS Digital, and other health IT innovators, highlighted the potential of AI-driven data analytics to support the NHS's Five Year Forward View by narrowing gaps in health provision. The summit discussed how AI could address the health and wellbeing gap by predicting which individuals are most at risk of illness, allowing the NHS to target treatments accordingly and giving health professionals and patients bespoke diagnostics and treatments. It was also noted that AI could help address the efficiency and funding gap by automating tasks, triaging patients to the most appropriate services and allowing them to self-care.

AI passes a stiff test at London's Moorfields Eye Hospital


England's Grand National, run at Aintree, is gruelling: 30 fences, two with open ditches, over a 2.25-mile course completed twice. AI has just moved up the field in the eHealth equivalent. An AI project at London's Moorfields Eye Hospital with Google's DeepMind has accurately diagnosed eye conditions from scans. As ophthalmologists' workloads and their complexity increase, diagnostic imaging is expanding faster than specialists can interpret the results.

Building health AIs should be UK ambition, says strategy review


A wide-ranging, UK government-commissioned industrial strategy review of the life sciences sector, conducted by Oxford University's Sir John Bell, has underlined the value locked up in publicly funded data held by the country's National Health Service -- and called for a new regulatory framework to be established in order to "capture for the UK the value in algorithms generated using NHS data". The NHS is a free-at-the-point-of-use national health service covering some 65 million users -- which gives you an idea of the unique depth and granularity of the patient data it holds, and of how much value could be created for the nation by using those patient data-sets to develop machine learning algorithms for medical diagnosis and tracking. "AI is likely to be used widely in healthcare and it should be the ambition for the UK to develop and test integrated AI systems that provide real-time data better than human monitoring and prediction of a wide range of patient outcomes in conditions such as mental health, cancer and inflammatory disease," writes Bell in the report. His recommendation that the government and the NHS be proactive about creating and capturing AI-enabled value from valuable, taxpayer-funded health data-sets comes hard on the heels of a lengthy investigation by the UK's data protection watchdog, the ICO, into a controversial 2015 data-sharing arrangement between Google DeepMind and a London-based NHS Trust, the Royal Free Hospitals Trust, to co-develop a clinical task management app.

DeepMind's first deal with the NHS has been torn apart in a new academic study


A data-sharing deal between Google DeepMind and the Royal Free London NHS Foundation Trust was riddled with "inexcusable" mistakes, according to an academic paper published on Thursday. The "Google DeepMind and healthcare in an age of algorithms" paper -- coauthored by Cambridge University's Julia Powles and The Economist's Hal Hodson -- questions why DeepMind was given permission to process millions of NHS patient records so easily and without patient approval. "There remain many ongoing issues and it was important to document how the deal was set up, how it played out in public, and to try to caution against another deal from happening in this way in the future," Powles told Business Insider in Berlin the day before the paper was published. DeepMind and Royal Free say that the study "completely misrepresents the reality of how the NHS uses technology to process data" and that it contains "significant mistakes." Powles and Hodson said the accusations of misrepresentation and factual inaccuracy were unsubstantiated, and invited the parties to respond on the record in an open forum.

Google's DeepMind AI To Use 1 Million NHS Eye Scans To Spot Diseases Earlier - Slashdot


Google DeepMind has announced its second collaboration with the NHS, as part of which it will work with Moorfields Eye Hospital in east London to build a machine learning system that will eventually be able to recognise sight-threatening conditions from just a digital scan of the eye. The five-year research project will draw on one million anonymous eye scans held in Moorfields' patient database, reports Ars Technica, with the aim of speeding up the complex and time-consuming process of analysing eye scans. From the report: The hope is that this will allow common causes of sight loss, like diabetic retinopathy and age-related macular degeneration, to be spotted more rapidly and hence treated more effectively. For example, Google says that up to 98 percent of sight loss resulting from diabetes can be prevented by early detection and treatment. Two million people are already living with sight loss in the UK, of whom around 360,000 are registered as blind or partially sighted.

Google's DeepMind AI Engine to Study Eye Disease Digital Trends


DeepMind, the London-based artificial intelligence lab acquired by Google in 2014, has accomplished more than a few spectacular feats of machine learning. Its neural networks bested a human champion at the notoriously tough game of Go, taught a digital ant-like creature the basic rules of soccer, and teased out winning strategies for more than 49 Atari 2600 games. But now the outfit's algorithms are being tasked with a more humanistic pursuit: eye disease research. On Tuesday, DeepMind announced a long-term project that will see the company's machine-learning algorithms parse "millions" of eye scans to tease out early warning signs that human doctors might otherwise miss. The new project, based at the U.K.'s Moorfields Eye Hospital in east London, is the fruit of DeepMind's ongoing partnership -- dubbed DeepMind Health -- with the country's National Health Service.

Google DeepMind pairs with NHS to use machine learning to fight blindness


Google DeepMind has announced its second collaboration with the NHS, working with Moorfields Eye Hospital in east London to build a machine learning system that will eventually be able to recognise sight-threatening conditions from just a digital scan of the eye. The collaboration is the second between the NHS and DeepMind, the artificial intelligence research arm of Google, but DeepMind's co-founder, Mustafa Suleyman, says this is the first time the company has embarked purely on medical research. An earlier, ongoing collaboration with the Royal Free hospital in north London is focused on direct patient care, using a smartphone app called Streams to monitor patients' kidney function. The Moorfields collaboration is also the first time DeepMind has used machine learning in a healthcare project. At the heart of the research is the sharing of a million anonymous eye scans, which the DeepMind researchers will use to train an algorithm to better spot the early signs of eye conditions such as wet age-related macular degeneration and diabetic retinopathy.

How AI will transform the future of healthcare - Risk Minds Live


Technological advances and artificial intelligence (AI) are going to totally transform the way healthcare is delivered over the next five to ten years. This is the view of Tony Young, National Clinical Director for Innovation at NHS England. But he warns that, with the advent of life-changing technologies, we must not lose sight of what it means to be human. As with the arrival of the printing press 500 years ago, which gave everyone access to the written word, medicine today is having its own "Gutenberg moment". Technology such as smartphones and wearables is giving patients access to medical knowledge and empowering them to take charge of their health and well-being.