DeepMind, the Google-owned AI research company, is facing a class-action lawsuit over its use of the personal records of 1.6 million UK National Health Service (NHS) patients, including confidential medical records. According to PC Gamer, DeepMind received the records to build Streams, a health app intended as an AI-based assistant for healthcare workers and previously used by the NHS.
A class-action lawsuit has been launched against DeepMind, the Google-owned AI research company, over its use of the personal records of 1.6 million patients from the UK's National Health Service (thanks, AI News). The health data was provided by the Royal Free London NHS Foundation Trust in 2015.

DeepMind is known for several achievements, not least kicking everyone's ass at Starcraft 2, but it was given the records in order to create a health app called Streams. This was supposed to be an AI-powered assistant for healthcare professionals and was used by the UK NHS, but no more: this August it was announced that Streams is being decommissioned, and DeepMind's own 'health' section now returns a server error.

The handing-over of patient records to one of the world's biggest technology companies was exposed by New Scientist in 2017, in a report showing that DeepMind had access to far more data than had been publicly announced. The UK Information Commissioner's Office launched an investigation that ruled the Royal Free hospital hadn't done enough to protect patients' privacy, following which DeepMind apologised. "Our investigation found a number of shortcomings in the way patient records were shared for this trial," Information Commissioner Elizabeth Denham said at the time.
"Patients would not have reasonably expected their information to have been used in this way."

The new suit has been launched by lead plaintiff Andrew Prismall, who was a patient at the Royal Free hospital, and includes approximately 1.6 million other affected patients on an 'opt-out' basis: that is, all parties will be included in the action unless they request otherwise.

"Given the very positive experience of the NHS that I have always had during my various treatments, I was greatly concerned to find that a tech giant had ended up with my confidential medical records," said Prismall in a statement.

"As a patient having any sort of medical treatment, the last thing you would expect is your private medical records to be in the hands of one of the world's biggest technology companies. I hope that this case will help achieve a fair outcome and closure for all of the patients whose confidential records were obtained in this instance without their knowledge or consent."
Researchers at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Beth Israel Deaconess Medical Center have teamed up to improve electronic health records (EHRs) with artificial intelligence (AI) and machine learning, publishing their findings in a recent study. Automating patient health records promises benefits to clinicians, patients, and other stakeholders: faster data transfer, lower costs than maintaining paper records, greater efficiency, and better outcomes through fewer clinical errors. However, electronic health records have yet to deliver many of these benefits and are, according to the researchers, a leading cause of burnout and stress among physicians: clinicians spend time navigating the EHR instead of talking with patients. The worldwide electronic health records market was worth USD 26.8 billion in 2020, with North America holding the highest revenue share at 45 percent, according to Grand View Research.
Access to big datasets from e-health records and individual participant data (IPD) meta-analyses is ushering in a new era of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues. #### Summary points A popular type of clinical research is the development of statistical models that predict disease presence and outcome occurrence in individuals [1, 2, 3], thereby informing clinical diagnosis and prognosis. Such models are referred to here as diagnostic and prognostic prediction models, but they have many other names, including risk models, risk scores, and clinical prediction rules. They are typically developed using a multivariable regression framework, which provides an equation to estimate an individual's risk based on the values of multiple predictors (such as age and smoking, or biomarkers and genetic information). …
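The regression framework described above can be made concrete with a short sketch: a hypothetical logistic risk equation with age and smoking as predictors is applied to a simulated "external" cohort and checked for discrimination (the c-statistic) and calibration-in-the-large. All coefficients, predictor choices, and cohort sizes here are invented for illustration; they are not from any published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "published" model: risk = sigmoid(b0 + b_age*age + b_smoker*smoker).
# Coefficients are illustrative only, not from any real prediction model.
b0, b_age, b_smoker = -7.0, 0.08, 0.9

def predicted_risk(age, smoker):
    lp = b0 + b_age * age + b_smoker * smoker   # linear predictor
    return 1.0 / (1.0 + np.exp(-lp))            # logistic link -> risk

# Simulated "external" validation cohort (in practice: e-health records or IPD)
n = 5000
age = rng.uniform(40, 80, n)
smoker = rng.integers(0, 2, n)
true_lp = b0 + b_age * age + b_smoker * smoker
outcome = rng.random(n) < 1.0 / (1.0 + np.exp(-true_lp))

risk = predicted_risk(age, smoker)

def c_statistic(y, p):
    """Discrimination: probability a random case outranks a random non-case."""
    cases, controls = p[y], p[~y]
    greater = (cases[:, None] > controls[None, :]).mean()
    ties = (cases[:, None] == controls[None, :]).mean()
    return greater + 0.5 * ties

print(f"c-statistic: {c_statistic(outcome, risk):.3f}")
print(f"calibration: predicted {risk.mean():.3f} vs observed {outcome.mean():.3f}")
```

External validation in combined datasets follows this pattern at scale: the equation is fixed in advance, and only its performance (discrimination, calibration) is re-estimated in the new population.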
A few groups of VA researchers are using artificial intelligence (AI) to identify Veterans at high risk of hospitalization or death, which can help ensure these Veterans get the best care possible. One potential approach was described in a recent article in the journal PLOS ONE. The idea was to match patients with the right types of care, explains the study's lead author, Dr. Ravi Parikh, a VA cancer physician and investigator with expertise in informatics and health care delivery.
What makes a smart hospital? Last month, Newsweek published its list of the world's best smart hospitals, which was based on a survey that included recommendations from national and international sources. The survey covered five categories: digital surgery, digital imaging, AI, telehealth and electronic health records (EHRs). But it's important for health care leaders to keep in mind that there's another category to measure, and it's the one that matters most to patients: the patient experience. Hospitals have to consider how all of that technology impacts consumers' lived experience of health care.
A new research collaboration between France and the UK casts doubt on growing industry confidence that synthetic data can resolve the privacy, quality, and availability issues (among others) that threaten progress in the machine learning sector. Among several key points, the authors assert that synthetic data modeled on real data retains enough of the genuine information to provide no reliable protection from inference and membership attacks, which seek to deanonymize data and re-associate it with actual people. Furthermore, the individuals most at risk from such attacks, including those with critical medical conditions or high hospital bills (in the case of medical record anonymization), are, through the 'outlier' nature of their condition, most likely to be re-identified by these techniques. 'Given access to a synthetic dataset, a strategic adversary can infer, with high confidence, the presence of a target record in the original data.' The paper also notes that differentially private synthetic data, which obscures the signature of individual records, does indeed protect individuals' privacy, but only by significantly crippling the usefulness of the information retrieval systems that use it.
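The kind of attack the authors describe can be illustrated with a toy numpy sketch (a simplified illustration of the general idea, not the paper's actual attack): a naive synthetic generator resamples real records with small noise, and an adversary scores membership by the distance from a target record to the nearest synthetic record. The outlier "hospital bill" leaves the clearest signature. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "real" dataset: 1-D hospital bills. Most patients are typical; two
# outliers have unusually large bills.
real = np.concatenate([rng.normal(1000, 200, 200), [5000.0, 7000.0]])

def synthesize(data, n, bandwidth=25.0):
    """Naive KDE-style generator: resample real records and add small noise."""
    idx = rng.integers(0, len(data), n)
    return data[idx] + rng.normal(0, bandwidth, n)

synthetic = synthesize(real, 2000)

def min_dist(target, synth):
    """Adversary's score: distance from target to nearest synthetic record.
    A small distance suggests the target was in the original data."""
    return np.abs(synth - target).min()

d_outlier = min_dist(7000.0, synthetic)    # real outlier record (a member)
d_nonmember = min_dist(3000.0, synthetic)  # plausible bill never in the data
print(d_outlier, d_nonmember)              # the outlier is far easier to "find"
```

A differentially private generator would add calibrated noise so no single record leaves such a signature, at the utility cost the paper describes.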
Complication risk profiling is a key challenge in the healthcare domain due to the complex interactions between heterogeneous entities (e.g., visits, diseases, medications) in clinical data. With the availability of real-world clinical data such as electronic health records and insurance claims, many deep learning methods have been proposed for complication risk profiling. However, these existing methods face two open challenges. First, data heterogeneity: these methods leverage clinical data from a single view only, while the data can be considered from multiple views (e.g., a sequence of clinical visits, a set of clinical features). Second, generalized prediction: most of these methods focus on single-task learning, in which each complication onset is predicted independently, leading to suboptimal models. To tackle these issues, we propose a multi-view multi-task network (MuViTaNet) for predicting the onset of multiple complications. In particular, MuViTaNet complements patient representation by using a multi-view encoder to effectively extract information, considering clinical data as both sequences of clinical visits and sets of clinical features. In addition, it leverages additional information from both related labeled and unlabeled datasets to generate more generalized representations, using a new multi-task learning scheme to make more accurate predictions. Experimental results show that MuViTaNet outperforms existing methods for profiling the development of cardiac complications in breast cancer survivors. Furthermore, thanks to its multi-view multi-task architecture, MuViTaNet provides an effective mechanism for interpreting its predictions from multiple perspectives, helping clinicians discover the underlying mechanisms triggering onset and make better clinical treatment decisions in real-world scenarios.
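As a rough structural sketch of the multi-view/multi-task idea (the dimensions, pooling, and fusion below are simplified assumptions, not MuViTaNet's actual architecture), the forward pass can be written as two view encoders whose outputs are fused and fed to one prediction head per complication:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only, not MuViTaNet's real configuration.
d_code, d_hid, n_tasks = 16, 8, 3   # visit embedding, hidden, complications
n_visits, n_feats = 5, 10

def relu(x):
    return np.maximum(x, 0.0)

# View 1: the patient as a *sequence* of clinical visits (mean-pooled here).
W_seq = rng.normal(0, 0.1, (d_code, d_hid))
def encode_visits(visits):              # visits: (n_visits, d_code)
    return relu(visits @ W_seq).mean(axis=0)

# View 2: the patient as a *set* of clinical features.
W_set = rng.normal(0, 0.1, (n_feats, d_hid))
def encode_features(feats):             # feats: (n_feats,)
    return relu(feats @ W_set)

# Multi-task heads: one sigmoid output per complication over the fused views,
# so all complications share the patient representation during training.
W_task = rng.normal(0, 0.1, (n_tasks, 2 * d_hid))
def predict(visits, feats):
    h = np.concatenate([encode_visits(visits), encode_features(feats)])
    return 1.0 / (1.0 + np.exp(-(W_task @ h)))

risks = predict(rng.normal(size=(n_visits, d_code)),
                rng.normal(size=n_feats))
print(risks)   # one onset risk per complication
```

The shared representation is what enables the multi-task gain: gradients from every complication head update the same two encoders.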
The current state of adoption of well-structured electronic health records, and of digital methods for storing medical patient data in structured formats, can often be considered inferior to the use of traditional, unstructured, text-based patient data documentation. Data mining in medical data analysis therefore often has to rely solely on processing unstructured data to retrieve relevant information. In natural language processing (NLP), statistical models have proven successful in various tasks such as part-of-speech tagging, relation extraction (RE), and named entity recognition (NER). In this work, we present GERNERMED, the first open, neural NLP model for NER tasks dedicated to detecting medical entity types in German text data. We avoid the conflicting goals of protecting sensitive patient data from training-data extraction and publishing the statistical model weights by training our model on a custom dataset that was translated from publicly available foreign-language datasets by a pretrained neural machine translation model. The sample code and the statistical model are available at: https://github.com/frankkramer-lab/GERNERMED
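Training such an NER model requires token-level labels, conventionally in the BIO scheme, which can be sketched as follows (the example sentence, span indices, and entity types are illustrative, not drawn from the GERNERMED dataset):

```python
# Convert (token-span, label) annotations to BIO tags, the usual label scheme
# for training token-classification NER models.
def to_bio(tokens, spans):
    """spans: list of (start_tok, end_tok_exclusive, label) tuples."""
    tags = ["O"] * len(tokens)              # default: outside any entity
    for start, end, label in spans:
        tags[start] = f"B-{label}"          # B- marks the entity's first token
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # I- marks its continuation tokens
    return tags

tokens = ["Patient", "erhält", "400", "mg", "Ibuprofen", "täglich"]
spans = [(2, 4, "DOSAGE"), (4, 5, "DRUG")]  # hypothetical entity types
print(list(zip(tokens, to_bio(tokens, spans))))
```

Machine-translating an annotated foreign-language corpus, as described above, additionally requires projecting such spans onto the translated tokens before the BIO conversion.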
Electronic health records have been widely adopted with the hope they would save time and improve the quality of patient care. But due to fragmented interfaces and tedious data entry procedures, physicians often spend more time navigating these systems than they do interacting with patients. Researchers at MIT and the Beth Israel Deaconess Medical Center are combining machine learning and human-computer interaction to create a better electronic health record (EHR). They developed MedKnowts, a system that unifies the processes of looking up medical records and documenting patient information into a single, interactive interface. Driven by artificial intelligence, this "smart" EHR automatically displays customized, patient-specific medical records when a clinician needs them.