Saykara today announced the release of Kara 2.0, an AI-powered healthcare assistant that further simplifies the documentation process for physicians. Now featuring Ambient Mode, Kara 2.0 is a breakthrough AI-powered voice application for healthcare: physicians and patients interact as they normally do while Saykara listens, transcribes speech to text, parses the text into structured data, and intelligently completes each form in a patient's electronic health record (EHR, or chart). Saykara then automatically generates a clinic note including patient history, physical, assessment, plan, orders, and referrals. With the release of Ambient Mode, Saykara is the only virtual healthcare assistant that can be used passively 'in the room' during physician-patient appointments, with no voice commands. Ambient Mode builds on Saykara's versatile, specialty-agnostic design, allowing it to better serve up to 18 disparate healthcare specialties, including primary care, pediatrics, internal medicine, orthopedics, urology, and more.
The next time you see your physician, consider the times you fill in a paper form. It may seem trivial, but the information could be crucial to making a better diagnosis. Now consider the other forms of healthcare data that permeate your life, and the lives of the doctors, nurses, and clinicians working to keep patients thriving. Forms and diagnostic reports are just two examples. The volume of such information is staggering, yet fully utilizing this data is key to reducing healthcare costs, improving patient outcomes, and advancing other healthcare priorities.
A former patient of the University of Chicago Medical Center is suing the institution amid claims it violated patients' privacy rights. The class-action lawsuit claims records containing identifiable patient information were shared as a result of a partnership between Google and the University of Chicago. All three institutions are named as defendants in the suit, which was filed Wednesday in the Northern District of Illinois by Matt Dinerstein, who received treatment at the medical center during two hospital stays in 2015. The collaboration between Google and the University of Chicago was launched in 2017 to study electronic health records and develop new machine-learning techniques to create predictive models that could prevent unplanned hospital readmissions, avoid costly complications and save lives, according to a 2017 news release from the university. The tech giant has similar partnerships with Stanford University and the University of California-San Francisco.
De-identification of clinical records is an extremely important process that enables use of the wealth of information they contain. Many techniques are available for this task, but no published implementation has evaluated scalability, which is an important benchmark. We evaluated numerous deep learning techniques, such as BiLSTM-CNN, IDCNN, CRF, BiLSTM-CRF, and spaCy, on both performance and efficiency. We propose that the spaCy model implementation for scrubbing sensitive PHI from medical records is both well performing and extremely efficient compared to other published models.
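The core of any de-identification pipeline is the scrubbing step: once a NER model (such as spaCy's) has tagged PHI spans, each span is replaced with a category placeholder. The minimal sketch below shows only that replacement step; the example note, span offsets, and labels are illustrative and not from the abstract.

```python
# Minimal sketch of the scrubbing step: given PHI entity spans, as a NER
# model such as spaCy's would produce, replace each span with its label.
def scrub(text, entities):
    """entities: list of (start, end, label) character spans, non-overlapping."""
    out, cursor = [], 0
    for start, end, label in sorted(entities):
        out.append(text[cursor:start])   # keep text before the PHI span
        out.append(f"[{label}]")         # replace the span with a placeholder tag
        cursor = end
    out.append(text[cursor:])            # keep the remainder of the note
    return "".join(out)

note = "John Smith was admitted on 03/14/2015 to Mercy Hospital."
spans = [(0, 10, "PATIENT"), (27, 37, "DATE"), (41, 55, "HOSPITAL")]
print(scrub(note, spans))  # → [PATIENT] was admitted on [DATE] to [HOSPITAL].
```

In a real pipeline the `spans` would come from the model (e.g. `[(e.start_char, e.end_char, e.label_) for e in doc.ents]` in spaCy), and overlapping spans would need to be merged first.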
In Star Wars: The Empire Strikes Back, Luke Skywalker is rescued from the frozen wastes of Hoth after a near-fatal encounter and, luckily, returned to a medical facility filled with advanced robotics and futuristic technology that treat his wounds and quickly bring him back to health. The healthcare industry could be headed toward yet another high-tech makeover (even as it continues to adapt to the advent of electronic health records systems and other healthcare IT products) as artificial intelligence (AI) improves. Could AI applications become the new normal across virtually every sector of the healthcare industry? Many experts believe it is inevitable and coming sooner than you might expect. AI can be simply defined as computers and computer software capable of intelligent behavior, such as analysis and learning.
Large-capacity machine learning models are prone to membership inference attacks, in which an adversary aims to infer whether a particular data sample is a member of the target model's training dataset. Such membership inferences can lead to serious privacy violations, as machine learning models are often trained on privacy-sensitive data such as medical records and controversial user opinions. Recently, defenses against membership inference attacks have been developed, in particular based on differential privacy and adversarial regularization; unfortunately, such defenses severely impact the classification accuracy of the underlying machine learning models. In this work, we present a new defense against membership inference attacks that preserves the utility of the target machine learning models significantly better than prior defenses. Our defense, called distillation for membership privacy (DMP), leverages knowledge distillation, a model compression technique, to train machine learning models with membership privacy. We use different techniques in DMP to maximize its membership privacy with minor degradation to utility. DMP works effectively against attackers with either whitebox or blackbox access to the target model. We evaluate DMP's performance through extensive experiments on different deep neural networks and various benchmark datasets. We show that DMP provides a significantly better tradeoff between inference resilience and classification performance than state-of-the-art membership inference defenses. For instance, a DMP-trained DenseNet provides a classification accuracy of 65.3% at a 54.4% (54.7%) blackbox (whitebox) membership inference attack accuracy, while an adversarially regularized DenseNet provides a classification accuracy of only 53.7% at a (much worse) 68.7% (69.5%) blackbox (whitebox) membership inference attack accuracy.
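DMP builds on vanilla knowledge distillation: a teacher trained on the private data produces temperature-softened predictions, and a student is trained to match them, so the student never sees the private labels directly. The sketch below shows only that underlying distillation objective, not DMP's specific reference-data selection or privacy analysis; the temperature value is an illustrative assumption.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between the teacher's softened distribution (the soft
    labels) and the student's softened distribution -- the objective a
    distillation-based defense minimizes on non-private reference data."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean())

teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))      # entropy of the soft labels
print(distillation_loss([0.0, 0.0, 0.0], teacher))  # higher: student disagrees
```

By Gibbs' inequality the loss is minimized exactly when the student's softened distribution matches the teacher's, which is what drives the student toward the teacher's behavior without direct access to member data.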
Spatio-temporal event data is ubiquitous in applications such as social media, crime events, and electronic health records. Spatio-temporal point processes offer a versatile framework for modeling such event data, as they can jointly capture spatial and temporal dependencies. A key question is how to estimate the generative model for such point processes, which enables subsequent machine learning tasks. Existing works mainly focus on parametric models for the conditional intensity function, such as the widely used multi-dimensional Hawkes processes. However, parametric models tend to lack flexibility on real data. On the other hand, non-parametric models for spatio-temporal point processes tend to be less interpretable. We introduce a novel and flexible semi-parametric spatio-temporal point process model that combines spatial statistical models based on heterogeneous Gaussian mixture diffusion kernels, whose parameters are represented using neural networks. We learn the model in a reinforcement learning framework, where the reward function is defined via the maximum mean discrepancy (MMD) between the empirical processes generated by the model and the real data. Experiments on real data show the superior performance of our method relative to the state of the art.
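The MMD reward compares generated and real event samples by embedding both in a kernel space. A minimal sketch of the (biased) empirical squared MMD with an RBF kernel is below; the kernel choice and bandwidth are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between point sets x (n, d) and y (m, d)."""
    diff = x[:, None, :] - y[None, :, :]          # (n, m, d) pairwise differences
    return np.exp(-gamma * (diff ** 2).sum(-1))   # (n, m) kernel values

def mmd2(x, y, gamma=1.0):
    """Biased empirical squared MMD: zero when the samples coincide,
    positive when their distributions differ."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

real = np.array([[0.0, 0.1], [1.0, 0.9], [0.5, 0.5]])   # (x, t) events
generated = np.array([[5.0, 0.2], [6.0, 0.8]])
print(mmd2(real, generated))   # large: the generated events are far off
```

In the reinforcement learning loop, the negative of this quantity (or a decrease in it) would serve as the reward signal pushing the generator's events toward the real process.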
Effective modeling of electronic health records (EHR) is rapidly becoming an important topic in both academia and industry. A recent study showed that utilizing the graphical structure underlying EHR data (e.g., relationships between diagnoses and treatments) improves the performance of prediction tasks such as heart failure diagnosis prediction. However, EHR data do not always contain complete structure information. Moreover, when it comes to claims data, structure information is completely unavailable to begin with. Under such circumstances, can we still do better than treating EHR data as a flat-structured bag of features? In this paper, we study the possibility of utilizing the implicit structure of EHR by using the Transformer for prediction tasks on EHR data. Specifically, we argue that the Transformer is a suitable model for learning the hidden EHR structure, and we propose the Graph Convolutional Transformer, which uses data statistics to guide the structure learning process. Our model empirically demonstrates superior prediction performance to previous approaches on both synthetic and publicly available EHR data, on encounter-based tasks such as graph reconstruction and readmission prediction, indicating that it can serve as an effective general-purpose representation learning algorithm for EHR data.
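The central idea, attention guided by data statistics, can be sketched as self-attention whose scores are biased toward a co-occurrence prior estimated from the data. This is a simplified illustration of the guidance mechanism, not the Graph Convolutional Transformer's exact formulation; the prior matrix, shapes, and the additive log-prior bias are assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def prior_guided_attention(Q, K, V, prior, eps=1e-9):
    """Self-attention over encounter features, with scores biased toward a
    conditional co-occurrence prior: prior[i, j] ~ P(feature j | feature i)
    estimated from data statistics. Adding log(prior) to the scaled
    dot-product scores steers the learned 'graph' toward the structure
    actually observed in the EHR data."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + np.log(prior + eps)
    attn = softmax(scores)          # rows sum to 1: a soft adjacency matrix
    return attn @ V, attn

# With uninformative queries/keys, attention falls back to the prior itself.
prior = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.1, 0.8]])
Q = K = np.zeros((3, 4))
V = np.arange(12.0).reshape(3, 4)
out, attn = prior_guided_attention(Q, K, V, prior)
```

As the queries and keys are learned, the model can depart from the prior where the data supports it, which is the intended interplay between statistics-based guidance and learned structure.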
In medicine, both ethical and monetary costs of incorrect predictions can be significant, and the complexity of the problems often necessitates increasingly complex models. Recent work has shown that changing just the random seed is enough for otherwise well-tuned deep neural networks to vary in their individual predicted probabilities. In light of this, we investigate the role of model uncertainty methods in the medical domain. Using RNN ensembles and various Bayesian RNNs, we show that population-level metrics, such as AUC-PR, AUC-ROC, log-likelihood, and calibration error, do not capture model uncertainty. Meanwhile, the presence of significant variability in patient-specific predictions and optimal decisions motivates the need for capturing model uncertainty. Understanding the uncertainty for individual patients is an area with clear clinical impact, such as determining when a model decision is likely to be brittle. We further show that RNNs with only Bayesian embeddings can be a more efficient way to capture model uncertainty compared to ensembles, and we analyze how model uncertainty is impacted across individual input features and patient subgroups.
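The seed-sensitivity point above has a simple operational form: train an ensemble of otherwise identical models with different seeds and inspect the spread of each patient's predicted probability. The sketch below computes that patient-level disagreement; the toy numbers are illustrative, not from the paper.

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """member_probs: (n_members, n_patients) predicted probabilities from
    models differing only in random seed. Returns the mean prediction and
    the per-patient standard deviation across members -- the individual-level
    disagreement that population metrics like AUC-ROC do not capture."""
    p = np.asarray(member_probs, dtype=float)
    return p.mean(axis=0), p.std(axis=0)

# Two seeds agree on patient 0 but disagree sharply on patient 1,
# even though both patients end up with the same mean prediction.
mean, spread = ensemble_uncertainty([[0.5, 0.9],
                                     [0.5, 0.1]])
print(mean)    # → [0.5 0.5]
print(spread)  # → [0.  0.4]
```

A high spread flags a brittle decision for that patient, the kind of case where the paper argues a Bayesian treatment (e.g., Bayesian embeddings) or a larger ensemble is warranted before acting on the model's output.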
"For example, to determine the most appropriate protocol, machine learning not only could draw from information on the examination order but could also potentially mine the electronic medical record, prior examination protocols and examination reports, CT or MRI scanner data, the contrast injection system and contrast agent data, the cumulative or annual radiation dose, and other quantitative data," the authors wrote.