Using data from a statewide health information exchange, researchers have created machine learning algorithms that are able to identify patients who need advanced treatment for depression. According to Regenstrief Institute and Indiana University researchers, identifying cases of depression that require advanced care can be challenging for primary care physicians. However, they contend that their models--which leverage diagnostic, behavioral and demographic data, as well as past visit history from an HIE--can help PCPs predict which patients may be more at risk for adverse events from depression. Researchers created models for the entire patient population at Eskenazi Health, the public safety net healthcare system for Marion County, Indiana, as well as several different high-risk patient populations. "This study demonstrates the ability to automate screening for patients in need of advanced care for depression across an overall patient population or various high-risk patient groups using structured datasets covering acute and chronic conditions, patient demographics, behaviors and past visit history," conclude researchers in a recent article published in the Journal of Medical Internet Research.
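The study's models combine structured features such as diagnoses, demographics, and visit history into a risk prediction. As a loose illustration only (not the Regenstrief/IU team's actual model, whose features and weights are not given here), a screening rule over structured record fields might look like the following toy weighted score; every feature name, weight, and threshold below is invented:

```python
# Toy risk-screening sketch over structured patient-record features.
# All feature names, weights, and the cutoff are hypothetical.
WEIGHTS = {
    "prior_depression_dx": 2.0,   # prior depression diagnosis on record
    "er_visits_last_year": 0.5,   # per emergency visit
    "missed_appointments": 0.3,   # per missed visit
    "substance_use_dx": 1.5,      # substance use disorder diagnosis
}
THRESHOLD = 3.0  # arbitrary cutoff for "flag for advanced-care review"

def risk_score(record):
    """Weighted sum over whichever structured features are present."""
    return sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)

def needs_review(record):
    return risk_score(record) >= THRESHOLD

patient = {"prior_depression_dx": 1, "er_visits_last_year": 3,
           "missed_appointments": 2}
print(risk_score(patient))    # approx. 4.1
print(needs_review(patient))  # True
```

A real model of this kind would learn weights from labeled outcomes (e.g., via logistic regression) rather than hand-setting them, but the structured-data-in, risk-flag-out shape is the same.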
Early results in using convolutional neural networks (CNNs) on x-rays to diagnose disease have been promising, but it has not yet been shown that models trained on x-rays from one hospital or one group of hospitals will work equally well at different hospitals. Before these tools are used for computer-aided diagnosis in real-world clinical settings, we must verify their ability to generalize across a variety of hospital systems. A cross-sectional design was used to train and evaluate pneumonia screening CNNs on 158,323 chest x-rays from NIH (n=112,120 from 30,805 patients), Mount Sinai (n=42,396 from 12,904 patients), and Indiana (n=3,807 from 3,683 patients). In 3 of 5 natural comparisons, performance on chest x-rays from outside hospitals was significantly lower than on held-out x-rays from the original hospital systems. CNNs were able to detect where an x-ray was acquired (hospital system, hospital department) with extremely high accuracy and calibrate predictions accordingly. The performance of CNNs in diagnosing diseases on x-rays may reflect not only their ability to identify disease-specific imaging findings on x-rays, but also their ability to exploit confounding information. Estimates of CNN performance based on test data from hospital systems used for model training may overstate their likely real-world performance.
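The generalization check described above amounts to comparing a model's discrimination on an internal held-out test set against its discrimination on an external hospital's data. A minimal sketch of that comparison, using a dependency-free ROC AUC computation and synthetic scores as stand-ins for real CNN predictions (the numbers below are invented, not the study's):

```python
# Sketch: internal vs. external-site evaluation of a classifier's AUC.
# Labels and scores are synthetic stand-ins for real CNN outputs.

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney formulation (no external libraries):
    the fraction of positive/negative pairs the positive score wins,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predictions: cleanly separated on the "internal" held-out set,
# noticeably worse on the "external" hospital's x-rays.
internal_y = [1, 1, 1, 0, 0, 0]
internal_s = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
external_y = [1, 1, 1, 0, 0, 0]
external_s = [0.6, 0.9, 0.4, 0.5, 0.3, 0.7]

print(auc(internal_y, internal_s))  # 1.0
print(auc(external_y, external_s))  # lower, about 0.667
```

A gap like this between internal and external AUC is exactly the signal the study reports: the model may be leaning on site-specific confounders rather than disease findings alone.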
It's a familiar frustration for cable TV subscribers: they get fed up with their provider and want to switch, but have few alternatives available. For residents of Los Angeles, Sacramento, Houston and Indianapolis, 5G could change that this year. These are the four markets where Verizon will be testing 5G later this year. 5G networks have been touted as the next big thing for wireless consumers, a way for them to get faster service, with videos that will open immediately and downloads that will take seconds instead of minutes.
Artificial intelligence (AI) tools trained to detect pneumonia on chest X-rays suffered significant decreases in performance when tested on data from outside health systems, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published in a special issue of PLOS Medicine on machine learning and health care. These findings suggest that artificial intelligence in the medical space must be carefully tested for performance across a wide range of populations; otherwise, the deep learning models may not perform as accurately as expected. As interest grows in using computer system frameworks called convolutional neural networks (CNN) to analyze medical imaging and provide a computer-aided diagnosis, recent studies have suggested that AI image classification may not generalize to new data as well as commonly portrayed. Researchers at the Icahn School of Medicine at Mount Sinai assessed how AI models identified pneumonia in 158,000 chest X-rays across three medical institutions: the National Institutes of Health; The Mount Sinai Hospital; and Indiana University Hospital. Researchers chose to study the diagnosis of pneumonia on chest X-rays for its common occurrence, clinical significance, and prevalence in the research community.
Computers and machine learning can improve access to unstructured clinical data to support public health reporting, including cancer case detection, according to a recent study. Often, the unstructured free text data made available by electronic health records is obtained by means that are "resource intensive, inherently complex and rely on structured clinical data and dictionary-based approaches," according to the authors of the study, published in the Journal of Biomedical Informatics. The researchers, from the Regenstrief Institute and Indiana University-Purdue University Indianapolis, used about 7,000 pathology reports from the Indiana health information exchange to attempt to detect cancer cases using already available algorithms and open source machine learning tools. "We think that it's no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not," Shaun Grannis, M.D., interim director of the Regenstrief Center for Biomedical Informatics, said in an announcement. "We have come to the point in time that technology can handle this."
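The approach described above, classifying free-text pathology reports as cancer-positive or not with open-source machine learning tools, can be illustrated with a minimal bag-of-words Naive Bayes classifier. This is a sketch only, not the study's actual pipeline, and the reports, labels, and vocabulary below are invented toy examples:

```python
# Toy Naive Bayes text classifier for flagging pathology reports.
# All reports and labels are invented; real systems train on
# thousands of labeled reports with richer preprocessing.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(docs, labels):
    """Count word frequencies per class and class priors."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for doc, y in zip(docs, labels):
        counts[y].update(tokenize(doc))
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict(model, text):
    """Pick the class with the higher log-probability,
    using add-one (Laplace) smoothing for unseen words."""
    counts, priors, vocab = model
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        logp = math.log(priors[y] / sum(priors.values()))
        for w in tokenize(text):
            logp += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = logp
    return max(scores, key=scores.get)

reports = [
    "malignant carcinoma identified in biopsy",
    "invasive adenocarcinoma present",
    "benign tissue no malignancy seen",
    "normal cells no evidence of carcinoma",
]
labels = [1, 1, 0, 0]  # 1 = cancer present, 0 = not present
model = train(reports, labels)
print(predict(model, "biopsy shows malignant adenocarcinoma"))  # 1
```

The point of the study is that even simple, already-available classifiers like this, trained at scale, can take over the first-pass review of report text that humans currently do by hand.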