Radiology extenders who read chest X-rays save attending radiologists more time during the day than radiology residents do, potentially streamlining workflow and alleviating provider burnout. At least that has been the experience for researchers at the University of Pennsylvania. Radiologists in their department read more cases per hour when the drafts came from radiology extenders than from residents, resulting in nearly an hour – 51 minutes – of provider time saved each day. The authors shared their experience on Oct. 13 in the Journal of the American College of Radiology. "Interpreting these radiographs entails a disproportionate amount of work (e.g., retrieving patient history, completing standard dictation templates, and ensuring proper communication of important findings before finalization of reports). Given low reimbursement rates for these studies, economic necessities push radiologists to provide faster interpretations, contributing to burnout," said the team led by Arijitt Borthakur, MBA, Ph.D., senior research investigator in the Perelman School of Medicine radiology department.
In research published today in Patterns, a team of engineers led by Wang demonstrated how a deep learning algorithm can be applied to a conventional computerized tomography (CT) scan in order to produce images that would typically require a higher level of imaging technology known as dual-energy CT. Wenxiang Cong, a research scientist at Rensselaer, is first author on this paper. Wang and Cong were also joined by coauthors from Shanghai First-Imaging Tech, and researchers from GE Research. "We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis," said Wang, who is also the director of the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer. Conventional CT scans produce images that show the shape of tissues within the body, but they don't give doctors sufficient information about the composition of those tissues.
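Dual-energy CT can report tissue composition because measuring attenuation at two X-ray energies lets each voxel be decomposed into contributions from two basis materials (e.g., water and bone), a 2x2 linear system per voxel. The sketch below illustrates that decomposition step only; the attenuation coefficients and measured values are illustrative placeholders, not NIST data, and this is not the deep learning method from the Patterns paper.

```python
import numpy as np

# Hypothetical mass-attenuation coefficients of two basis materials
# (water, bone) at two X-ray energies (low kVp, high kVp).
# Real values would come from tabulated NIST data; these are made up.
basis = np.array([[0.18, 0.26],   # low energy:  [water, bone]
                  [0.15, 0.18]])  # high energy: [water, bone]

# Attenuation of one voxel as measured at both energies (illustrative)
measured = np.array([0.21, 0.16])

# Solve basis @ densities = measured for the per-material densities
densities = np.linalg.solve(basis, measured)
```

With a single-spectrum scan only one equation is available per voxel, so the system is underdetermined; this is the information a learned mapping would have to supply.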
Teledyne DALSA, a Teledyne Technologies company and global leader in digital X-ray image sensing technology, is pleased to announce its presence at the CMEF 2020 Technical Exhibition, October 19-22, Shanghai, PR of China, in booth #4.1C27. Teledyne DALSA's full portfolio of groundbreaking CMOS imaging technology delivers superior resolution in real time, with switchable saturation dose, high dynamic range, calibration stability, and unsurpassed low-dose signal-to-noise performance. The innovative Xineos CMOS X-ray detectors deliver superior quality and performance, and provide the best digital imaging solutions for mobile C-arm surgical systems, diagnostic 2D and 3D mammography, CBCT, and other demanding X-ray imaging applications. Teledyne DALSA will showcase its proprietary software for generic tomosynthesis reconstruction, which supports various imaging geometries and scanning approaches. The new software technology enables X-ray tomosynthesis imaging modalities that speed the integration of X-ray detectors and deliver new opportunities for system-level innovation.
"A Google Maps for surgeons" is how Perimeter Medical Imaging AI Inc. (TSXV: PINK) President and CFO Jeremy Sobotta described the AI software currently being developed by the company to complement its FDA-cleared medical imaging system at a recent investment conference. Perimeter is a medical technology company working to transform cancer surgery by creating ultra-high-resolution, real-time, advanced imaging tools to address unmet medical needs. The imaging tools have already been developed and are approved in ophthalmology and cardiology (optical coherence tomography or OCT). Perimeter is using this imaging technology (OTIS or Optical Tissue Imaging Console) to assess the tissues surrounding the known cancerous target area to determine whether more tissue should be removed during the ongoing surgery. The imaging technology has the ability to rapidly image large and complex surfaces.
Artificial intelligence (AI) algorithms designed to independently interpret mammograms might be gaining ground when it comes to radiologist confidence, but the public-at-large does not share this sentiment. In fact, nearly 80 percent of women say they oppose letting AI interpret their studies without a radiologist being directly involved somehow. In a study published Oct. 12 in the Journal of the American College of Radiology, a team of investigators from the University of Groningen, in The Netherlands, revealed that, when asked whether they were comfortable with having AI alone analyze their mammograms for signs of breast cancer, 78 percent of women surveyed said no. "Despite recent breakthroughs in the diagnostic performance of AI algorithms for the interpretation of screening mammograms, the general population currently does not support a fully independent use of such systems without involving a radiologist," said the team led by Yfke Ongena, Ph.D., assistant professor with the university's Center of Language and Cognition. Given those advancements and study results showing that AI can outperform radiologists, this response is surprising, they said. But it shows that radiology as an industry should pump the brakes on implementing any AI algorithm as the sole interpreter of mammograms, and should actively work to educate women about the capabilities and efficacy of these algorithms.
Improving radiologist efficiency and preventing burnout is a primary goal for healthcare providers. A nationwide study published in Mayo Clinic Proceedings in 2015 showed radiologist burnout at a concerning 61%. In addition, the report concludes that "burnout and satisfaction with work-life balance in US physicians worsened from 2011 to 2014. More than half of US physicians are now experiencing professional burnout." As technologists, we're looking for ways to put new and innovative solutions in the hands of physicians to make them more efficient, reduce burnout, and improve care quality.
"The first time I saw an obstetrics ultrasound exam, I thought it was a little bit like magic," said Dr. Susanne Johnson, Associate Specialist in Gynecology at Princess Anne Hospital in the United Kingdom. "I thought to myself, 'now that's something I want to do.'" Now as a gynecologist who specializes in helping diagnose some of the most difficult gynecological diseases and teaching others her skills, Dr. Johnson is using advanced ultrasound technology that has a new bit of'magic' built-in: automated tools and artificial intelligence. "Every patient I have is a bit of a puzzle and my job is to try and develop a hypothesis of what's the problem. Then, I can use ultrasound to try and prove it and point the patient in the right direction to get the care they need," said Dr. Johnson.
This article shows how to train a convolutional neural network to reduce noise in CT images; the examples focus on low-dose CT scans from the recent American Association of Physicists in Medicine low-dose challenge dataset, but the principles apply to medical and nonmedical images alike. The authors also explore mathematical and visually weighted loss functions to adjust the appearance of the output: human visual feature weighting can be incorporated into the loss term to improve the visual appearance of the filtered images. Medical imaging is driven to produce the best possible images while reducing the radiation dose or acquisition time. Normally, this trade-off is managed by using the best available detection systems and experimenting with different acquisition techniques. Recently, deep learning methods have been applied to images acquired with low dose (or shorter acquisition time, in the case of MRI) to produce images that appear similar to full-dose images.
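The training loop described above can be sketched in miniature. The NumPy example below learns a single 3x3 convolution kernel (a stand-in for a full CNN, which the article actually uses) on a synthetic clean/noisy image pair, minimizing a weighted MSE whose edge-based weight map plays the role of the visual feature weighting; the image, noise level, and weighting scheme are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(img, k):
    """'Same'-size 3x3 convolution with edge padding."""
    p = np.pad(img, 1, mode="edge")
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + H, j:j + W]
    return out

# Synthetic stand-in for a full-dose / low-dose image pair
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                         # a bright square "organ"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Visual weighting: penalize errors near edges more heavily,
# mimicking a perceptually weighted loss term (illustrative choice)
gy, gx = np.gradient(clean)
weight = 1.0 + 4.0 * np.hypot(gx, gy)

# Fit the kernel by gradient descent on the weighted MSE
k = 0.1 * rng.standard_normal((3, 3))
lr = 0.1
H, W = clean.shape
p = np.pad(noisy, 1, mode="edge")
for _ in range(500):
    resid = conv3x3(noisy, k) - clean
    grad = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            # d/dk_ij of mean(weight * resid**2)
            grad[i, j] = 2.0 * np.mean(weight * resid * p[i:i + H, j:j + W])
    k -= lr * grad

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((conv3x3(noisy, k) - clean) ** 2)
```

Because the loss is quadratic in the kernel taps, plain gradient descent converges; the learned filter trades a little edge blur for a large reduction in noise, which is exactly the dose/quality trade-off the article discusses.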
GE Healthcare (NYSE:GE) announced that it received FDA 510(k) clearance for its Ultra Edition package on Vivid cardiovascular ultrasound systems. The Ultra Edition package on Vivid cardiovascular ultrasound systems includes new features based on artificial intelligence (AI), designed to enable clinicians to acquire faster, more repeatable exams on a consistent basis, according to a news release. Vivid Ultra Edition reduces exam time through up to 80% fewer clicks, 99% accuracy, and less inter-operator variability, the company said. The AI features within the Vivid Ultra Edition include automatic detection of the appropriate measurements on spectral Doppler images and of relevant points in the image for deriving measurements of the left ventricle, along with view recognition of which standard 2D scan plane is acquired. Other features include HD color displays, FlexiLight for improving visualizations, and a 4D TTE pediatric probe for 2D and 4D imaging in pediatric patients ranging from neonates to teenagers.