BLS-GAN: A Deep Layer Separation Framework for Eliminating Bone Overlap in Conventional Radiographs
Wang, Haolin, Ou, Yafei, Ambalathankandy, Prasoon, Ota, Gen, Dai, Pengyu, Ikebe, Masayuki, Suzuki, Kenji, Kamishima, Tamotsu
Conventional radiography is a widely used imaging technology for diagnosing, monitoring, and prognosticating musculoskeletal (MSK) diseases because of its easy availability, versatility, and cost-effectiveness. In conventional radiographs, bone overlaps are prevalent and can impede the accurate assessment of bone characteristics by radiologists or algorithms, posing significant challenges to conventional and computer-aided diagnoses. This work initiated the study of a challenging scenario - bone layer separation in conventional radiographs - in which separating overlapped bone regions enables the independent assessment of the bone characteristics of each bone layer and lays the groundwork for MSK disease diagnosis and its automation. This work proposed a Bone Layer Separation GAN (BLS-GAN) framework that can produce high-quality bone layer images with reasonable bone characteristics and texture. The framework introduced a reconstructor based on conventional radiography imaging principles, which achieved efficient reconstruction and mitigated the recurrent-calculation and training-instability issues caused by soft tissue in the overlapped regions. Additionally, pre-training with synthetic images was implemented to enhance the stability of both the training process and the results. The generated images passed a visual Turing test and improved performance in downstream tasks. This work affirms the feasibility of extracting bone layer images from conventional radiographs, which holds promise for leveraging bone layer separation technology to facilitate more comprehensive analytical research in MSK diagnosis, monitoring, and prognosis. Code and dataset will be made available.
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.46)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
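The "conventional radiography imaging principles" behind such a reconstructor plausibly refer to Beer-Lambert attenuation: transmittance through stacked layers is the product of per-layer transmittances, so separated bone layers recombine additively in the log-intensity domain. A minimal sketch of that compositing step, assuming this interpretation (function and variable names are hypothetical; this is not the authors' code):

```python
import numpy as np

def compose_layers(layer_transmittances):
    """Recombine separated layer images into an overlapped radiograph.

    Under the Beer-Lambert law, I / I0 = exp(-sum_k mu_k * t_k),
    i.e. per-layer attenuations add in log space.
    """
    layers = np.asarray(layer_transmittances, dtype=np.float64)
    # Clip to avoid log(0); transmittance lies in (0, 1].
    layers = np.clip(layers, 1e-6, 1.0)
    log_atten = -np.log(layers)      # per-layer attenuation maps
    total = log_atten.sum(axis=0)    # attenuations add across layers
    return np.exp(-total)            # back to transmittance

# Two uniform layers each transmitting 50% overlap to transmit 25%.
a = np.full((2, 2), 0.5)
b = np.full((2, 2), 0.5)
overlap = compose_layers([a, b])
```

Under this model, a GAN only needs to predict each layer's attenuation map; the overlapped image is then reproduced deterministically, which is one way such a reconstructor could avoid repeatedly re-estimating the shared soft-tissue contribution.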
A Unified Model for Longitudinal Multi-Modal Multi-View Prediction with Missingness
Chen, Boqi, Oliva, Junier, Niethammer, Marc
Medical records often consist of different modalities, such as images, text, and tabular information. Integrating all modalities offers a holistic view of a patient's condition, while analyzing them longitudinally provides a better understanding of disease progression. However, real-world longitudinal medical records present challenges: 1) patients may lack some or all of the data for a specific timepoint, and 2) certain modalities or views might be absent for all patients during a particular period. In this work, we introduce a unified model for longitudinal multi-modal multi-view prediction with missingness. Our method allows as many timepoints as desired for input, and aims to leverage all available data, regardless of their availability. We conduct extensive experiments on the knee osteoarthritis dataset from the Osteoarthritis Initiative for pain and Kellgren-Lawrence grade prediction at a future timepoint. We demonstrate the effectiveness of our method by comparing results from our unified model to specific models that use the same modality and view combinations during training and evaluation. We also show the benefit of having extended temporal data and provide post-hoc analysis for a deeper understanding of each modality/view's importance for different tasks.
- North America > United States > North Carolina (0.04)
- North America > Canada > Ontario > Hamilton (0.04)
- Europe > Netherlands > South Holland > Rotterdam (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.94)
- Health & Medicine > Therapeutic Area > Rheumatology (0.88)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
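One common mechanism for "leveraging all available data, regardless of their availability" is masked pooling over whatever modality/view embeddings are present at a timepoint. The sketch below is an illustrative mechanism only, not the paper's architecture:

```python
import numpy as np

def fuse_available(embeddings, mask):
    """Masked mean-pooling over modality/view embeddings.

    embeddings: (M, D) array, one row per modality/view.
    mask: length-M indicator, 1 if that modality/view is present.
    Missing entries contribute nothing, so any subset of inputs works.
    """
    mask = np.asarray(mask, dtype=float)[:, None]
    return (embeddings * mask).sum(axis=0) / np.maximum(mask.sum(), 1.0)

# Three modality embeddings; the second is missing for this patient.
emb = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
fused = fuse_available(emb, [1, 0, 1])
```

Because the fused vector has a fixed dimension regardless of which inputs are present, a single downstream predictor can serve every modality/view combination.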
Leveraging cough sounds to optimize chest x-ray usage in low-resource settings
Philip, Alexander, Chawla, Sanya, Jover, Lola, Kafentzis, George P., Brew, Joe, Saraf, Vishakh, Vijayan, Shibu, Small, Peter, Chaccour, Carlos
Chest X-ray is a commonly used tool during the triage, diagnosis, and management of respiratory diseases. In resource-constrained settings, optimizing this resource can lead to valuable cost savings for the health care system and for patients, as well as to shorter consult times. We used prospectively collected data from 137 patients referred for chest X-ray at the Christian Medical Center and Hospital (CMCH) in Purnia, Bihar, India. Each patient provided at least five coughs while awaiting radiography. Collected cough sounds were analyzed using acoustic AI methods. Temporal and spectral features were extracted from each patient's cough sounds, summarized using standard statistical approaches, and evaluated with cross-validation. Three models were developed, tested, and compared in their capacity to predict an abnormal result on the chest X-ray. All three methods yielded models that could discriminate to some extent between normal and abnormal findings, with logistic regression performing best, with areas under the receiver operating characteristic curve ranging from 0.7 to 0.78. Despite its limitations and relatively small sample size, this study shows that AI-enabled algorithms can use cough sounds to predict which individuals presenting for chest radiographic examination will have a normal or abnormal result. These results call for expanding this research, given the potential to optimize limited health care resources in low- and middle-income countries.
- Asia > India > Bihar (0.25)
- Europe > Switzerland > Basel-City > Basel (0.05)
- Asia > India > Maharashtra > Mumbai (0.05)
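The modelling pipeline described above (per-patient summaries of temporal and spectral cough features, then a logistic regression screened with cross-validation and ROC AUC) can be sketched as follows; the synthetic features below stand in for the real acoustic measurements and are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for per-patient summary statistics of temporal/spectral
# cough features (e.g. mean/variance of spectral centroid, ZCR, ...).
n_patients, n_features = 137, 12
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)  # abnormal chest X-ray: 0/1
X[y == 1] += 0.8                         # inject a weak synthetic signal

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(aucs.mean())
```

With real cough recordings, the feature rows would come from an acoustic front end rather than a random generator, but the cross-validated AUC evaluation would look the same.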
Automatic Classification of Symmetry of Hemithoraces in Canine and Feline Radiographs
Tahghighi, Peyman, Norena, Nicole, Ukwatta, Eran, Appleby, Ryan B, Komeili, Amin
Purpose: Thoracic radiographs are commonly used to evaluate patients with confirmed or suspected thoracic pathology. Proper patient positioning is more challenging in canine and feline radiography than in humans due to lower patient cooperation and greater body shape variation. Improper patient positioning during radiograph acquisition has the potential to lead to a misdiagnosis. Asymmetrical hemithoraces are one of the indications of obliquity, for which we propose an automatic classification method. Approach: We propose a hemithoraces segmentation method based on Convolutional Neural Networks (CNNs) and active contours. We utilized the U-Net model to segment the ribs and spine and then applied active contours to find the left and right hemithoraces. We then extracted features from the left and right hemithoraces to train an ensemble classifier comprising a Support Vector Machine, Gradient Boosting, and a Multi-Layer Perceptron. Five-fold cross-validation was used; thorax segmentation was evaluated by Intersection over Union (IoU), and symmetry classification was evaluated using Precision, Recall, Area under the Curve, and F1 score. Results: Classification of symmetry for 900 radiographs yielded an F1 score of 82.8%. To test the robustness of the proposed thorax segmentation method to underexposure and overexposure, we synthetically corrupted properly exposed radiographs and evaluated the results using IoU. The results showed that the model's IoU for underexposure and overexposure dropped by 2.1% and 1.2%, respectively. Conclusions: Our results indicate that the proposed thorax segmentation method is robust to poorly exposed radiographs. The proposed thorax segmentation method can be applied to human radiography with minimal changes.
- North America > Canada > Ontario > Wellington County > Guelph (0.04)
- North America > Canada > Alberta > Census Division No. 6 > Calgary Metropolitan Region > Calgary (0.04)
- North America > United States (0.04)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Support Vector Machines (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.54)
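The ensemble described in this abstract (SVM, Gradient Boosting, and a Multi-Layer Perceptron over hemithorax features) could be assembled with a standard soft-voting scheme, as sketched below; the synthetic feature set and the voting rule are assumptions, not details from the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for features extracted from the left/right hemithorax masks
# (e.g. area ratio, centroid offset, contour overlap after mirroring).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("gb", GradientBoostingClassifier()),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=2000, random_state=0))),
    ],
    voting="soft",  # average the members' predicted probabilities
)
scores = cross_val_score(ensemble, X, y, cv=5, scoring="f1")
print(scores.mean())
```

Soft voting requires probability estimates from every member, hence `probability=True` on the SVC; scaling inside per-member pipelines keeps the cross-validation free of leakage.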
Introduction to Artificial Intelligence for Radiographers
AI in healthcare and medical imaging has developed rapidly over the last decade. This course presents the basic elements of Artificial Intelligence (AI) in the context of Radiography. It offers background knowledge on all key contemporary AI topics and on how these can affect your professional practice and workflow. One of the first AI courses designed specifically for the Radiography workforce, it covers all modalities of Radiography, and its key takeaway is the ability to understand and manage future changes in radiography practice. This course is for recent radiography graduates, clinical practitioners, radiology managers, radiography researchers, and educators who wish to further their understanding of the basic principles and applications of AI in Radiography and Medical Imaging. It is the first course of its kind for radiographers in the UK and Europe.
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
Self-supervised Learning from 100 Million Medical Images
Building accurate and robust artificial intelligence systems for medical image assessment requires not only the research and design of advanced deep learning models but also the creation of large and curated sets of annotated training examples. Constructing such datasets, however, is often very costly due to the complex nature of annotation tasks and the high level of expertise required for the interpretation of medical images (e.g., by expert radiologists). To counter this limitation, we propose a method for self-supervised learning of rich image features based on contrastive learning and online feature clustering. We propose to use these features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: 1) a significant increase in accuracy compared to the state-of-the-art (e.g., an AUC boost of 3-7% for the detection of abnormalities from chest radiography scans and for hemorrhage detection on brain CT); 2) acceleration of model convergence during training by up to 85% (e.g., for the detection of brain metastases in MR scans); 3) increased robustness to various image augmentations, such as intensity variations, rotations, or scaling, reflective of the data variation seen in the field.
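The contrastive objective underlying this kind of self-supervised pre-training is typically an NT-Xent (InfoNCE) loss over two augmented views of each image; the minimal NumPy sketch below illustrates that loss only (the abstract's method additionally uses online feature clustering, which is omitted here):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent contrastive loss over two batches of view embeddings.

    z1[i] and z2[i] embed two augmentations of image i; each pair is
    pulled together while all other embeddings act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)           # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                    # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    n = len(z1)
    # Index of each embedding's positive partner in the concatenated batch.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
# Nearly identical views should incur a much smaller loss than
# unrelated ones.
loss_close = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
loss_far = nt_xent_loss(z1, rng.normal(size=(8, 16)))
```

In practice the embeddings come from an encoder trained end-to-end, and clustering-based variants replace instance-level positives with cluster assignments, but the loss geometry is the same.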
X-ray Dissectography Enables Stereotography to Improve Diagnostic Performance
X-ray imaging is the most popular medical imaging technology. While x-ray radiography is rather cost-effective, tissue structures are superimposed along the x-ray paths. Computed tomography (CT), on the other hand, reconstructs internal structures, but CT increases radiation dose and is complicated and expensive. Here we propose "x-ray dissectography" to digitally extract a target organ/tissue from a few radiographic projections for stereographic and tomographic analysis within a deep learning framework. As an exemplary embodiment, we propose a general X-ray dissectography network, a dedicated X-ray stereotography network, and the X-ray imaging systems to implement these functionalities. Our experiments show that x-ray stereography of an isolated organ, such as the lungs in this case, can be achieved, suggesting the feasibility of transforming conventional radiographic reading into stereographic examination of the isolated organ, which potentially allows higher sensitivity and specificity, and even tomographic visualization of the target. With further improvements, x-ray dissectography promises to be a new x-ray imaging modality for CT-grade diagnosis at a radiation dose and system cost comparable to those of radiographic or tomosynthetic imaging.
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
MEDRADRESEARCH
Our April blog is from Dr. Tracy O'Regan. Tracy is a diagnostic radiographer who works at the Society & College of Radiographers (SCoR) as officer for clinical imaging and research. She sits on the steering committee for UK Research & Innovation (UKRI) Science & Technology Facilities Council (STFC) Cancer Diagnosis Network, she is a member of NHSx AI Imaging Advisory Board, and she provides officer support for a SCoR AI & Emerging Technologies Working Party who are currently consulting on a guidance document with recommendations and priorities for AI for UK professionals. In 2011 Nilsson wrote a book that explored 50 years of the development of Artificial Intelligence (AI) (1). Nilsson described AI winters and a series of false dawns; each progressed the path to our current stage of AI with the promise of machine learning, neural networks and deep learning. Despite that development, in the main, clinical imaging and radiotherapy professionals are still discussing AI as if it is a new fashion or perhaps even the emperor's new clothes.
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment
The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy.
Artificial Intelligence in Radiography - The Robots are coming!
Whether we are excited, fearful, or just plain sceptical, one thing is for certain: Artificial Intelligence in healthcare is here to stay (or on the way, for the sceptics), but it will change the way we deliver clinical services. In the world of Radiology, AI has been the main topic of discussion over recent years, and the main focus of published research is firmly on how it will impact the role of the radiologist. "AI in Radiography: The Robots are coming" will bring together a panel of specialists from across imaging, with speakers from Academia, Radiology, The Society & College of Radiographers, and The British Institute of Radiology, all bringing expert opinions, analysis, and their position on the future of AI in the profession. AI has the potential to enhance the role of a radiographer greatly, but this is dependent on creating a higher awareness of the possibilities, exploring them in more detail, and then becoming the influencers and not the influenced.
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)