Artificial Intelligence (AI) can relieve humans of numerous repetitive tasks, with automation increasing productivity. Furthermore, AI-powered machines and devices are fast and efficient, learning, predicting, and deciding with accuracy that can exceed human performance. Concurrently, the use cases for augmentative AI are expanding, with numerous industries and organizations looking to tap into its potential. This is clearly the situation in two key Healthcare and Life Sciences (HCLS) areas: neuroscience and radiology. In both areas, radiologists, MRI technicians, and physicians must spend hours sifting through images, searching for markers and anomalies that may support a disease diagnosis.
This doctoral project will develop the latest computer vision deep learning techniques to better predict outcomes of complex respiratory disease. Through image analysis of chest X-rays and CT scans, we could provide clinicians with more reliable and more objective evidence of specific disease types, and greater justification for what are often challenging and high-risk treatments. We currently lack adequate tools for diagnosing complex respiratory infection. Using existing methods, it can be difficult to know whether newly identified bacteria have caused the severe lung disease, or whether they simply reflect it, acting as so-called 'bystanders' that have thrived in this environment. This is a problem because (i) some infections, such as those caused by Pseudomonas and Mycobacteria, may require intravenous and potentially long and toxic therapy; and (ii) antibiotic use when it is not required for treatment drives increasing drug resistance, which remains a global emergency.
With the development of computer-aided diagnosis (CAD) and image scanning technology, whole-slide image (WSI) scanners are widely used in the field of pathological diagnosis, and WSI analysis has become key to modern digital pathology. Since 2004, WSI has been used increasingly in CAD. Because machine vision methods are usually semi-automatic or fully automatic, they are highly efficient and labor-saving. Combining WSI and CAD technologies for segmentation, classification, and detection helps histopathologists obtain more stable and quantitative analysis results, save labor costs, and improve diagnostic objectivity. This paper reviews methods of WSI analysis based on machine learning. First, the development status of WSI and CAD methods is introduced. Second, we discuss publicly available WSI datasets and evaluation metrics for segmentation, classification, and detection tasks. The latest developments in machine learning for WSI segmentation, classification, and detection are then reviewed. Finally, the existing methods are examined, their applicability is analyzed, and their application prospects in this field are forecast.
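The review's exact metric definitions are not reproduced here; as a minimal sketch, two of the most common segmentation evaluation metrics in digital pathology, the Dice coefficient and intersection-over-union (IoU), can be computed on binary masks as follows:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection-over-Union (Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy example: a predicted tumour mask overlapping a ground-truth mask
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 0.8
print(round(iou(pred, target), 3))               # 0.667
```

Dice weights the overlap twice relative to the mask sizes, so it is always at least as large as IoU on the same pair of masks.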
Joint damage in rheumatoid arthritis (RA) is assessed by manually inspecting and grading radiographs of hands and feet. This is a tedious task that requires trained experts, whose subjective assessment leads to low inter-rater agreement. An algorithm that automatically predicts joint-level damage in hands and feet can help optimize this process, eventually aiding doctors in patient care and research. In this paper, we propose a two-stage approach that combines object detection and convolutional neural networks with attention to efficiently and accurately predict overall and joint-level narrowing and erosion from patients' radiographs. Evaluated on hand and foot radiographs of patients suffering from RA, the approach achieved weighted root mean squared errors (RMSE) of 1.358 and 1.404 in predicting joint-level narrowing and erosion Sharp/van der Heijde (SvH) scores, a 31% and 19% improvement over the baseline, respectively. It also achieved a weighted absolute error of 1.456 in predicting overall damage in hand and foot radiographs, a 79% improvement over the baseline. Our method additionally provides an inherent capability to explain model predictions using attention weights, which is essential given the black-box nature of deep learning models. The proposed approach was developed during the RA2 Dream Challenge hosted by Dream Challenges and secured 4th and 8th positions in predicting overall and joint-level narrowing and erosion SvH scores from radiographs.
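The weighted RMSE used to score joint-level predictions can be sketched as follows; the per-sample weights and score values below are purely illustrative, not the challenge's actual weighting scheme:

```python
import numpy as np

def weighted_rmse(y_true, y_pred, weights):
    """Weighted root mean squared error:
    sqrt( sum(w * (y - yhat)^2) / sum(w) )."""
    y_true, y_pred, weights = map(np.asarray, (y_true, y_pred, weights))
    return float(np.sqrt(np.sum(weights * (y_true - y_pred) ** 2)
                         / np.sum(weights)))

# Hypothetical ground-truth vs. predicted joint scores, with one joint
# weighted twice as heavily as the others
y_true = [0.0, 2.0, 4.0, 1.0]
y_pred = [0.5, 1.5, 3.0, 1.0]
weights = [1.0, 1.0, 2.0, 1.0]
print(round(weighted_rmse(y_true, y_pred, weights), 3))  # 0.707
```

With uniform weights the formula reduces to the ordinary RMSE.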
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning aimed at unboxing how AI systems make their black-box choices. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability of these black-box models. XAI is becoming increasingly crucial for deep-learning-powered applications, especially in medical and healthcare studies, even though deep neural networks can deliver striking performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons that successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first survey the current progress of XAI, and in particular its advances in healthcare applications. We then introduce our XAI solutions leveraging multi-modal and multi-centre data fusion, and validate them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
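The study's own fusion-based XAI solutions are not reproduced here; as a generic illustration of one widely used model-agnostic explanation technique, occlusion sensitivity masks each patch of an input image in turn and records the drop in the model's score. The `toy_score` stand-in model below is a hypothetical example, not part of the study:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, baseline=0.0):
    """Occlusion sensitivity: how much the model's score drops when each
    patch of the image is replaced by a baseline value. Higher = more
    important to the prediction."""
    h, w = image.shape
    ref = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - score_fn(occluded)
    return heat

# Stand-in "model": scores an image by the mean intensity of its
# top-left quadrant, so only that region should light up in the map
def toy_score(img):
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
heat = occlusion_map(img, toy_score, patch=8)
print(heat)  # only the top-left patch has non-zero importance
```

The same loop works for any black-box `score_fn`, which is what makes occlusion attractive when gradients of the model are unavailable.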
Quantitative metrics in lung computed tomography (CT) images have been widely used, often without a clear connection to physiology. This work proposes a patient-independent model for the estimation of the well-aerated volume of lungs in CT images (WAVE). A Gaussian fit, with mean (Mu.f) and width (Sigma.f) values, was applied to the lower CT histogram data points of the lung to estimate the well-aerated lung volume (WAVE.f). Independence from CT reconstruction parameters and the respiratory cycle was analysed using healthy lung CT images and 4DCT acquisitions. The Gaussian metrics and first-order radiomic features calculated for a third cohort of COVID-19 patients were compared with those of healthy lungs. Each lung was further segmented into 24 subregions, and a new biomarker derived from the Gaussian fit parameter Mu.f was proposed to represent local density changes. WAVE.f was independent of respiratory motion in 80% of cases. Differences of 1%, 2%, and up to 14% were found when comparing a moderate iterative strength with the FBP algorithm, 1 mm with 3 mm slice thickness, and different reconstruction kernels, respectively. Healthy subjects differed significantly from COVID-19 patients on all calculated metrics. Graphical representation of the local biomarker provides spatial and quantitative information in a single 2D picture. Unlike other metrics based on fixed histogram thresholds, this model is able to account for inter- and intra-subject variability. In addition, it defines a local biomarker to quantify the severity of the disease, independently of the observer.
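The core of the WAVE idea, fitting a Gaussian to the lower (well-aerated) portion of the lung HU histogram, can be sketched on synthetic data. The HU distributions, bin settings, and the -600 HU fit cutoff below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic voxel HU values: a well-aerated compartment around -850 HU
# plus a smaller, denser compartment (e.g. consolidation) around -300 HU
aerated = rng.normal(-850, 60, 80_000)
dense = rng.normal(-300, 120, 20_000)
hu = np.concatenate([aerated, dense])

counts, edges = np.histogram(hu, bins=100, range=(-1024, 0))
centers = 0.5 * (edges[:-1] + edges[1:])
bin_width = edges[1] - edges[0]

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Fit only the lower histogram data points so that the dense
# compartment does not bias the Gaussian parameters
low = centers < -600
(amp, mu_f, sigma_f), _ = curve_fit(
    gaussian, centers[low], counts[low], p0=(counts.max(), -800, 100))

# WAVE.f: fraction of voxels attributed to the fitted Gaussian
# (area under the Gaussian over the total histogram area)
wave_f = amp * abs(sigma_f) * np.sqrt(2 * np.pi) / (hu.size * bin_width)
print(f"Mu.f={mu_f:.0f} HU, Sigma.f={abs(sigma_f):.0f} HU, WAVE.f={wave_f:.2f}")
```

On this synthetic mixture the fit recovers roughly Mu.f ≈ -850 HU, Sigma.f ≈ 60 HU, and WAVE.f ≈ 0.8, matching the 80% of voxels drawn from the aerated compartment; because the fit adapts to each histogram, no fixed HU threshold is needed.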
Lung cancer detection and radiologist performance can get a boost from an artificial intelligence (AI) algorithm that pinpoints previously undetected cancers on chest X-rays. In a study published in the Dec. 10 Radiology: Cardiothoracic Imaging, investigators from Seoul National University Hospital outlined how a commercially available deep-learning algorithm outperformed four thoracic radiologists on both first and second reads. Overall, said the team led by Ju Gang Nam, M.D., the algorithm offered both higher sensitivity and higher specificity, and it improved providers' performance as a second reader, leading to significantly improved detection rates. To date, however, the team said, adoption of computer-aided detection with chest X-ray has been slow because many providers still have lingering questions about whether it can perform well enough in clinical practice. To answer that question, Nam's team used an enriched dataset of 50 normal chest X-rays and 168 posteroanterior chest X-rays with lung cancers.
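Sensitivity and specificity, the two metrics on which the algorithm outperformed the readers, are simple functions of the confusion counts; the counts below are hypothetical illustrations, not the study's actual results:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = recall on positives; specificity = recall on negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative reader counts on an enriched set like the study's
# (168 cancer X-rays, 50 normal X-rays) -- numbers are made up
sens, spec = sensitivity_specificity(tp=140, fn=28, tn=45, fp=5)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```

Reporting both matters on an enriched dataset: with 168 cancers to 50 normals, accuracy alone would be dominated by sensitivity.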
Each year, over 2.5 million people, most of them in developing countries, die from pneumonia. Since many studies have shown that pneumonia is successfully treatable when diagnosed promptly and correctly, many diagnostic aids have been developed, with AI-based methods achieving high accuracies. However, the use of AI in pneumonia detection is currently limited, in particular by the challenge of generalizing locally achieved results. In this report, we propose a roadmap for creating and integrating a system that attempts to solve this challenge. We also address various technical, legal, ethical, and logistical issues, with a blueprint of possible solutions.
Respiratory diseases kill millions of people each year. Diagnosis of these pathologies is a manual, time-consuming process subject to inter- and intra-observer variability, delaying diagnosis and treatment. The recent COVID-19 pandemic has demonstrated the need to develop systems that automate the diagnosis of pneumonia, and convolutional neural networks (CNNs) have proved to be an excellent option for the automatic classification of medical images. However, given the need to provide confident classifications in this context, it is crucial to quantify the reliability of the model's predictions. In this work, we propose a multi-level ensemble classification system based on a Bayesian deep learning approach that maximizes performance while quantifying the uncertainty of each classification decision. This tool combines the information extracted from different architectures by weighting their results according to the uncertainty of their predictions. Performance of the Bayesian network is evaluated in a realistic scenario that simultaneously differentiates among four pathologies: control vs. bacterial pneumonia vs. viral pneumonia vs. COVID-19 pneumonia. A three-level decision tree divides the 4-class classification into three binary classifications, yielding an accuracy of 98.06% and surpassing results reported in the recent literature. The reduced preprocessing needed to obtain this high performance, together with the information provided about the reliability of the predictions, evidences the system's applicability as an aid for clinicians.
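The paper's exact weighting scheme is not given here; as an illustrative sketch of uncertainty-weighted combination, one common approach weights each model's class probabilities by the inverse of its predictive entropy, so confident models dominate the ensemble:

```python
import numpy as np

def predictive_entropy(p, eps=1e-12):
    """Entropy of a probability vector; low entropy = confident prediction."""
    p = np.asarray(p)
    return float(-np.sum(p * np.log(p + eps)))

def uncertainty_weighted_ensemble(prob_vectors):
    """Combine per-model class probabilities, down-weighting uncertain
    models: each model's weight is the inverse of its predictive entropy."""
    probs = np.asarray(prob_vectors)
    entropies = np.array([predictive_entropy(p) for p in probs])
    weights = 1.0 / (entropies + 1e-6)
    weights /= weights.sum()          # normalize so the result sums to 1
    return weights @ probs

# Three hypothetical binary classifiers at one node of the decision tree:
# two confident models agree; a near-uniform (uncertain) model disagrees
ensemble = uncertainty_weighted_ensemble([
    [0.95, 0.05],
    [0.90, 0.10],
    [0.40, 0.60],
])
print(ensemble.round(3))  # the two confident models dominate
```

In a Bayesian deep learning setting the per-model probabilities would themselves come from averaging stochastic forward passes (e.g. Monte Carlo dropout), with the same entropy serving as the per-decision uncertainty that is reported to the clinician.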
This article sets up the framework with a simple model and a detailed walk-through of each step. There are plenty of improvements that could be made to boost model performance! In healthcare, one of the major issues medical professionals face is correctly diagnosing patients' conditions and diseases. A misdiagnosis is a problem for both the patient and the doctor, since a doctor who misdiagnoses a patient is not serving that patient appropriately.