Gaussian Process Diffeomorphic Statistical Shape Modelling Outperforms Angle-Based Methods for Assessment of Hip Dysplasia
Paul, Allen, Grammatopoulos, George, Rambojun, Adwaye, Campbell, Neill D. F., Gill, Harinderjit S., Shardlow, Tony
Dysplasia is a recognised risk factor for osteoarthritis (OA) of the hip, and early diagnosis of dysplasia is important because it provides opportunities for surgical interventions aimed at reducing the risk of hip OA. We have developed a pipeline for semi-automated classification of dysplasia using volumetric CT scans of patients' hips and a minimal set of clinically annotated landmarks, combining the framework of the Gaussian Process Latent Variable Model with diffeomorphisms to create a statistical shape model, which we term the Gaussian Process Diffeomorphic Statistical Shape Model (GPDSSM). We used 192 CT scans: 100 for model training and 92 for testing. The GPDSSM effectively distinguishes dysplastic samples from controls while also highlighting regions of the underlying surface that show dysplastic variations. As well as improving classification accuracy compared with angle-based methods (AUC 96.2% vs 91.2%), the GPDSSM can save clinicians time by removing the need to manually measure angles and interpret 2D scans for possible markers of dysplasia.
- Europe > United Kingdom > North Sea > Southern North Sea (0.05)
- Europe > United Kingdom > England > Somerset > Bath (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (4 more...)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
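The abstract above describes classifying shapes in a learned latent space. As a rough illustration of that pipeline idea only, the toy sketch below substitutes plain PCA on landmark coordinates for the paper's GPLVM-with-diffeomorphisms model, and a nearest-class-mean rule for its classifier; all data, dimensions, and labels are made up.

```python
# Toy latent-space shape classification sketch (NOT the GPDSSM itself):
# PCA stands in for the GPLVM; data are synthetic landmark vectors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 "control" and 40 "dysplastic" shapes, each a
# flattened vector of 14 2-D landmarks (28 numbers per shape).
controls = rng.normal(0.0, 0.1, size=(40, 28))
dysplastic = rng.normal(0.0, 0.1, size=(40, 28))
dysplastic[:, 0] += 1.0          # one mode of variation separates the groups
X = np.vstack([controls, dysplastic])
y = np.array([0] * 40 + [1] * 40)

# PCA via SVD: project every shape onto the top-2 modes of variation.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
latent = Xc @ Vt[:2].T            # (80, 2) latent coordinates

# Nearest-class-mean classifier in latent space.
mu0 = latent[y == 0].mean(axis=0)
mu1 = latent[y == 1].mean(axis=0)
pred = (np.linalg.norm(latent - mu1, axis=1)
        < np.linalg.norm(latent - mu0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

Because the latent coordinates live on the shape surface's modes of variation, inspecting the dominant mode also hints at *where* the groups differ, loosely analogous to the surface-region highlighting the abstract mentions.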
PathAlign: A vision-language model for whole slide images in histopathology
Ahmed, Faruk, Sellergren, Andrew, Yang, Lin, Xu, Shawn, Babenko, Boris, Ward, Abbi, Olson, Niels, Mohtashamian, Arash, Matias, Yossi, Corrado, Greg S., Duong, Quang, Webster, Dale R., Shetty, Shravya, Golden, Daniel, Liu, Yun, Steiner, David F., Wulczyn, Ellery
Microscopic interpretation of histopathology images underlies many important diagnostic and treatment decisions. While advances in vision-language modeling raise new opportunities for analysis of such images, the gigapixel-scale size of whole slide images (WSIs) introduces unique challenges. Additionally, pathology reports simultaneously highlight key findings from small regions while also aggregating interpretation across multiple slides, often making it difficult to create robust image-text pairs. As such, pathology reports remain a largely untapped source of supervision in computational pathology, with most efforts relying on region-of-interest annotations or self-supervision at the patch-level. In this work, we develop a vision-language model based on the BLIP-2 framework using WSIs paired with curated text from pathology reports. This enables applications utilizing a shared image-text embedding space, such as text or image retrieval for finding cases of interest, as well as integration of the WSI encoder with a frozen large language model (LLM) for WSI-based generative text capabilities such as report generation or AI-in-the-loop interactions. We utilize a de-identified dataset of over 350,000 WSIs and diagnostic text pairs, spanning a wide range of diagnoses, procedure types, and tissue types. We present pathologist evaluation of text generation and text retrieval using WSI embeddings, as well as results for WSI classification and workflow prioritization (slide-level triaging). Model-generated text for WSIs was rated by pathologists as accurate, without clinically significant error or omission, for 78% of WSIs on average. This work demonstrates exciting potential capabilities for language-aligned WSI embeddings.
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > United States > Maryland > Howard County > Columbia (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Health & Medicine > Therapeutic Area > Dermatology (0.93)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (0.93)
- Health & Medicine > Diagnostic Medicine > Biopsy (0.74)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (0.68)
Large Language Models for Granularized Barrett's Esophagus Diagnosis Classification
Kefeli, Jenna, Soroush, Ali, Diamond, Courtney J., Zylberberg, Haley M., May, Benjamin, Abrams, Julian A., Weng, Chunhua, Tatonetti, Nicholas
Diagnostic codes for Barrett's esophagus (BE), a precursor to esophageal cancer, lack granularity and precision for many research or clinical use cases. Laborious manual chart review is required to extract key diagnostic phenotypes from BE pathology reports. We developed a generalizable transformer-based method to automate data extraction. Using pathology reports from Columbia University Irving Medical Center with gastroenterologist-annotated targets, we performed binary dysplasia classification as well as granularized multi-class BE-related diagnosis classification. We utilized two clinically pre-trained large language models, with best model performance comparable to a highly tailored rule-based system developed using the same data. Binary dysplasia extraction achieves 0.964 F1-score, while the multi-class model achieves 0.911 F1-score. Our method is generalizable and faster to implement as compared to a tailored rule-based approach.
- Health & Medicine > Diagnostic Medicine (1.00)
- Health & Medicine > Therapeutic Area > Gastroenterology (0.70)
- Health & Medicine > Therapeutic Area > Oncology (0.49)
- Health & Medicine > Health Care Technology > Medical Record (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)
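The abstract above reports binary-dysplasia performance as an F1-score. For reference, a minimal F1 computation on illustrative labels (not the paper's data):

```python
# Minimal binary F1-score computation on toy labels.
def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy labels: 8 reports, one false positive and one false negative.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
f1 = f1_score(y_true, y_pred)   # precision = recall = 0.75 here
```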
Exploring the In-context Learning Ability of Large Language Model for Biomedical Concept Linking
Wang, Qinyong, Gao, Zhenxiang, Xu, Rong
The biomedical field relies heavily on concept linking in areas such as literature mining, graph alignment, information retrieval, question answering, and data and knowledge integration. Although large language models (LLMs) have made significant strides in many natural language processing tasks, their effectiveness in biomedical concept mapping is yet to be fully explored. This research investigates a method that exploits the in-context learning (ICL) capabilities of LLMs for biomedical concept linking. The proposed approach adopts a two-stage retrieve-and-rank framework. First, biomedical concepts are embedded using language models, and embedding similarity is used to retrieve the top candidates. These candidates' contextual information is then incorporated into the prompt and processed by a large language model to re-rank the concepts. This approach achieved an accuracy of 90.% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization, exhibiting competitive performance relative to supervised learning methods. Further, it showed a significant improvement, with an over 20-point absolute increase in F1 score, on an oncology matching dataset. Extensive qualitative assessments were conducted, and the benefits and potential shortcomings of using large language models within the biomedical domain were discussed.
- North America > United States > Ohio > Cuyahoga County > Cleveland (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Bonn (0.04)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Musculoskeletal (1.00)
- Health & Medicine > Therapeutic Area > Genetic Disease (1.00)
- (5 more...)
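The two-stage retrieve-and-rank idea described in the concept-linking abstract above can be sketched as follows. A real system embeds concepts with a language model and re-ranks with an LLM prompt; here the embeddings are hypothetical toy vectors and the "re-ranker" is a stand-in word-overlap score, purely to show the control flow.

```python
# Illustrative retrieve-and-rank sketch: toy embeddings, mock re-ranker.
import numpy as np

# Hypothetical ontology concepts with made-up embedding vectors.
concepts = ["lung neoplasm", "liver neoplasm", "lung infection"]
emb = {
    "lung neoplasm":  np.array([0.9, 0.1, 0.8]),
    "liver neoplasm": np.array([0.2, 0.9, 0.7]),
    "lung infection": np.array([0.8, 0.1, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, k=2):
    """Stage 1: return the top-k concepts by embedding similarity."""
    scored = sorted(concepts, key=lambda c: cosine(query_vec, emb[c]),
                    reverse=True)
    return scored[:k]

def rerank(mention, candidates):
    """Stage 2 stand-in: a real system would prompt an LLM with the
    candidates' contextual information; here we simply prefer word
    overlap with the mention text."""
    return max(candidates, key=lambda c: len(set(mention.split())
                                             & set(c.split())))

query = np.array([0.85, 0.1, 0.75])   # toy vector for a "lung tumour" mention
candidates = retrieve(query)
best = rerank("lung neoplasm of upper lobe", candidates)
```

The split matters for cost: the cheap embedding pass narrows thousands of ontology entries to a handful, so the expensive LLM call only ever sees the short candidate list.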
Predicting Adverse Neonatal Outcomes for Preterm Neonates with Multi-Task Learning
Lin, Jingyang, Chen, Junyu, Lyu, Hanjia, Khodak, Igor, Chhabra, Divya, Richardson, Colby L Day, Prelipcean, Irina, Dylag, Andrew M, Luo, Jiebo
Diagnosis of adverse neonatal outcomes is crucial for preterm survival since it enables doctors to provide timely treatment. Machine learning (ML) algorithms have been demonstrated to be effective in predicting adverse neonatal outcomes. However, most previous ML-based methods have only focused on predicting a single outcome, ignoring the potential correlations between different outcomes, and potentially leading to suboptimal results and overfitting issues. In this work, we first analyze the correlations between three adverse neonatal outcomes and then formulate the diagnosis of multiple neonatal outcomes as a multi-task learning (MTL) problem. We then propose an MTL framework to jointly predict multiple adverse neonatal outcomes. In particular, the MTL framework contains shared hidden layers and multiple task-specific branches. Extensive experiments have been conducted using Electronic Health Records (EHRs) from 121 preterm neonates. Empirical results demonstrate the effectiveness of the MTL framework. Furthermore, the feature importance is analyzed for each neonatal outcome, providing insights into model interpretability.
- Asia > Middle East > Iran (0.04)
- Oceania > New Zealand (0.04)
- North America > United States > Washington (0.04)
- (6 more...)
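The shared-trunk / task-specific-head structure the MTL abstract above describes can be sketched as a forward pass. This is an illustrative NumPy toy, not the authors' implementation; layer sizes, weights, and the input feature vector are all made up.

```python
# Illustrative multi-task network: one shared hidden layer feeding
# one sigmoid head per adverse outcome. Random untrained weights.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiTaskNet:
    """Shared representation plus task-specific branches."""
    def __init__(self, n_features, n_hidden, n_tasks):
        self.W_shared = rng.normal(0, 0.1, (n_features, n_hidden))
        self.heads = [rng.normal(0, 0.1, n_hidden) for _ in range(n_tasks)]

    def forward(self, x):
        h = relu(x @ self.W_shared)           # shared hidden layer
        return [float(1.0 / (1.0 + np.exp(-(h @ w))))  # per-task probability
                for w in self.heads]

# Hypothetical EHR feature vector for one neonate (20 features),
# scored for three adverse outcomes in a single pass.
net = MultiTaskNet(n_features=20, n_hidden=8, n_tasks=3)
probs = net.forward(rng.normal(size=20))
```

Because every head backpropagates through the same trunk during training, correlated outcomes act as mutual regularisers, which is the abstract's argument against fitting one model per outcome.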
CerviFormer: A Pap-smear based cervical cancer classification method using cross attention and latent transformer
Deo, Bhaswati Singha, Pal, Mayukha, Panigrahi, Prasanta K., Pradhan, Asima
Purpose: Cervical cancer is one of the primary causes of death in women. As with other diseases, it should be diagnosed early and treated according to the best medical advice to ensure that its effects are as minimal as possible. Pap smear images are one of the most constructive ways of identifying this type of cancer. This study proposes a cross-attention-based Transformer approach for the reliable classification of cervical cancer in Pap smear images. Methods: We propose CerviFormer -- a model that depends on Transformers and thereby requires minimal architectural assumptions about the size of the input data. The model uses a cross-attention technique to repeatedly consolidate the input data into a compact latent Transformer module, which enables it to manage very large-scale inputs. We evaluated our model on two publicly available Pap smear datasets. Results: For 3-state classification on the Sipakmed data, the model achieved an accuracy of 93.70%. For 2-state classification on the Herlev data, the model achieved an accuracy of 94.57%. Conclusion: Experimental results on two publicly accessible datasets demonstrate that the proposed method achieves competitive results compared with contemporary approaches. The proposed method brings forth a comprehensive classification model to detect cervical cancer in Pap smear images. This may aid medical professionals in providing better cervical cancer treatment, consequently enhancing the overall effectiveness of the entire testing process.
- Health & Medicine > Therapeutic Area > Oncology > Cervical Cancer (1.00)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (1.00)
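The "consolidate large inputs into a compact latent module" idea in the CerviFormer abstract above is a Perceiver-style cross-attention step, which can be sketched in isolation. Shapes, weights, and sizes below are illustrative assumptions, not the paper's configuration.

```python
# Single cross-attention step: a small latent array attends to a
# much larger input array. Random untrained weights, toy shapes.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, inputs, d_k=16):
    """Queries come from the small latent array; keys and values come
    from the large input, so cost is O(n_latent * n_input) rather than
    the O(n_input^2) of full self-attention."""
    Wq = rng.normal(0, 0.1, (latents.shape[-1], d_k))
    Wk = rng.normal(0, 0.1, (inputs.shape[-1], d_k))
    Wv = rng.normal(0, 0.1, (inputs.shape[-1], d_k))
    Q, K, V = latents @ Wq, inputs @ Wk, inputs @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_latent, n_input)
    return attn @ V                          # consolidated latents

# 32 latent vectors consolidate 4096 input patch features.
latents = rng.normal(size=(32, 16))
inputs = rng.normal(size=(4096, 16))
out = cross_attention(latents, inputs)
```

Applying such a step repeatedly is what lets the input resolution grow without the latent bottleneck, and hence the downstream Transformer cost, growing with it.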
AI could change the way clinicians look at hip preservation
Orthopedic surgeons and biomedical engineers are trained to approach adolescent and young adult hip pain from two different perspectives. Surgeons typically look at conditions such as femoroacetabular impingement (FAI) and hip dysplasia from a clinical point of view. Engineers more often focus on the technology angle. These two perspectives have come together at Boston Children's Hospital, resulting in a tool that could improve diagnosis and clinical planning for hip patients around the globe. VirtualHip is a software platform that uses artificial intelligence (AI) and 3D imaging to support diagnosis and treatment of pediatric hip deformities.
- Health & Medicine > Therapeutic Area (0.60)
- Health & Medicine > Diagnostic Medicine > Imaging (0.32)
- Health & Medicine > Health Care Technology > Medical Record (0.31)
Deep Learning-Based Automatic Diagnosis System for Developmental Dysplasia of the Hip
Li, Yang, Li-Han, Leo Yan, Tian, Hua
As the first-line diagnostic imaging modality, radiography plays an essential role in the early detection of developmental dysplasia of the hip (DDH). Clinically, the diagnosis of DDH relies on manual measurements and subjective evaluation of different anatomical features from pelvic radiographs. This process is inefficient and error-prone and requires years of clinical experience. In this study, we propose a deep learning-based system that automatically detects 14 keypoints from a radiograph, measures three anatomical angles (center-edge, Tönnis, and Sharp angles), and classifies DDH hips as grades I-IV based on the Crowe criteria. Moreover, a novel data-driven scoring system is proposed to quantitatively integrate the information from the three angles for DDH diagnosis. The proposed keypoint detection model achieved a mean (95% confidence interval [CI]) average precision of 0.807 (0.804-0.810). The mean (95% CI) intraclass correlation coefficients between the center-edge, Tönnis, and Sharp angles measured by the proposed model and the ground truth were 0.957 (0.952-0.962), 0.947 (0.941-0.953), and 0.953 (0.947-0.960), respectively, significantly higher than those of experienced orthopedic surgeons (p<0.0001). In addition, the mean (95% CI) test diagnostic agreement (Cohen's kappa) obtained using the proposed scoring system was 0.84 (0.83-0.85), significantly higher than that obtained from the diagnostic criteria for individual angles (0.76 [0.75-0.77]) and from orthopedists (0.71 [0.63-0.79]). To the best of our knowledge, this is the first study to achieve objective DDH diagnosis by leveraging deep learning keypoint detection and integrating different anatomical measurements, which can provide reliable and explainable support for clinical decision-making.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Orthopedics/Orthopedic Surgery (1.00)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
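Once keypoints are detected, the angle measurements the DDH abstract above describes reduce to plane geometry. As an illustration, the sketch below computes a center-edge-style angle from two hypothetical keypoints: the femoral head centre and the lateral acetabular edge. The coordinates, the simplified vertical reference line, and the function itself are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative center-edge-style angle from two detected keypoints.
# Image coordinates: x increases to the right, y increases downward.
import math

def center_edge_angle(head_centre, lateral_edge):
    """Angle (degrees) between the vertical line through the femoral
    head centre and the line from the centre to the acetabular edge."""
    dx = lateral_edge[0] - head_centre[0]
    dy = head_centre[1] - lateral_edge[1]   # edge lies superior to the centre
    return math.degrees(math.atan2(abs(dx), dy))

# Toy keypoints: edge 30 px lateral and 52 px superior to the centre,
# giving an angle of roughly 30 degrees.
angle = center_edge_angle(head_centre=(100.0, 200.0),
                          lateral_edge=(130.0, 148.0))
```

This also illustrates why the paper's scoring system is explainable: every angle traces back to two named anatomical keypoints that a clinician can verify on the radiograph.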
Towards Highly Expressive Machine Learning Models of Non-Melanoma Skin Cancer
Thomas, Simon M., Lefevre, James G., Baxter, Glenn, Hamilton, Nicholas A.
Pathologists have a rich vocabulary with which they can describe all the nuances of cellular morphology. In their world, there is a natural pairing of images and words. Recent advances demonstrate that machine learning models can now be trained to learn high-quality image features and represent them as discrete units of information. This enables natural language, which is also discrete, to be jointly modelled alongside the imaging, resulting in a description of the contents of the imaging. Here we present experiments in applying discrete modelling techniques to the problem domain of non-melanoma skin cancer, specifically histological images of Intraepidermal Carcinoma (IEC). Implementing a VQ-GAN model to reconstruct high-resolution (256x256) IEC images, we trained a sequence-to-sequence transformer to generate natural language descriptions using pathologist terminology. Combined with the idea of interactive concept vectors, available through continuous generative methods, we demonstrate an additional angle of interpretability. The result is a promising means of working towards highly expressive machine learning systems that are useful not only as predictive/classification tools but also as a means to further our scientific understanding of disease.
- Oceania > Australia > Queensland (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > California > Sonoma County > Santa Rosa (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Health & Medicine > Therapeutic Area > Oncology > Skin Cancer (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
Deep learning for automatic diagnosis of gastric dysplasia using whole-slide histopathology images in endoscopic specimens
Background: Distinguishing gastric epithelial regeneration change from dysplasia, and the histopathological diagnosis of dysplasia itself, are subject to interobserver disagreement in endoscopic specimens. In this study, we developed a method to distinguish gastric epithelial regeneration change from dysplasia and to further subclassify dysplasia. Methods: 897 whole slide images (WSIs) of endoscopic specimens from two hospitals were divided into training, internal validation, and external validation cohorts. We developed a deep learning (DL) with DA (DLDA) model to classify gastric dysplasia and epithelial regeneration change into three categories: negative for dysplasia (NFD), low-grade dysplasia (LGD), and high-grade dysplasia (HGD)/intramucosal invasive neoplasia (IMN). The diagnosis based on the DLDA model was compared with that of 12 pathologists using 100 gastric biopsy cases.