SurgBox: Agent-Driven Operating Room Sandbox with Surgery Copilot
Wu, Jinlin, Liang, Xusheng, Bai, Xuexue, Chen, Zhen
Surgical interventions, particularly in neurology, represent complex and high-stakes scenarios that impose substantial cognitive burdens on surgical teams. Although deliberate education and practice can enhance cognitive capabilities, surgical training opportunities remain limited due to patient safety concerns. To address these cognitive challenges in surgical training and operation, we propose SurgBox, an agent-driven sandbox framework to systematically enhance the cognitive capabilities of surgeons in immersive surgical simulations. Specifically, our SurgBox leverages large language models (LLMs) with tailored Retrieval-Augmented Generation (RAG) to authentically replicate various surgical roles, enabling realistic training environments for deliberate practice. In particular, we devise Surgery Copilot, an AI-driven assistant to actively coordinate the surgical information stream and support clinical decision-making, thereby diminishing the cognitive workload of surgical teams during surgery. By incorporating a novel Long-Short Memory mechanism, our Surgery Copilot can effectively balance immediate procedural assistance with comprehensive surgical knowledge. Extensive experiments using real neurosurgical procedure records validate our SurgBox framework in both enhancing surgical cognitive capabilities and supporting clinical decision-making. By providing an integrated solution for training and operational support to address cognitive challenges, our SurgBox framework advances surgical education and practice, potentially transforming surgical outcomes and healthcare quality. The code is available at https://github.com/franciszchen/SurgBox.
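The Long-Short Memory mechanism described above can be sketched as a two-tier store: a bounded short-term buffer holding the most recent procedural exchanges, plus a long-term archive queried by keyword overlap. The class, method names, and retrieval rule below are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

class LongShortMemory:
    """Two-tier memory: a bounded short-term buffer for immediate
    procedural context and a long-term store for archived knowledge."""

    def __init__(self, short_capacity=5):
        self.short_term = deque(maxlen=short_capacity)  # recent turns only
        self.long_term = []                             # persistent archive

    def add_turn(self, text):
        # Every exchange enters the short-term buffer; turns about to be
        # evicted are archived into long-term memory first.
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self.short_term[0])
        self.short_term.append(text)

    def recall(self, query):
        # Naive keyword-overlap retrieval over long-term memory,
        # returned alongside the full short-term context.
        words = set(query.lower().split())
        hits = [t for t in self.long_term
                if words & set(t.lower().split())]
        return list(self.short_term) + hits

mem = LongShortMemory(short_capacity=2)
for turn in ["incision planned", "dura opened", "tumor exposed"]:
    mem.add_turn(turn)
print(mem.recall("incision site"))  # short-term turns plus archived hit
```

In a real copilot the long-term lookup would be the RAG retriever over surgical knowledge; keyword overlap here just stands in for that step.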
- North America > United States > Oklahoma > Payne County > Cushing (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- Asia > China > Hong Kong (0.04)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Surgery (1.00)
Improving Colorectal Cancer Screening and Risk Assessment through Predictive Modeling on Medical Images and Records
Jiang, Shuai, Robinson, Christina, Anderson, Joseph, Hisey, William, Butterly, Lynn, Suriawinata, Arief, Hassanpour, Saeed
Background and aims: Colonoscopy screening is an effective method to find and remove colon polyps before they can develop into colorectal cancer (CRC). Current follow-up recommendations, as outlined by the U.S. Multi-Society Task Force for individuals found to have polyps, rely primarily on histopathological characteristics, neglecting other significant CRC risk factors. Moreover, the considerable variability in colorectal polyp characterization among pathologists poses challenges for effective colonoscopy follow-up or surveillance. The evolution of digital pathology and recent advancements in deep learning provide a unique opportunity to investigate the added benefit of including additional medical record information and automatic processing of pathology slides using computer vision techniques in the calculation of future CRC risk. Methods: Leveraging the New Hampshire Colonoscopy Registry's extensive dataset, much of it with longitudinal colonoscopy follow-up information, we adapted our recently developed transformer-based model for histopathology image analysis to 5-year CRC risk prediction. Additionally, we investigated various multimodal fusion techniques, combining medical record information with deep learning-derived risk estimates. Results: Our findings reveal that training a transformer model to predict intermediate clinical variables enhances 5-year CRC risk prediction performance, with an AUC of 0.630 compared to direct prediction (AUC = 0.615, p = 0.013). Furthermore, the fusion of imaging and non-imaging features, while not requiring manual inspection of microscopy images, demonstrates improved predictive capability (AUC = 0.674) for 5-year CRC risk compared to variables extracted from the colonoscopy procedure and microscopy findings alone (AUC = 0.655, p = 0.001).
Conclusion: This study signifies the potential of integrating diverse data sources and advanced computational techniques in transforming the accuracy and effectiveness of future CRC risk assessments.
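The multimodal fusion of imaging and non-imaging features described above can be illustrated with a minimal late-fusion scheme: the deep-learning risk estimate is treated as one feature alongside record-derived variables, and a logistic combiner produces the final 5-year risk. The weights, bias, and feature names below are invented for illustration and are not the paper's fitted model.

```python
import math

def late_fusion_risk(image_risk, record_features, weights, bias):
    """Late fusion: the imaging model's risk estimate is concatenated
    with medical-record features and passed through a logistic combiner."""
    x = [image_risk] + record_features
    z = bias + sum(w * v for w, v in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability in (0, 1)

# Hypothetical weights for (imaging score, age/100, prior-polyp flag).
w = [2.0, 1.5, 0.8]
risk = late_fusion_risk(0.4, [0.62, 1.0], w, bias=-2.0)
print(round(risk, 3))
```

In practice the combiner would be fit on held-out data; the point of the sketch is only that the imaging output enters the record-based model as one more feature.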
- Asia > Middle East > Lebanon (0.05)
- North America > United States > New Hampshire > Grafton County > Hanover (0.04)
- North America > United States > Vermont (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Colorectal Cancer (1.00)
- Health & Medicine > Therapeutic Area > Gastroenterology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
AI-based Anomaly Detection for Clinical-Grade Histopathological Diagnostics
Dippel, Jonas, Prenißl, Niklas, Hense, Julius, Liznerski, Philipp, Winterhoff, Tobias, Schallenberg, Simon, Kloft, Marius, Buchstab, Oliver, Horst, David, Alber, Maximilian, Ruff, Lukas, Müller, Klaus-Robert, Klauschen, Frederick
While previous studies have demonstrated the potential of AI to diagnose diseases in imaging data, clinical implementation is still lagging behind. This is partly because AI models require training with large numbers of examples, which are only available for common diseases. In clinical reality, however, only a few diseases are common, whereas the majority of diseases are less frequent (long-tail distribution). Current AI models overlook or misclassify these diseases. We propose a deep anomaly detection approach that requires training data only from common diseases in order to also detect all less frequent diseases. We collected two large real-world datasets of gastrointestinal biopsies, which are prototypical of the problem. Herein, the ten most common findings account for approximately 90% of cases, whereas the remaining 10% contain 56 disease entities, including many cancers. A total of 17 million histological images from 5,423 cases were used for training and evaluation. Without any specific training for the diseases, our best-performing model reliably detected a broad spectrum of infrequent ("anomalous") pathologies with 95.0% (stomach) and 91.0% (colon) AUROC and generalized across scanners and hospitals. By design, the proposed anomaly detection can be expected to detect any pathological alteration in the diagnostic tail of gastrointestinal biopsies, including rare primary or metastatic cancers. This study establishes the first effective clinical application of AI-based anomaly detection in histopathology that can flag anomalous cases, facilitate case prioritization, reduce missed diagnoses and enhance the general safety of AI models, thereby driving AI adoption and automation in routine diagnostics and beyond.
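The core idea above, training only on common findings and flagging everything that falls far from them, can be sketched as a nearest-prototype anomaly score. This is only a conceptual illustration of anomaly detection in a feature space; the paper's actual deep model and scoring function differ.

```python
def anomaly_score(feature, prototypes):
    """Score a case by its Euclidean distance to the nearest prototype
    of a *common* finding; large distances flag potential rare pathology."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(feature, p) for p in prototypes)

# Prototypes learned from common findings (toy 2-D feature vectors).
common = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
typical = anomaly_score((0.1, 0.1), common)    # near a common finding
unusual = anomaly_score((3.0, 3.0), common)    # far from all -> anomalous
print(typical < unusual)  # True: the unusual case scores higher
```

Thresholding such a score is what allows the system to flag "anomalous" biopsies for prioritization without ever having trained on the rare entities themselves.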
- Europe > Germany > Berlin (0.14)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
- South America > Peru > Lima Department > Lima Province > Lima (0.04)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (1.00)
- Health & Medicine > Therapeutic Area > Gastroenterology (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (0.95)
- Health & Medicine > Diagnostic Medicine > Imaging (0.93)
AI Promising for Detecting Adenomas in Patients With Lynch Syndrome - Physician's Weekly
THURSDAY, Jan. 5, 2023 (HealthDay News) -- For patients with Lynch syndrome (LS), artificial intelligence (AI)-assisted colonoscopy is promising for detecting adenomas, especially flat adenomas, according to a study published online Dec. 26 in the United European Gastroenterology Journal. Robert Hüneburg, M.D., from the National Center for Hereditary Tumor Syndromes at University Hospital Bonn in Germany, and colleagues examined the diagnostic performance of AI-assisted colonoscopy compared with high-definition white-light endoscopy (HD-WLE) in adult patients with LS, with a pathogenic germline variant (MLH1, MSH2, MSH6) and at least one previous colonoscopy (interval, 10 to 36 months). A total of 96 patients were included in the analysis. The researchers found that adenomas were detected in 12 of 46 and 18 of 50 patients in the HD-WLE and AI arms, respectively (26.1 versus 36.0 percent). Detection of flat adenomas (Paris classification 0 to IIb) was increased significantly with use of AI-assisted colonoscopy (numbers of detected flat adenomas: 17 of 30 versus four of 20).
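The per-arm detection rates quoted above follow directly from the reported counts:

```python
def detection_rate(detected, total):
    """Per-arm adenoma detection rate as a percentage of patients."""
    return 100.0 * detected / total

hd_wle = detection_rate(12, 46)   # high-definition white-light arm
ai_arm = detection_rate(18, 50)   # AI-assisted colonoscopy arm
print(round(hd_wle, 1), round(ai_arm, 1))  # 26.1 36.0
```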
- Health & Medicine > Therapeutic Area > Gastroenterology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
Improving Precancerous Case Characterization via Transformer-based Ensemble Learning
Zhong, Yizhen, Xiao, Jiajie, Vetterli, Thomas, Matin, Mahan, Loo, Ellen, Lin, Jimmy, Bourgon, Richard, Shapira, Ofer
The application of natural language processing (NLP) to cancer pathology reports has focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer from precancerous cases. We achieved a 0.914 macro-F1 score for classifying patients into negative, non-advanced adenoma, advanced adenoma, and CRC classes. We further improved performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named entity recognition (NER). Our results demonstrate the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
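Combining a status classifier with lesion-size NER, as the ensemble above does, can be sketched with a simple post-hoc rule: adenomas whose extracted size reaches the standard 10 mm clinical threshold are upgraded to "advanced". The regex-based size extraction and the rule itself are illustrative stand-ins, not the paper's exact ensemble logic.

```python
import re

def classify_with_size(status, report_text):
    """Combine a cancer-status label with lesion sizes pulled from the
    report text: adenomas measuring >= 10 mm are upgraded to 'advanced
    adenoma'. A sketch of rule-based ensembling, not the paper's model."""
    sizes_mm = [float(m) for m in re.findall(r"(\d+(?:\.\d+)?)\s*mm",
                                             report_text)]
    if status == "adenoma" and sizes_mm and max(sizes_mm) >= 10:
        return "advanced adenoma"
    return status

print(classify_with_size("adenoma", "tubular adenoma, 12 mm polyp in sigmoid"))
print(classify_with_size("adenoma", "single 4 mm sessile polyp"))
```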
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > California > San Mateo County > South San Francisco (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.46)
Medtronic's GI Genius in study for detecting adenomas during colonoscopy
A 2,000-patient U.K. study has been set up to generate real-world evidence of the value of using AI technology to detect bowel polyps (adenomas) during colonoscopy procedures. The aim is to show whether Medtronic plc's GI Genius system improves the detection of polyps when deployed in both expert centers and non-specialist units, in the routine diagnostic screening of patients referred from primary care or through the national fecal immunochemical testing program.
- Health & Medicine > Therapeutic Area > Oncology > Colorectal Cancer (0.80)
- Health & Medicine > Therapeutic Area > Gastroenterology (0.80)
Dysplasia grading of colorectal polyps through CNN analysis of WSI
Perlo, Daniele, Tartaglione, Enzo, Bertero, Luca, Cassoni, Paola, Grangetto, Marco
Colorectal cancer is a leading cause of cancer death for both men and women. For this reason, histopathological characterization of colorectal polyps is the pathologist's major instrument for inferring the actual cancer risk and guiding further follow-up. Colorectal polyp diagnosis includes evaluation of the polyp type and, more importantly, the grade of dysplasia. This latter evaluation represents a critical step for clinical follow-up. The proposed deep learning-based classification pipeline is built on a state-of-the-art convolutional neural network, trained using countermeasures to tackle the high resolution of whole-slide images (WSI) and a heavily imbalanced dataset. The experimental results show that one can successfully classify adenoma dysplasia grade with 70% accuracy, which is in line with pathologists' concordance.
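One standard countermeasure for a heavily imbalanced training set, like the one mentioned above, is inverse-frequency class weighting of the loss. The sketch below shows the weight computation only; whether the authors used this exact scheme is an assumption, and the label distribution is invented.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rarer dysplasia grades get larger
    weights so the training loss is not dominated by the majority class.
    Weights are normalized so a perfectly balanced set yields all 1.0."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# Toy label distribution: low-grade dysplasia dominates.
labels = ["low"] * 80 + ["high"] * 15 + ["normal"] * 5
w = class_weights(labels)
print({c: round(v, 2) for c, v in w.items()})
```

These weights would typically be passed to a weighted cross-entropy loss, so that each mistake on a rare grade costs proportionally more.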
UniToPatho, a labeled histopathological dataset for colorectal polyps classification and adenoma dysplasia grading
Barbano, Carlo Alberto, Perlo, Daniele, Tartaglione, Enzo, Fiandrotti, Attilio, Bertero, Luca, Cassoni, Paola, Grangetto, Marco
Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin (H&E) stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. We present our dataset and provide insights on how to tackle the problem of automatic colorectal polyps characterization.
The best medical AI research (that you probably haven't heard of)
I've been talking in recent posts about how our typical methods of testing AI systems are inadequate and potentially unsafe. In particular, I've complained that all of the headline-grabbing papers so far only do controlled experiments, so we don't know how the AI systems will perform on real patients. Today I am going to highlight a piece of work that has not received much attention, but actually went "all the way" and tested an AI system in clinical practice, assessing clinical outcomes. They did an actual clinical trial! Big news … so why haven't you heard about it?
- Research Report > Strength High (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.38)
- Health & Medicine > Diagnostic Medicine (0.35)
- Health & Medicine > Therapeutic Area > Oncology (0.31)
Diagnosing Colorectal Polyps in the Wild with Capsule Networks
LaLonde, Rodney, Kandel, Pujan, Spampinato, Concetto, Wallace, Michael B., Bagci, Ulas
Colorectal cancer, largely arising from precursor lesions called polyps, remains one of the leading causes of cancer-related death worldwide. Current clinical standards require the resection and histopathological analysis of polyps due to test accuracy and sensitivity of optical biopsy methods falling substantially below recommended levels. In this study, we design a novel capsule network architecture (D-Caps) to improve the viability of optical biopsy of colorectal polyps. Our proposed method introduces several technical novelties including a novel capsule architecture with a capsule-average pooling (CAP) method to improve efficiency in large-scale image classification. We demonstrate improved results over the previous state-of-the-art convolutional neural network (CNN) approach by as much as 43%. This work provides an important benchmark on the new Mayo Polyp dataset, a significantly more challenging and larger dataset than previous polyp studies, with results stratified across all available categories, imaging devices and modalities, and focus modes to promote future direction into AI-driven colorectal cancer screening systems. Code is publicly available at https://github.com/lalonderodney/D-Caps .
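The capsule-average pooling (CAP) idea named above can be sketched as averaging each capsule's pose vector over the spatial grid, collapsing an H x W map of D-dimensional capsules into a single D-dimensional capsule. This is a rough conceptual sketch of spatial averaging over capsule vectors, not the D-Caps implementation.

```python
def capsule_average_pool(grid):
    """Average pose vectors over an H x W spatial grid of D-dim
    capsules, yielding one D-dim capsule (a sketch of the CAP idea)."""
    h, w, d = len(grid), len(grid[0]), len(grid[0][0])
    pooled = [0.0] * d
    for row in grid:
        for vec in row:
            for i, v in enumerate(vec):
                pooled[i] += v
    return [p / (h * w) for p in pooled]

# 2x2 spatial grid of 3-D capsule pose vectors.
grid = [[[1.0, 0.0, 2.0], [3.0, 0.0, 2.0]],
        [[1.0, 4.0, 2.0], [3.0, 4.0, 2.0]]]
print(capsule_average_pool(grid))  # [2.0, 2.0, 2.0]
```

Replacing fully connected capsule routing with an averaging step like this is what reduces the parameter count for large-scale image classification.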
- North America > United States > Florida > Duval County > Jacksonville (0.04)
- Europe > Italy (0.04)
- Asia > Japan (0.04)
- Research Report > New Finding (0.49)
- Research Report > Experimental Study (0.47)
- Health & Medicine > Therapeutic Area > Gastroenterology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Colorectal Cancer (0.74)