carcinoma
- Europe > Spain > Andalusia > Granada Province > Granada (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- (2 more...)
Assessing the Feasibility of Early Cancer Detection Using Routine Laboratory Data: An Evaluation of Machine Learning Approaches on an Imbalanced Dataset
The development of accessible screening tools for early cancer detection in dogs represents a significant challenge in veterinary medicine. Routine laboratory data offer a promising, low-cost source for such tools, but their utility is hampered by the non-specificity of individual biomarkers and the severe class imbalance inherent in screening populations. This study assesses the feasibility of cancer risk classification using the Golden Retriever Lifetime Study (GRLS) cohort under real-world constraints, including the grouping of diverse cancer types and the inclusion of post-diagnosis samples. A comprehensive benchmark evaluation was conducted, systematically comparing 126 analytical pipelines that comprised various machine learning models, feature selection methods, and data balancing techniques. Data were partitioned at the patient level to prevent leakage. The optimal model, a Logistic Regression classifier with class weighting and recursive feature elimination, demonstrated moderate ranking ability (AUROC = 0.815; 95% CI: 0.793-0.836) but poor clinical classification performance (F1-score = 0.25, Positive Predictive Value = 0.15). While a high Negative Predictive Value (0.98) was achieved, insufficient recall (0.79) precludes its use as a reliable rule-out test. Interpretability analysis with SHapley Additive exPlanations (SHAP) revealed that predictions were driven by non-specific features like age and markers of inflammation and anemia. It is concluded that while a statistically detectable cancer signal exists in routine lab data, it is too weak and confounded for clinically reliable discrimination from normal aging or other inflammatory conditions. This work establishes a critical performance ceiling for this data modality in isolation and underscores that meaningful progress in computational veterinary oncology will require integration of multi-modal data sources.
- Asia > China > Jilin Province (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
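The combination reported above (high NPV, low PPV) is what severe class imbalance produces mechanically. A minimal sketch shows how the four reported metrics fall out of one confusion matrix; the counts below are hypothetical, chosen only to land near the reported values, and are not taken from the GRLS study:

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used to judge a screening test."""
    ppv = tp / (tp + fp)        # precision: P(cancer | positive test)
    npv = tn / (tn + fn)        # P(healthy | negative test)
    recall = tp / (tp + fn)     # sensitivity
    f1 = 2 * ppv * recall / (ppv + recall)
    return {"PPV": ppv, "NPV": npv, "recall": recall, "F1": f1}

# Hypothetical rare-positive population: ~1% prevalence means even a fairly
# sensitive model produces many false positives relative to true positives.
m = screening_metrics(tp=79, fp=450, tn=9450, fn=21)
# → recall 0.79, PPV ≈ 0.15, F1 ≈ 0.25, NPV ≈ 1.0
```

Note how PPV collapses purely because negatives outnumber positives ~100:1, even though the model's per-class behaviour is unchanged.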
PathReasoning: A multimodal reasoning agent for query-based ROI navigation on whole-slide images
Zhang, Kunpeng, Xu, Hanwen, Wang, Sheng
Deciphering the tumor microenvironment from Whole Slide Images (WSIs) is compelling, as it is key to cancer diagnosis, prognosis, and treatment response. While these gigapixel images offer a comprehensive portrait of cancer, their extremely large size, often more than 10 billion pixels, makes it challenging and time-consuming to navigate to the regions relevant to diverse clinical inspections. Inspired by pathologists, who navigate WSIs through a combination of sampling, reasoning, and self-reflection, we propose "PathReasoning", a multimodal reasoning agent that iteratively navigates across WSIs through multiple rounds of reasoning and refinement. Starting from randomly sampled candidate regions, PathReasoning reviews its current selections with self-reflection, reasons over the correspondence between visual observations and the clinical question, and concludes by proposing new regions to explore. Across rounds, PathReasoning builds a reasoning chain that gradually directs attention to diagnostically relevant areas. It thus turns each whole slide into a sequence of question-guided views, allowing the model to efficiently find informative ROIs within a fixed number of steps, without the need for dense pixel-level annotations. PathReasoning substantially outperforms strong ROI-selection approaches, by 6.7% and 3.1% AUROC on subtyping and longitudinal analysis tasks, respectively. The high-quality ROIs further support accurate report generation for breast cancer, significantly outperforming standard GPT-4o by 10% in accuracy. PathReasoning prioritizes question-specific regions and constructs interpretable reasoning chains, supporting efficient slide review, consistent diagnostic interpretation, comprehensive reporting, and evidence traceability in digital pathology.
- North America > United States > Washington > King County > Seattle (0.14)
- Europe > Switzerland > Vaud > Lausanne (0.04)
- Europe > Norway > Norwegian Sea (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.93)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
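The sample-reflect-propose loop described above can be sketched in a few lines. This is a simplified skeleton, not the paper's implementation: `score_fn` stands in for the agent's multimodal reasoning over a region and the clinical question, and the "propose new regions" step is reduced to random refill:

```python
import random

def navigate(regions, score_fn, rounds=3, k=4, seed=0):
    """Iterative question-guided ROI search: sample candidates, score them
    against the query, keep the best half, and refill the candidate set
    for the next round of review."""
    rng = random.Random(seed)
    selected = rng.sample(regions, k)              # initial random candidates
    for _ in range(rounds):
        ranked = sorted(selected, key=score_fn, reverse=True)
        keep = ranked[: k // 2]                    # self-reflection: retain best
        pool = [r for r in regions if r not in keep]
        selected = keep + rng.sample(pool, k - len(keep))  # propose new regions
    return sorted(selected, key=score_fn, reverse=True)[:k]
```

Because good candidates survive every round while the rest are resampled, attention concentrates on high-scoring regions within a fixed step budget, mirroring the reasoning-chain behaviour described in the abstract.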
Pillar-0: A New Frontier for Radiology Foundation Models
Agrawal, Kumar Krishna, Liu, Longchao, Lian, Long, Nercessian, Michael, Harguindeguy, Natalia, Wu, Yufu, Mikhael, Peter, Lin, Gigin, Sequist, Lecia V., Fintelmann, Florian, Darrell, Trevor, Bai, Yutong, Chung, Maggie, Yala, Adam
Radiology plays an integral role in modern medicine, yet rising imaging volumes have far outpaced workforce growth. Foundation models offer a path toward assisting with the full spectrum of radiology tasks, but existing medical models remain limited: they process volumetric CT and MRI as low-fidelity 2D slices, discard critical grayscale contrast information, and lack evaluation frameworks that reflect real clinical practice. We introduce Pillar-0, a radiology foundation model pretrained on 42,990 abdomen-pelvis CTs, 86,411 chest CTs, 14,348 head CTs, and 11,543 breast MRIs from a large academic center, together with RATE, a scalable framework that extracts structured labels for 366 radiologic findings with near-perfect accuracy using LLMs. Across internal test sets of 14,230 abdomen-pelvis CTs, 10,646 chest CTs, 4,906 head CTs, and 1,585 breast MRIs, Pillar-0 establishes a new performance frontier, achieving mean AUROCs of 86.4, 88.0, 90.1, and 82.9, outperforming MedGemma (Google), MedImageInsight (Microsoft), Lingshu (Alibaba), and Merlin (Stanford) by 7.8-15.8 AUROC points and ranking best in 87.2% (319/366) of tasks. Pillar-0 similarly outperforms all baselines in an external validation on the Stanford Abdominal CT dataset, including Merlin (82.2 vs 80.6 AUROC). Pillar-0 extends to tasks beyond its pretraining, such as long-horizon lung cancer risk prediction, where it improves upon the state-of-the-art Sybil by 3.0 C-index points on NLST, and generalizes with gains of 5.9 (MGH) and 1.9 (CGMH). In brain hemorrhage detection, Pillar-0 obtained >95 AUROC when using only 1/20th of the data of the next most sample-efficient baseline. Pillar-0 and RATE together provide an open, clinically rigorous foundation for building high-performance radiology systems, enabling applications that were previously infeasible due to computational, data, and evaluation constraints.
- North America > United States > Massachusetts (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- South America > Brazil > São Paulo (0.04)
- Asia > Taiwan > Taiwan Province > Taipei (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
PRISM2: Unlocking Multi-Modal General Pathology AI with Clinical Dialogue
Vorontsov, Eugene, Shaikovski, George, Casson, Adam, Viret, Julian, Zimmermann, Eric, Tenenholtz, Neil, Wang, Yi Kan, Bernhard, Jan H., Godrich, Ran A., Retamero, Juan A., Shia, Jinru, Gonen, Mithat, Weiser, Martin R., Klimstra, David S., Yousfi, Razik, Fusi, Nicolo, Fuchs, Thomas J., Severson, Kristen, Liu, Siqi
Recent rapid progress in the field of computational pathology has been enabled by foundation models. These models are beginning to move beyond encoding image patches toward whole-slide understanding, but their clinical utility remains limited. In this work, we present PRISM2, a multimodal slide-level foundation model trained on data from 700,000 diagnostic specimen-report pairs, the largest vision (2.3 million whole slide images) and language (14M question-answer pairs) histopathology dataset to date. By learning through clinical-dialogue supervision, PRISM2 aligns histomorphologic features with the language of diagnostic reasoning, producing slide-level representations that support both direct diagnostic question-answering and transferable embeddings for downstream tasks. Without additional training, PRISM2 matches or exceeds the cancer-detection performance of clinical-grade products. This is achieved without loss of generality on other tasks, where PRISM2 achieves top performance. Finally, using survival prediction as an example, we show that task-specific fine-tuning with a large dataset can outperform task-specific models, further improving performance. These results demonstrate how language-supervised pretraining provides a scalable, clinically grounded signal for learning generalizable pathology representations, bridging human diagnostic reasoning and foundation-model performance.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- Europe (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (1.00)
- Health & Medicine > Therapeutic Area > Endocrinology (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- (2 more...)
Robust Pan-Cancer Mitotic Figure Detection with YOLOv12
Bourgade, Raphaël, Balezo, Guillaume, Feki, Hana, Monier, Lily, Blons, Matthieu, Blondel, Alice, Loussouarn, Delphine, Vincent-Salomon, Anne, Walter, Thomas
Detecting mitotic figures (MFs) in histopathology images remains a challenging task. Their quantification traditionally relies on the manual identification of "hot spots" by pathologists, followed by visual counting--an approach that is inherently subjective and may not reliably reflect the true proliferative activity of a tumor. With the rise of digital pathology and artificial intelligence, numerous efforts have been made to automate mitosis detection in order to enhance accuracy, reproducibility, and scalability. Among these, the MItosis DOmain Generalization (MIDOG) challenges have emerged as a key benchmark for evaluating the generalizability of detection algorithms under realistic domain shifts. The 2021 edition (1) addressed scanner-induced variability using breast cancer WSIs, while the 2022 edition (2) extended the scope to include multiple tissue types and species, introducing further biological diversity. The 2025 MIDOG challenge (3) builds on these foundations with the most comprehensive mitosis-annotated dataset to date, and introduces two tasks: (1) detecting mitotic figures in arbitrary tumor tissue, and (2) determining whether a mitotic figure is atypical or normal. These tasks represent a significant step toward developing robust mitosis detection systems that generalize across diverse and complex histological conditions. In this work, we present a high-performance detection pipeline based on the YOLOv12 object detection architecture.
- Europe > France > Île-de-France > Paris > Paris (0.05)
- Europe > France > Pays de la Loire > Loire-Atlantique > Nantes (0.05)
Generalisation of automatic tumour segmentation in histopathological whole-slide images across multiple cancer types
Skrede, Ole-Johan, Pradhan, Manohar, Isaksen, Maria Xepapadakis, Hveem, Tarjei Sveinsgjerd, Vlatkovic, Ljiljana, Nesbakken, Arild, Lindemann, Kristina, Kristensen, Gunnar B, Kasius, Jenneke, Zeimet, Alain G, Brustugun, Odd Terje, Busund, Lill-Tove Rasmussen, Richardsen, Elin H, Haug, Erik Skaaheim, Brennhovd, Bjørn, Rewcastle, Emma, Lillesand, Melinda, Kvikstad, Vebjørn, Janssen, Emiel, Kerr, David J, Liestøl, Knut, Albregtsen, Fritz, Kleppe, Andreas
Deep learning is expected to aid pathologists by automating tasks such as tumour segmentation. We aimed to develop one universal tumour segmentation model for histopathological images and examine its performance in different cancer types. The model was developed using over 20 000 whole-slide images from over 4 000 patients with colorectal, endometrial, lung, or prostate carcinoma. Performance was validated in pre-planned analyses on external cohorts with over 3 000 patients across six cancer types. Exploratory analyses included over 1 500 additional patients from The Cancer Genome Atlas. Average Dice coefficient was over 80% in all validation cohorts with en bloc resection specimens and in The Cancer Genome Atlas cohorts. No loss of performance was observed when comparing the universal model with models specialised on single cancer types. In conclusion, extensive and rigorous evaluations demonstrate that generic tumour segmentation by a single model is possible across cancer types, patient populations, sample preparations, and slide scanners.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Norway > Eastern Norway > Oslo (0.06)
- Europe > Norway > Western Norway > Rogaland > Stavanger (0.05)
- (11 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Prostate Cancer (0.48)
- Health & Medicine > Therapeutic Area > Oncology > Lung Cancer (0.46)
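The Dice coefficient used as the headline metric in the segmentation study above measures overlap between a predicted and a reference tumour mask. A minimal pure-Python sketch over flat binary masks (real pipelines operate on full-resolution mask arrays):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks, given as flat 0/1 sequences:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
# overlap = 2 pixels, |pred| = 3, |truth| = 3 → Dice = 2*2/6 ≈ 0.667
```

A score above 80%, as reported across the validation cohorts, therefore means predicted tumour regions share most of their area with the pathologist-annotated reference.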
GAS-MIL: Group-Aggregative Selection Multi-Instance Learning for Ensemble of Foundation Models in Digital Pathology Image Analysis
Quan, Peiran, Gu, Zifan, Zhao, Zhuo, Zhou, Qin, Yang, Donghan M., Rong, Ruichen, Xie, Yang, Xiao, Guanghua
Foundation models (FMs) have transformed computational pathology by providing powerful, general-purpose feature extractors. However, adapting and benchmarking individual FMs for specific diagnostic tasks is often time-consuming and resource-intensive, especially given their scale and diversity. To address this challenge, we introduce Group-Aggregative Selection Multi-Instance Learning (GAS-MIL), a flexible ensemble framework that seamlessly integrates features from multiple FMs, preserving their complementary strengths without requiring manual feature selection or extensive task-specific fine-tuning. Across classification tasks in three cancer datasets -- prostate (PANDA), ovarian (UBC-OCEAN), and breast (TCGA-BrCa) -- GAS-MIL consistently achieves superior or on-par performance relative to individual FMs and established MIL methods, demonstrating its robustness and generalizability. By enabling efficient integration of heterogeneous FMs, GAS-MIL streamlines model deployment for pathology and provides a scalable foundation for future multimodal and precision oncology applications.
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
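The group-aggregative idea above can be sketched with attention-based MIL pooling: pool each foundation model's instance features into one embedding, then softmax-weight the pooled "group" embeddings into a single slide vector. This is a simplified illustration, not the paper's architecture; it assumes all models project to the same feature dimension, and the weight vectors stand in for learned parameters:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_pool(bag, w):
    """Attention-MIL pooling: weight each instance by a score vector w."""
    att = softmax([sum(f * wi for f, wi in zip(inst, w)) for inst in bag])
    dim = len(bag[0])
    return [sum(a * inst[d] for a, inst in zip(att, bag)) for d in range(dim)]

def group_aggregate(bags_by_model, inst_w, group_w):
    """Pool each model's bag separately (one 'group' per FM), then
    softmax-weight the pooled group embeddings into one slide vector."""
    pooled = [attention_pool(bag, inst_w) for bag in bags_by_model]
    g = softmax([sum(f * wi for f, wi in zip(p, group_w)) for p in pooled])
    dim = len(pooled[0])
    return [sum(gm * p[d] for gm, p in zip(g, pooled)) for d in range(dim)]
```

The selection step is soft here (a softmax over groups); the appeal is that a weak or redundant FM receives low group weight automatically, with no manual feature selection.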
A Novel Recurrent Neural Network Framework for Prediction and Treatment of Oncogenic Mutation Progression
Parthasarathy, Rishab, Bhowmik, Achintya
Despite significant medical advancements, cancer remains the second leading cause of death, with over 600,000 deaths per year in the US. One emerging field, pathway analysis, is promising but still relies on manually derived wet lab data, which is time-consuming to acquire. This work proposes an efficient, effective end-to-end framework for Artificial Intelligence (AI) based pathway analysis that predicts both cancer severity and mutation progression, thus recommending possible treatments. The proposed technique combines time-series machine learning models with pathway analysis. First, mutation sequences were isolated from The Cancer Genome Atlas (TCGA) Database. Then, a novel preprocessing algorithm was used to filter key mutations by mutation frequency. This data was fed into a Recurrent Neural Network (RNN) that predicted cancer severity. The model then probabilistically combined the RNN predictions, information from the preprocessing algorithm, and multiple drug-target databases to predict future mutations and recommend possible treatments. The framework achieved robust results, with Receiver Operating Characteristic (ROC) curve accuracies greater than 60%, comparable to existing cancer diagnostics. In addition, preprocessing played an instrumental role in isolating important mutations, suggesting that each cancer stage studied may contain on the order of a few hundred key driver mutations, consistent with current research. Heatmaps based on predicted gene frequency were also generated, highlighting key mutations in each cancer. Overall, this work is the first to propose an efficient, cost-effective end-to-end framework for projecting cancer progression and providing possible treatments without relying on expensive, time-consuming wet lab work.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > Canada > Alberta (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- (9 more...)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study (0.46)
- Government > Regional Government > North America Government > United States Government (0.67)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (0.47)
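The frequency-based preprocessing step described in the abstract above can be sketched simply: count how many patients carry each mutation and keep those above a prevalence threshold. The function name and the threshold are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def filter_key_mutations(patient_mutations, min_fraction=0.05):
    """Keep mutations present in at least `min_fraction` of patients --
    a sketch of frequency-based candidate-driver filtering.
    `patient_mutations` is a list of per-patient mutation lists."""
    counts = Counter(m for muts in patient_mutations for m in set(muts))
    n = len(patient_mutations)
    return {m for m, c in counts.items() if c / n >= min_fraction}

patients = [["TP53", "KRAS"], ["TP53"], ["BRAF"]]
filter_key_mutations(patients, min_fraction=0.5)  # → {"TP53"} (2 of 3 patients)
```

Deduplicating within each patient (`set(muts)`) ensures the count reflects prevalence across patients rather than repeated calls in one sample.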