Heatmap Guided Query Transformers for Robust Astrocyte Detection across Immunostains and Resolutions

Zhang, Xizhe, Zhu, Jiayang

arXiv.org Artificial Intelligence

Astrocytes are critical glial cells whose altered morphology and density are hallmarks of many neurological disorders. However, their intricate branching and stain-dependent variability make automated detection in histological images a highly challenging task. To address these challenges, we propose a hybrid CNN-Transformer detector that combines local feature extraction with global contextual reasoning. A heatmap-guided query mechanism generates spatially grounded anchors for small and faint astrocytes, while a lightweight Transformer module improves discrimination in dense clusters. Evaluated on ALDH1L1- and GFAP-stained astrocyte datasets, the model consistently outperformed Faster R-CNN, YOLOv11, and DETR, achieving higher sensitivity with fewer false positives, as confirmed by FROC analysis. These results highlight the potential of hybrid CNN-Transformer architectures for robust astrocyte detection and provide a foundation for advanced computational pathology tools.
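
The abstract only outlines the pipeline, so the following is a minimal sketch of how a heatmap-guided query mechanism might be wired up: a CNN backbone predicts an astrocyte likelihood heatmap, and the top-k heatmap peaks become positional queries for a small Transformer decoder. All module names, layer sizes, and the two-layer decoder are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HeatmapGuidedDetector(nn.Module):
    # Illustrative sketch: heatmap peaks are converted into spatially
    # grounded queries that a lightweight Transformer decoder refines
    # against the full feature map.
    def __init__(self, dim=256, num_queries=100):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in CNN feature extractor
            nn.Conv2d(3, dim, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.heatmap_head = nn.Conv2d(dim, 1, 1)    # per-location astrocyte score
        self.query_embed = nn.Linear(2, dim)        # embed normalized (x, y) anchors
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.num_queries = num_queries

    def forward(self, images):
        feats = self.backbone(images)               # (B, C, H, W)
        heat = self.heatmap_head(feats).sigmoid()   # (B, 1, H, W)
        B, C, H, W = feats.shape
        _, idx = heat.flatten(1).topk(self.num_queries, dim=1)
        ys = idx.div(W, rounding_mode="floor").float() / H
        xs = (idx % W).float() / W                  # normalized peak coordinates
        queries = self.query_embed(torch.stack([xs, ys], dim=-1))  # (B, Q, C)
        memory = feats.flatten(2).transpose(1, 2)   # (B, H*W, C) context tokens
        return self.decoder(queries, memory)        # refined per-anchor embeddings

model = HeatmapGuidedDetector()
out = model(torch.randn(1, 3, 256, 256))
print(out.shape)                                    # -> (1, 100, 256)

A detection head (class score plus box or center regression per query) would sit on top of the decoder output; the paper's actual heads and losses are not described in the abstract.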


Blue light beats bleach for yellow stains

Popular Science

Sweat stains are a white t-shirt's worst enemy. Unfortunately, that notorious fabric yellowing is often unavoidable due to the combination of oleic acid, squalene, and other organic compounds found in your skin oil and sweat. Factor in a chance encounter with natural food pigments like the carotene and lycopene found in tomatoes and oranges, and it's probably only a matter of time before you'll need to break out the bleach or hydrogen peroxide. Even then, the results are often unsatisfactory for your (once) vibrant white shirts.


Cross-Domain Image Synthesis: Generating H&E from Multiplex Biomarker Imaging

Saurav, Jillur Rahman, Nasr, Mohammad Sadegh, Luber, Jacob M.

arXiv.org Artificial Intelligence

While multiplex immunofluorescence (mIF) imaging provides deep, spatially-resolved molecular data, integrating this information with the morphological standard of Hematoxylin & Eosin (H&E) can be very important for obtaining complementary information about the underlying tissue. Generating a virtual H&E stain from mIF data offers a powerful solution, providing immediate morphological context. Crucially, this approach enables the application of the vast ecosystem of H&E-based computer-aided diagnosis (CAD) tools to analyze rich molecular data, bridging the gap between molecular and morphological analysis. In this work, we investigate the use of a multi-level Vector-Quantized Generative Adversarial Network (VQGAN) to create high-fidelity virtual H&E stains from mIF images. We rigorously evaluated our VQGAN against a standard conditional GAN (cGAN) baseline on two publicly available colorectal cancer datasets, assessing performance on both image similarity and functional utility for downstream analysis. Our results show that while both architectures produce visually plausible images, the virtual stains generated by our VQGAN provide a more effective substrate for computer-aided diagnosis. Specifically, downstream nuclei segmentation and semantic preservation in tissue classification tasks performed on VQGAN-generated images demonstrate superior performance and agreement with ground-truth analysis compared to those from the cGAN. This work establishes that a multi-level VQGAN is a robust and superior architecture for generating scientifically useful virtual stains, offering a viable pathway to integrate the rich molecular data of mIF into established and powerful H&E-based analytical workflows.
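
The paper's multi-level VQGAN is not reproduced here, but its core building block, a vector-quantization bottleneck with a straight-through estimator and a commitment loss, can be sketched as follows. Codebook size, embedding dimension, and the commitment weight are assumed values, not the authors' settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    # Standard VQ layer as used inside VQGAN-style models: each spatial
    # feature vector from the encoder is snapped to its nearest codebook entry.
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1 / num_codes, 1 / num_codes)
        self.beta = beta                             # commitment loss weight

    def forward(self, z):                            # z: (B, C, H, W)
        z_flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        dists = torch.cdist(z_flat, self.codebook.weight)    # (N, num_codes)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx).view(z.shape[0], z.shape[2], z.shape[3], -1)
        z_q = z_q.permute(0, 3, 1, 2)
        # Codebook update term + commitment term (VQ-VAE objective).
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()                 # straight-through estimator
        return z_q, loss, idx

vq = VectorQuantizer()
z_q, vq_loss, codes = vq(torch.randn(2, 64, 16, 16))
print(z_q.shape, vq_loss.item())

In a full VQGAN the quantized latents feed a decoder trained with reconstruction, perceptual, and adversarial losses; the paper applies this scheme at multiple levels.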


Staining and locking computer vision models without retraining

Sutton, Oliver J., Zhou, Qinghua, Leete, George, Gorban, Alexander N., Tyukin, Ivan Y.

arXiv.org Artificial Intelligence

We introduce new methods of staining and locking computer vision models, to protect their owners' intellectual property. Staining, also known as watermarking, embeds secret behaviour into a model which can later be used to identify it, while locking aims to make a model unusable unless a secret trigger is inserted into input images. Unlike existing methods, our algorithms can be used to stain and lock pre-trained models without requiring fine-tuning or retraining, and come with provable, computable guarantees bounding their worst-case false positive rates. The stain and lock are implemented by directly modifying a small number of the model's weights and have minimal impact on the (unlocked) model's performance. Locked models are unlocked by inserting a small `trigger patch' into the corner of the input image. We present experimental results showing the efficacy of our methods and demonstrating their practical performance on a variety of computer vision models.
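
The weight-editing construction and its provable false-positive bounds are the paper's contribution and are not detailed in this abstract. The sketch below illustrates only the trigger-patch mechanics: pasting a secret patch into an image corner and detecting it with a matched filter, which is one way a single edited filter inside a network could respond to the trigger. Patch size, corner, and the scoring rule are arbitrary illustrative choices.

import torch

def insert_trigger_patch(images, patch, corner=(0, 0)):
    # Paste a small secret patch into the corner of each input image.
    out = images.clone()
    ph, pw = patch.shape[-2:]
    y, x = corner
    out[..., y:y + ph, x:x + pw] = patch
    return out

torch.manual_seed(0)
patch = torch.rand(3, 8, 8)                      # the secret trigger
clean = torch.rand(4, 3, 64, 64)
triggered = insert_trigger_patch(clean, patch)

def trigger_score(images, patch):
    # Matched filter over the corner region: maximal when the patch is present.
    region = images[..., :patch.shape[-2], :patch.shape[-1]]
    return (region * patch).flatten(1).sum(1) / patch.pow(2).sum()

print(trigger_score(triggered, patch))           # exactly 1.0 on triggered inputs
print(trigger_score(clean, patch))               # noticeably lower on clean inputs

In the actual method this detection happens inside the model via directly modified weights, with analytically bounded worst-case false-positive rates rather than an empirical threshold.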


At-home test works like coffee rings to spot serious illness faster

FOX News

The HHS Secretary told members of Congress on Tuesday that wearables are "a way of people taking control over their own health." Have you ever noticed how a spilled cup of coffee leaves behind a telltale brown ring? While those stains might be annoying, the science behind them, known as the coffee ring effect, has sparked innovations in health technology. UC Berkeley researchers recently turned this everyday phenomenon into a breakthrough medical test, making rapid and reliable disease detection as easy as brewing your morning coffee. Curious how a simple coffee stain could inspire cutting-edge diagnostics and revolutionize at-home testing?


Enhancing Apple's Defect Classification: Insights from Visible Spectrum and Narrow Spectral Band Imaging

Coello, Omar, Coronel, Moisés, Carpio, Darío, Vintimilla, Boris, Chuquimarca, Luis

arXiv.org Artificial Intelligence

This study addresses the classification of defects in apples as a crucial measure to mitigate economic losses and optimize the food supply chain. An innovative approach is employed that integrates images from the visible spectrum and the 660 nm spectral band to enhance accuracy and efficiency in defect classification. The methodology is based on Single-Input and Multi-Input convolutional neural networks (CNNs) to validate the proposed strategies. Steps include image acquisition and preprocessing, classification model training, and performance evaluation. Results demonstrate that defect classification using the 660 nm spectral band reveals details not visible in the entire visible spectrum, and that using the appropriate spectral range for classification is slightly superior to using the entire visible spectrum: the MobileNetV1 model achieves an accuracy of 98.80% on the validation dataset versus the 98.26% achieved using the entire visible spectrum. Conclusions highlight the potential to enhance the method by capturing images in specific spectral ranges using filters, enabling more effective network training for the classification task. These improvements could further strengthen the system's capability to identify and classify defects in apples.
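
As a rough sketch of the Multi-Input idea, the toy model below runs separate convolutional branches over an RGB image and a single-channel 660 nm image and fuses the branch features for classification. Branch depths, channel counts, and the number of defect classes are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class MultiInputDefectCNN(nn.Module):
    # Illustrative two-branch CNN: one branch for visible-spectrum RGB input,
    # one for the narrow 660 nm band, fused before the classifier head.
    def __init__(self, num_classes=4):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_branch = branch(3)                  # visible spectrum
        self.band_branch = branch(1)                 # 660 nm narrow band
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, band660):
        fused = torch.cat([self.rgb_branch(rgb), self.band_branch(band660)], dim=1)
        return self.classifier(fused)

model = MultiInputDefectCNN()
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)                                  # -> (2, 4)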


Label-free evaluation of lung and heart transplant biopsies using virtual staining

Li, Yuzhu, Pillar, Nir, Liu, Tairan, Ma, Guangdong, Qi, Yuxuan, de Haan, Kevin, Zhang, Yijie, Yang, Xilin, Correa, Adrian J., Xiao, Guangqian, Jen, Kuang-Yu, Iczkowski, Kenneth A., Wu, Yulun, Wallace, William Dean, Ozcan, Aydogan

arXiv.org Artificial Intelligence

Organ transplantation serves as the primary therapeutic strategy for end-stage organ failures. However, allograft rejection is a common complication of organ transplantation. Histological assessment is essential for the timely detection and diagnosis of transplant rejection and remains the gold standard. Nevertheless, the traditional histochemical staining process is time-consuming, costly, and labor-intensive. Here, we present a panel of virtual staining neural networks for lung and heart transplant biopsies, which digitally convert autofluorescence microscopic images of label-free tissue sections into their brightfield histologically stained counterparts, bypassing the traditional histochemical staining process. Specifically, we virtually generated Hematoxylin and Eosin (H&E), Masson's Trichrome (MT), and Elastic Verhoeff-Van Gieson (EVG) stains for label-free transplant lung tissue, along with H&E and MT stains for label-free transplant heart tissue. Subsequent blind evaluations conducted by three board-certified pathologists have confirmed that the virtual staining networks consistently produce high-quality histology images with high color uniformity, closely resembling their well-stained histochemical counterparts across various tissue features. The use of virtually stained images for the evaluation of transplant biopsies achieved comparable diagnostic outcomes to those obtained via traditional histochemical staining, with a concordance rate of 82.4% for lung samples and 91.7% for heart samples. Moreover, virtual staining models create multiple stains from the same autofluorescence input, eliminating structural mismatches observed between adjacent sections stained in the traditional workflow, while also saving tissue, expert time, and staining costs.
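
The virtual staining networks themselves are not specified in this abstract. As a minimal stand-in, the sketch below trains a small encoder-decoder to map a single autofluorescence channel to a three-channel brightfield-like image with an L1 reconstruction loss; published virtual staining work typically adds adversarial and perceptual terms. The architecture and training settings here are placeholders.

import torch
import torch.nn as nn

class VirtualStainer(nn.Module):
    # Toy encoder-decoder: 1-channel autofluorescence in, 3-channel RGB out.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = VirtualStainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss()

autofluor = torch.rand(2, 1, 128, 128)       # label-free autofluorescence input
target = torch.rand(2, 3, 128, 128)          # histochemically stained ground truth
for _ in range(3):                           # stand-in training loop
    pred = model(autofluor)
    loss = l1(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())

One generator per target stain (H&E, MT, EVG) trained this way would mirror the paper's observation that multiple stains can be produced from the same label-free input.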


Multistain Pretraining for Slide Representation Learning in Pathology

Jaume, Guillaume, Vaidya, Anurag, Zhang, Andrew, Song, Andrew H., Chen, Richard J., Sahai, Sharifa, Mo, Dandan, Madrigal, Emilio, Le, Long Phi, Mahmood, Faisal

arXiv.org Artificial Intelligence

Developing self-supervised learning (SSL) models that can learn universal and transferable representations of H&E gigapixel whole-slide images (WSIs) is becoming increasingly valuable in computational pathology. These models hold the potential to advance critical tasks such as few-shot classification, slide retrieval, and patient stratification. Existing approaches for slide representation learning extend the principles of SSL from small images (e.g., 224 x 224 patches) to entire slides, usually by aligning two different augmentations (or views) of the slide. Yet the resulting representation remains constrained by the limited clinical and biological diversity of the views. Instead, we postulate that slides stained with multiple markers, such as immunohistochemistry, can be used as different views to form a rich task-agnostic training signal. To this end, we introduce Madeleine, a multimodal pretraining strategy for slide representation learning. Madeleine is trained with a dual global-local cross-stain alignment objective on large cohorts of breast cancer samples (N=4,211 WSIs across five stains) and kidney transplant samples (N=12,070 WSIs across four stains). We demonstrate the quality of slide representations learned by Madeleine on various downstream evaluations, ranging from morphological and molecular classification to prognostic prediction, comprising 21 tasks using 7,299 WSIs from multiple medical centers. Code is available at https://github.com/mahmoodlab/MADELEINE.
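
Madeleine's full dual global-local objective is defined in the paper and the linked repository; the snippet below sketches only a global alignment term as a symmetric InfoNCE loss that pulls together slide embeddings of the same case under two different stains. The temperature and embedding shapes are assumptions.

import torch
import torch.nn.functional as F

def cross_stain_infonce(he_emb, ihc_emb, temperature=0.07):
    # Symmetric contrastive alignment: matching (H&E, IHC) slide pairs from
    # the same case are positives, all other pairs in the batch are negatives.
    he = F.normalize(he_emb, dim=-1)
    ihc = F.normalize(ihc_emb, dim=-1)
    logits = he @ ihc.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(he.size(0), device=he.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

he_slides = torch.randn(8, 512)                      # slide-level embeddings
ihc_slides = torch.randn(8, 512)                     # same cases, different stain
print(cross_stain_infonce(he_slides, ihc_slides).item())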


Can virtual staining for high-throughput screening generalize?

Tonks, Samuel, Nguyen, Cuong, Hood, Steve, Musso, Ryan, Hopely, Ceridwen, Titus, Steve, Doan, Minh, Styles, Iain, Krull, Alexander

arXiv.org Artificial Intelligence

The large volume and variety of imaging data from high-throughput screening (HTS) in the pharmaceutical industry present an excellent resource for training virtual staining models. However, the potential of models trained under one set of experimental conditions to generalize to other conditions remains underexplored. This study systematically investigates whether data from three cell types (lung, ovarian, and breast) and two phenotypes (toxic and non-toxic conditions) commonly found in HTS can effectively train virtual staining models to generalize across three typical HTS distribution shifts: unseen phenotypes, unseen cell types, and the combination of both. Utilizing a dataset of 772,416 paired bright-field, cytoplasm, nuclei, and DNA-damage stain images, we evaluate the generalization capabilities of models at the pixel-based, instance-wise, and biological-feature-based levels. Our findings indicate that training virtual nuclei and cytoplasm models on non-toxic condition samples not only generalizes to toxic condition samples but also improves performance across all evaluation levels compared to training on toxic condition samples. Generalization to unseen cell types varies by cell type: models trained on ovarian or lung cell samples often perform well under other conditions, while those trained on breast cell samples consistently generalize poorly. The combined shift of unseen cell types and phenotypes is handled well across all levels of evaluation compared to addressing unseen cell types alone. This study represents the first large-scale, data-centric analysis of the generalization capability of virtual staining models trained on diverse HTS datasets, providing valuable strategies for experimental training data generation.
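
The study's three-level evaluation protocol is not spelled out in this abstract. As a minimal example of the pixel-based level, the helper below reports PSNR per distribution-shift split; the split names, tensor shapes, and the choice of PSNR are illustrative assumptions.

import torch

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio, a common pixel-level virtual staining metric.
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

# Hypothetical splits mirroring the three distribution shifts in the study.
splits = {
    "unseen_phenotype": (torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)),
    "unseen_cell_type": (torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)),
    "unseen_both": (torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)),
}
for name, (pred, gt) in splits.items():
    print(f"{name}: {psnr(pred, gt):.2f} dB")

Instance-wise and biological-feature-based levels would replace the pixel metric with, for example, per-nucleus segmentation agreement and downstream feature statistics, as the study describes.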