Rothrock, Brandon
Virchow: A Million-Slide Digital Pathology Foundation Model
Vorontsov, Eugene; Bozkurt, Alican; Casson, Adam; Shaikovski, George; Zelechowski, Michal; Liu, Siqi; Severson, Kristen; Zimmermann, Eric; Hall, James; Tenenholtz, Neil; Fusi, Nicolo; Mathieu, Philippe; van Eck, Alexander; Lee, Donghun; Viret, Julian; Robert, Eric; Wang, Yi Kan; Kunz, Jeremy D.; Lee, Matthew C. H.; Bernhard, Jan; Godrich, Ran A.; Oakley, Gerard; Millar, Ewan; Hanna, Matthew; Retamero, Juan; Moye, William A.; Yousfi, Razik; Kanan, Christopher; Klimstra, David; Rothrock, Brandon; Fuchs, Thomas J.
The use of artificial intelligence to enable precision medicine and decision support systems through the analysis of pathology images has the potential to revolutionize the diagnosis and treatment of cancer. Such applications will depend on models' abilities to capture the diverse patterns observed in pathology images. To address this challenge, we present Virchow, a foundation model for computational pathology. Using self-supervised learning empowered by the DINOv2 algorithm, Virchow is a vision transformer model with 632 million parameters trained on 1.5 million hematoxylin and eosin stained whole slide images from diverse tissue and specimen types, orders of magnitude more data than previous works. The Virchow model enables the development of a pan-cancer detection system with 0.949 overall specimen-level AUC across 17 different cancer types, while also achieving 0.937 AUC on 7 rare cancer types. The Virchow model sets a new state of the art on internal and external tile-level benchmarks and on slide-level biomarker prediction tasks. These performance gains highlight the importance of training on massive pathology image datasets, suggesting that scaling up the data and network architecture can improve accuracy for many high-impact computational pathology applications where only limited amounts of task-specific training data are available.
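To make the downstream use of such a foundation model concrete, the sketch below shows the standard tile-level evaluation workflow: embed image tiles with a frozen self-supervised ViT backbone, then fit a lightweight linear probe on the embeddings. This is a minimal illustration, not the authors' released code; the backbone name, tile size, and random data here are assumptions stood in for Virchow's actual weights and pipeline.

```python
# Minimal sketch of the frozen-backbone + linear-probe workflow used for
# tile-level benchmarks in computational pathology. The backbone below is a
# generic ViT (assumption), not Virchow's released model.
import torch
import timm
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"

# num_classes=0 makes timm return pooled embeddings instead of logits.
backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
backbone.eval().to(device)

@torch.no_grad()
def embed_tiles(tiles: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) tiles to (N, D) frozen embeddings."""
    return backbone(tiles.to(device)).cpu()

# Hypothetical data: 256 tiles with binary tumor / non-tumor labels.
tiles = torch.rand(256, 3, 224, 224)
labels = torch.randint(0, 2, (256,)).numpy()

features = embed_tiles(tiles).numpy()

# Linear probe: the frozen embeddings carry the signal; only this is trained.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("train accuracy:", probe.score(features, labels))
```

For slide- or specimen-level tasks such as pan-cancer detection, tile embeddings like these are typically aggregated (e.g., by an attention-based pooling head) before the final prediction.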
Privacy-Preserving Human Activity Recognition from Extreme Low Resolution
Ryoo, Michael S. (Indiana University) | Rothrock, Brandon (Jet Propulsion Laboratory, California Institute of Technology) | Fleming, Charles (Xi'an Jiaotong-Liverpool University) | Yang, Hyun Jong (Ulsan National Institute of Science and Technology)
Privacy protection from surreptitious video recordings is an important societal challenge. We desire a computer vision system (e.g., a robot) that can recognize human activities and assist in our daily lives, yet ensure that it is not recording video that may invade our privacy. This paper presents a fundamental approach to addressing these conflicting objectives: human activity recognition using only extreme low-resolution (e.g., 16x12) anonymized videos. We introduce the paradigm of inverse super resolution (ISR), the concept of learning the optimal set of image transformations to generate multiple low-resolution (LR) training videos from a single video. ISR learns different types of sub-pixel transformations optimized for activity classification, allowing the classifier to best take advantage of existing high-resolution videos (e.g., YouTube videos) by creating multiple LR training videos tailored to the problem. We experimentally confirm that the paradigm of inverse super resolution benefits activity recognition from extreme low-resolution videos.
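The sketch below illustrates the core ISR operation: apply a small sub-pixel translation/rotation to a high-resolution frame, then downsample it to 16x12, so that each distinct transformation yields a distinct LR training sample from the same source. In the paper these transformation parameters are learned jointly with the classifier; the fixed parameter values, frame size, and function names here are assumptions for illustration only.

```python
# Minimal sketch of inverse super resolution (ISR): generate several
# low-resolution training samples from one high-resolution frame by applying
# sub-pixel transformations before downsampling. The paper *learns* these
# transformation parameters; the values below are fixed assumptions.
import math
import torch
import torch.nn.functional as F

def transform_and_downsample(frame, dx, dy, angle, out_hw=(12, 16)):
    """Apply a sub-pixel translation/rotation, then downsample to out_hw.

    frame: (1, C, H, W) tensor; dx, dy are shifts in pixels; angle in radians.
    """
    _, _, h, w = frame.shape
    cos, sin = math.cos(angle), math.sin(angle)
    # Affine matrix in the normalized coordinates expected by affine_grid;
    # pixel shifts are converted to the [-1, 1] coordinate range.
    theta = torch.tensor([[[cos, -sin, 2.0 * dx / w],
                           [sin,  cos, 2.0 * dy / h]]], dtype=frame.dtype)
    grid = F.affine_grid(theta, list(frame.shape), align_corners=False)
    shifted = F.grid_sample(frame, grid, align_corners=False)
    # Area interpolation acts as averaging-based downsampling to 16x12.
    return F.interpolate(shifted, size=out_hw, mode="area")

# One hypothetical 240x320 HR frame -> several distinct 16x12 LR samples.
hr_frame = torch.rand(1, 3, 240, 320)
example_params = [(0.0, 0.0, 0.0), (0.4, -0.3, 0.0), (-0.5, 0.2, 0.01)]
lr_samples = [transform_and_downsample(hr_frame, dx, dy, a)
              for dx, dy, a in example_params]
print([tuple(s.shape) for s in lr_samples])  # each (1, 3, 12, 16)
```

Because the shifts are sub-pixel, each LR sample aggregates a slightly different set of source pixels, which is what lets a classifier trained on these samples generalize better at extreme low resolution.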