Analysis of histopathology slides is a critical step for many diagnoses, in particular in oncology, where it defines the gold standard. In digital histopathological analysis, highly trained pathologists must review vast whole-slide images of extreme resolution ($100,000^2$ pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the cost of generating ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastasis detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, and learning via top instances and negative evidence, a multiple instance learning technique from the fields of semantic segmentation and object detection.
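The aggregation step mentioned above can be sketched as follows: given per-tile classifier scores for one slide, a top-instance / negative-evidence rule pools the few highest-scoring tiles (most tumor-like) together with the few lowest-scoring tiles (strongest negative evidence) into a slide-level score. This is a minimal illustrative sketch, not the authors' exact formulation; the function name and the choice of `k` are assumptions.

```python
import numpy as np

def mil_image_score(tile_scores: np.ndarray, k: int = 5) -> float:
    """Slide-level score from per-tile scores via top-instance /
    negative-evidence pooling: average the k highest and k lowest
    tile scores instead of all tiles or only the maximum."""
    s = np.sort(tile_scores)
    top = s[-k:]      # top instances: most tumor-like tiles
    bottom = s[:k]    # negative evidence: most confidently normal tiles
    return float(np.mean(np.concatenate([top, bottom])))
```

Pooling over a handful of extreme instances rather than the single maximum makes the slide score less sensitive to one spurious tile while still focusing learning on the most informative regions.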
The mitotic count is a commonly used measure to assess the progression of breast cancer, now the fourth most prevalent cancer. Unfortunately, counting mitoses is a tedious and subjective task with poor reproducibility, especially for non-experts. Machine learning, which can read and compare far more data with greater efficiency, is a promising way to automate this count. Furthermore, technological advances in medicine have increased the amount of image data available for training. In this work, we propose a network constructed with an approach similar to one used for image fraud detection, feeding the segmented image map as the second stream input to Faster R-CNN. This region-based detection model combines a fully convolutional Region Proposal Network, which generates proposals, with a classification network that labels each proposal as containing mitosis or not. Features from both streams are fused in a bilinear pooling layer to maintain their spatial concurrence. After training this model on the ICPR 2014 MITOSIS contest dataset, we obtained an F-measure of 0.507, higher than both the winner's score and scores from recent tests on the same data. Our method is clinically applicable, taking only around five minutes per ten full High Power Field slides when tested on a Quadro P6000 cloud GPU.
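The two-stream fusion step can be sketched as follows: bilinear pooling takes the outer product of the channel vectors of the two feature maps at each spatial location and sum-pools the result over locations, so that only co-occurring activations at the same position contribute. This is a generic sketch of bilinear pooling under assumed NumPy array shapes, not the exact layer used in the paper.

```python
import numpy as np

def bilinear_pool(feat_rgb: np.ndarray, feat_seg: np.ndarray) -> np.ndarray:
    """Fuse feature maps of shape (H, W, C1) and (H, W, C2): outer
    product of channel vectors at each location, summed over H and W,
    then signed-sqrt and L2 normalization (standard for bilinear
    features). Returns a flattened (C1 * C2,) descriptor."""
    fused = np.einsum('hwi,hwj->ij', feat_rgb, feat_seg)   # (C1, C2)
    fused = np.sign(fused) * np.sqrt(np.abs(fused))        # signed sqrt
    norm = np.linalg.norm(fused)
    return (fused / norm if norm > 0 else fused).ravel()
```

Because the outer product is taken location by location before pooling, activations from the RGB stream and the segmentation-map stream only interact where they spatially coincide, which is what preserves the spatial concurrence of the two streams.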