Xu, Yanwu
Attention-based Saliency Hashing for Ophthalmic Image Retrieval
Fang, Jiansheng, Xu, Yanwu, Zhang, Xiaoqing, Hu, Yan, Liu, Jiang
Deep hashing methods have proven effective for large-scale medical image search assisting reference-based diagnosis for clinicians. However, although the salient region plays a maximally discriminative role in ophthalmic images, existing deep hashing methods do not fully exploit the learning ability of the deep network to explicitly capture the features of salient regions. Different grades or classes of ophthalmic images may share a similar overall appearance yet have subtle differences that can be distinguished by mining salient regions. To address this issue, we propose a novel end-to-end network, named Attention-based Saliency Hashing (ASH), for learning compact hash-codes to represent ophthalmic images. ASH embeds a spatial-attention module to focus on the representation of salient regions, highlighting their essential role in differentiating ophthalmic images. Benefiting from the spatial-attention module, the information of salient regions can be mapped into the hash-code for similarity calculation. In the training stage, image pairs are fed through the network with shared weights, and a pairwise loss is designed to maximize the discriminability of the hash-codes. In the retrieval stage, ASH obtains the hash-code of an input image in an end-to-end manner, and the hash-code is then used for similarity calculation to return the most similar images. Extensive experiments on two ophthalmic image datasets of different modalities demonstrate that the proposed ASH further improves retrieval performance compared with state-of-the-art deep hashing methods, owing to the contribution of the spatial-attention module.
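For orientation, the following is a minimal PyTorch-style sketch of the two ingredients the abstract names: a spatial-attention module that re-weights backbone features toward salient regions before hashing, and a contrastive-style pairwise loss on relaxed hash-codes. The backbone choice, layer sizes, code length, and margin are illustrative assumptions, not the authors' configuration.

    # Illustrative sketch only; backbone, code length, and margin are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialAttention(nn.Module):
        """Re-weights each spatial location so salient regions dominate pooling."""
        def __init__(self, in_channels):
            super().__init__()
            self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

        def forward(self, feat):                    # feat: (B, C, H, W)
            attn = torch.sigmoid(self.score(feat))  # (B, 1, H, W) saliency weights
            return feat * attn

    class ASHSketch(nn.Module):
        def __init__(self, backbone, channels=512, bits=48):
            super().__init__()
            self.backbone = backbone                # any CNN returning (B, C, H, W) features
            self.attention = SpatialAttention(channels)
            self.hash_head = nn.Linear(channels, bits)

        def forward(self, x):
            feat = self.attention(self.backbone(x))
            pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
            return torch.tanh(self.hash_head(pooled))  # relaxed hash-code in (-1, 1)

    def pairwise_loss(code_a, code_b, similar, margin=2.0):
        """Pull codes of similar pairs together, push dissimilar pairs apart."""
        dist = (code_a - code_b).pow(2).sum(dim=1)
        return torch.where(similar.bool(), dist, F.relu(margin - dist)).mean()

The weight sharing described in the abstract corresponds to passing both images of a pair through the same ASHSketch instance; at retrieval time the relaxed codes would typically be binarized (e.g., by sign) and compared by Hamming distance.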
Twin Auxiliary Classifiers GAN
Gong, Mingming, Xu, Yanwu, Li, Chunyuan, Zhang, Kun, Batmanghelich, Kayhan
Conditional generative models have enjoyed remarkable progress over the past few years. One popular conditional model is the Auxiliary Classifier GAN (AC-GAN), which generates highly discriminative images by extending the GAN loss function with an auxiliary classifier. However, the diversity of the samples generated by AC-GAN tends to decrease as the number of classes increases, limiting its power on large-scale data. In this paper, we identify the source of the low-diversity issue theoretically and propose a practical solution to the problem. We show that the auxiliary classifier in AC-GAN imposes perfect separability, which is disadvantageous when the supports of the class distributions have significant overlap. To address the issue, we propose the Twin Auxiliary Classifiers Generative Adversarial Net (TAC-GAN), which benefits from a new player that interacts with the other players (the generator and the discriminator) in the GAN. Theoretically, we demonstrate that TAC-GAN can effectively minimize the divergence between the generated and real-data distributions. Extensive experimental results show that our TAC-GAN successfully replicates the true data distributions on simulated data and significantly improves the diversity of class-conditional image generation on real datasets.
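As a rough illustration of the three-player setup described above, the sketch below writes out per-step losses for a TAC-GAN-style training loop: the auxiliary classifier C is fit on real labeled data, the twin classifier C_twin is fit on generated data, and the generator is trained to fool the discriminator, satisfy C, and adversarially maximize the twin classifier's loss. Network definitions and term weightings are omitted and assumed; this is not the authors' implementation.

    # Illustrative per-step losses for a TAC-GAN-style game (assumptions, not the paper's code).
    import torch
    import torch.nn.functional as F

    def discriminator_step(D, C, C_twin, G, x_real, y_real, z, y_fake):
        x_fake = G(z, y_fake).detach()
        real_logit, fake_logit = D(x_real), D(x_fake)
        d_loss = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) \
               + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
        c_loss = F.cross_entropy(C(x_real), y_real)          # auxiliary classifier on real data
        twin_loss = F.cross_entropy(C_twin(x_fake), y_fake)  # twin classifier on generated data
        return d_loss + c_loss + twin_loss

    def generator_step(D, C, C_twin, G, z, y_fake):
        x_fake = G(z, y_fake)
        fake_logit = D(x_fake)
        adv = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
        aux = F.cross_entropy(C(x_fake), y_fake)        # stay classifiable as the intended class
        twin = F.cross_entropy(C_twin(x_fake), y_fake)  # but resist the twin classifier
        return adv + aux - twin                         # generator maximizes the twin's loss

The subtracted twin term is what distinguishes this from plain AC-GAN: it counteracts the auxiliary classifier's push toward perfectly separable, and hence low-diversity, class-conditional distributions.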
Generative-Discriminative Complementary Learning
Xu, Yanwu, Gong, Mingming, Chen, Junxiang, Liu, Tongliang, Zhang, Kun, Batmanghelich, Kayhan
The majority of state-of-the-art deep learning methods for vision applications are discriminative approaches, which model the conditional distribution of labels given instances. The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases. In this paper, we study the complementary learning problem. Unlike ordinary labels, complementary labels are easy to obtain because an annotator only needs to provide a yes/no answer to a randomly chosen candidate class for each instance. We propose a generative-discriminative complementary learning method that estimates the ordinary labels by modeling both the conditional (discriminative) and instance (generative) distributions. Our method, which we call Complementary Conditional GAN (CCGAN), improves the accuracy of predicting ordinary labels and is able to generate high-quality instances despite weak supervision. In addition to extensive empirical studies, we theoretically show that our model can retrieve the true conditional distribution from complementarily labeled data.
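To make the weak-supervision setting concrete, the sketch below shows one common way to simulate complementary labels from ordinary ones, together with a simple, purely discriminative loss that only pushes down the probability of the ruled-out class. The full CCGAN objective additionally models the instance (generative) distribution with a conditional GAN, which is not shown; the helper names and details here are illustrative assumptions.

    # Illustrative sketch of the complementary-label setting (not the paper's method).
    import torch
    import torch.nn.functional as F

    def sample_complementary_labels(y, num_classes):
        """Draw, for each instance, a class it is known NOT to belong to."""
        offsets = torch.randint(1, num_classes, y.shape, device=y.device)
        return (y + offsets) % num_classes  # never equals the true label y

    def complementary_loss(logits, y_bar):
        """Penalize probability mass placed on the complementary (ruled-out) class."""
        probs = F.softmax(logits, dim=1)
        p_bar = probs.gather(1, y_bar.unsqueeze(1)).squeeze(1)
        return -torch.log(1.0 - p_bar + 1e-8).mean()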
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
Bakas, Spyridon, Reyes, Mauricio, Jakab, Andras, Bauer, Stefan, Rempfler, Markus, Crimi, Alessandro, Shinohara, Russell Takeshi, Berger, Christoph, Ha, Sung Min, Rozycki, Martin, Prastawa, Marcel, Alberts, Esther, Lipkova, Jana, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan, Wiest, Roland, Kirschke, Jan, Wiestler, Benedikt, Colen, Rivka, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-Andre, Mahajan, Abhishek, Baid, Ujjwal, Kwon, Dongjin, Agarwal, Manu, Alam, Mahbubul, Albiol, Alberto, Albiol, Antonio, Alex, Varghese, Tran, Tuan Anh, Arbel, Tal, Avery, Aaron, B., Pranjal, Banerjee, Subhashis, Batchelder, Thomas, Batmanghelich, Kayhan, Battistella, Enzo, Bendszus, Martin, Benson, Eze, Bernal, Jose, Biros, George, Cabezas, Mariano, Chandra, Siddhartha, Chang, Yi-Ju, Chazalon, Joseph, Chen, Shengcong, Chen, Wei, Chen, Jefferson, Cheng, Kun, Christoph, Meinel, Chylla, Roger, Clérigues, Albert, Costa, Anthony, Cui, Xiaomeng, Dai, Zhenzhen, Dai, Lutao, Deutsch, Eric, Ding, Changxing, Dong, Chao, Dudzik, Wojciech, Estienne, Théo, Shin, Hyung Eun, Everson, Richard, Fabrizio, Jonathan, Fang, Longwei, Feng, Xue, Fidon, Lucas, Fridman, Naomi, Fu, Huan, Fuentes, David, Gering, David G, Gao, Yaozong, Gates, Evan, Gholami, Amir, Gong, Mingming, González-Villá, Sandra, Pauloski, J. Gregory, Guan, Yuanfang, Guo, Sheng, Gupta, Sudeep, Thakur, Meenakshi H, Maier-Hein, Klaus H., Han, Woo-Sup, He, Huiguang, Hernández-Sabaté, Aura, Herrmann, Evelyn, Himthani, Naveen, Hsu, Winston, Hsu, Cheyu, Hu, Xiaojun, Hu, Xiaobin, Hu, Yan, Hu, Yifan, Hua, Rui, Huang, Teng-Yi, Huang, Weilin, Huo, Quan, HV, Vivek, Isensee, Fabian, Islam, Mobarakol, Albiol, Francisco J., Wang, Chiatse J., Jambawalikar, Sachin, Jose, V Jeya Maria, Jian, Weijian, Jin, Peter, Jungo, Alain, Nuechterlein, Nicholas K, Kao, Po-Yu, Kermi, Adel, Keutzer, Kurt, Khened, Mahendra, Kickingereder, Philipp, King, Nik, Knapp, Haley, Knecht, Urspeter, Kohli, Lisa, Kong, Deren, Kong, Xiangmao, Koppers, Simon, Kori, Avinash, Krishnamurthi, Ganapathy, Kumar, Piyush, Kushibar, Kaisar, Lachinov, Dmitrii, Lee, Joon, Lee, Chengen, Lee, Yuehchou, Lefkovits, Szidonia, Lefkovits, Laszlo, Li, Tengfei, Li, Hongwei, Li, Wenqi, Li, Hongyang, Li, Xiaochuan, Lin, Zheng-Shen, Lin, Fengming, Liu, Chang, Liu, Boqiang, Liu, Xiang, Liu, Mingyuan, Liu, Ju, Lladó, Xavier, Luo, Lin, Iftekharuddin, Khan M., Tsai, Yuhsiang M., Ma, Jun, Ma, Kai, Mackie, Thomas, Mahmoudi, Issam, Marcinkiewicz, Michal, McKinley, Richard, Mehta, Sachin, Mehta, Raghav, Meier, Raphael, Merhof, Dorit, Meyer, Craig, Mitra, Sushmita, Moiyadi, Aliasgar, Mrukwa, Grzegorz, Monteiro, Miguel A. B., Myronenko, Andriy, Carver, Eric N, Nalepa, Jakub, Ngo, Thuyen, Niu, Chen, Oermann, Eric, Oliveira, Arlindo, Oliver, Arnau, Ourselin, Sebastien, French, Andrew P., Pound, Michael P., Pridmore, Tony P., Serrano-Rubio, Juan Pablo, Paragios, Nikos, Paschke, Brad, Pei, Linmim, Peng, Suting, Pham, Bao, Piella, Gemma, Pillai, G. N., Piraud, Marie, Popli, Anmol, Prčkovska, Vesna, Puch, Santi, Puybareau, Élodie, Qiao, Xu, Suter, Yannick R, Scott, Matthew R., Rane, Swapnil, Rebsamen, Michael, Ren, Hongliang, Ren, Xuhua, Rezaei, Mina, Lorenzo, Pablo Ribalta, Rippel, Oliver, Robert, Charlotte, Choudhury, Ahana Roy, Jackson, Aaron S., Manjunath, B. 
S., Salem, Mostafa, Salvi, Joaquim, Sánchez, Irina, Schellingerhout, Dawid, Shboul, Zeina, Shen, Haipeng, Shen, Dinggang, Shenoy, Varun, Shi, Feng, Shu, Hai, Snyder, James, Han, Il Song, Soni, Mehul, Stawiaski, Jean, Subramanian, Shashank, Sun, Li, Sun, Roger, Sun, Jiawei, Sun, Kay, Sun, Yu, Sun, Guoxia, Sun, Shuang, Park, Moo Sung, Szilagyi, Laszlo, Talbar, Sanjay, Tao, Dacheng, Khadir, Mohamed Tarek, Thakur, Siddhesh, Tochon, Guillaume, Tran, Tuan, Tseng, Kuan-Lun, Turlapov, Vadim, Tustison, Nicholas, Shankar, B. Uma, Vakalopoulou, Maria, Valverde, Sergi, Vanguri, Rami, Vasiliev, Evgeny, Vercauteren, Tom, Vidyaratne, Lasitha, Vivekanandan, Ajeet, Wang, Guotai, Wang, Qian, Wang, Weichung, Wen, Ning, Wen, Xin, Weninger, Leon, Wick, Wolfgang, Wu, Shaocheng, Wu, Qiang, Xia, Yong, Xu, Yanwu, Xu, Xiaowen, Xu, Peiyuan, Yang, Tsai-Ling, Yang, Xiaoping, Yang, Hao-Yu, Yang, Junlin, Yang, Haojin, Yao, Hongdou, Young-Moxon, Brett, Yue, Xiangyu, Zhang, Songtao, Zhang, Angela, Zhang, Kun, Zhang, Xuejie, Zhang, Lichi, Zhang, Xiaoyue, Zhao, Sicheng, Zhao, Yu, Zheng, Yefeng, Zhong, Liming, Zhou, Chenhong, Zhou, Xiaobing, Zhu, Hongtu, Zong, Weiwei, Kalpathy-Cramer, Jayashree, Farahani, Keyvan, Davatzikos, Christos, van Leemput, Koen, Menze, Bjoern
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is also a factor considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse at each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been continuously evolving and growing.
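As a small illustration of task i), segmentation quality in BraTS-style evaluations is typically summarized per tumor sub-region with overlap metrics such as the Dice coefficient (alongside distance metrics like the 95th-percentile Hausdorff distance, not shown). The label encoding below (1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor) follows the commonly used BraTS convention and is assumed here, not taken from this study.

    # Illustrative per-sub-region Dice evaluation (assumed BraTS-style label encoding).
    import numpy as np

    SUB_REGIONS = {
        "whole_tumor": (1, 2, 4),
        "tumor_core": (1, 4),
        "enhancing_tumor": (4,),
    }

    def dice(pred_mask, true_mask):
        """Dice = 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
        intersection = np.logical_and(pred_mask, true_mask).sum()
        denom = pred_mask.sum() + true_mask.sum()
        return 1.0 if denom == 0 else 2.0 * intersection / denom

    def evaluate(pred_labels, true_labels):
        """Per-sub-region Dice scores for one segmented mpMRI volume."""
        return {
            name: dice(np.isin(pred_labels, labels), np.isin(true_labels, labels))
            for name, labels in SUB_REGIONS.items()
        }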