Marcinkiewicz, Michal
ScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours
Zhu, Feiwen, Nowaczynski, Arkadiusz, Li, Rundong, Xin, Jie, Song, Yifei, Marcinkiewicz, Michal, Eryilmaz, Sukru Burc, Yang, Jun, Andersch, Michael
AlphaFold2 has been hailed as a breakthrough in protein folding. It can rapidly predict protein structures with lab-grade accuracy. However, its implementation does not include the necessary training code. OpenFold is the first trainable public reimplementation of AlphaFold. The AlphaFold training procedure is prohibitively time-consuming and gains diminishing benefits from scaling to more compute resources. In this work, we conducted a comprehensive analysis of the AlphaFold training procedure based on OpenFold and identified inefficient communications and overhead-dominated computations as the key factors that prevented the AlphaFold training from scaling effectively. We introduced ScaleFold, a systematic training method that incorporated optimizations targeting these factors specifically. ScaleFold successfully scaled the AlphaFold training to 2080 NVIDIA H100 GPUs with high resource utilization. In the MLPerf HPC v3.0 benchmark, ScaleFold finished the OpenFold benchmark in 7.51 minutes, over a $6\times$ speedup over the baseline. For training the AlphaFold model from scratch, ScaleFold completed the pretraining in 10 hours, a significant improvement over the seven days required by the original AlphaFold pretraining baseline.
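As a concrete illustration of the "overhead-dominated computations" issue mentioned in the abstract, the sketch below shows one widely used PyTorch remedy: capturing a whole training step in a CUDA graph so that many small kernel launches are replayed with a single launch. The toy model, shapes, and optimizer here are placeholder assumptions for illustration; this is not the ScaleFold implementation.

```python
# Hedged sketch: CUDA-graph capture of a training step to reduce CPU launch overhead.
# Toy model and data shapes are placeholders, not anything from ScaleFold/OpenFold.
import torch

model = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 256)).cuda()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.functional.mse_loss

# Static tensors reused for capture; new data is copied into them before each replay.
static_x = torch.randn(64, 256, device="cuda")
static_y = torch.randn(64, 256, device="cuda")

# Warm-up iterations on a side stream are required before graph capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        loss = loss_fn(model(static_x), static_y)
        loss.backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Capture one full step (forward, backward, optimizer update) into a graph.
g = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    static_loss = loss_fn(model(static_x), static_y)
    static_loss.backward()
    opt.step()

for _ in range(10):
    static_x.copy_(torch.randn(64, 256, device="cuda"))  # refresh inputs in place
    static_y.copy_(torch.randn(64, 256, device="cuda"))
    g.replay()  # one graph launch replays the entire captured step
```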
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
Bakas, Spyridon, Reyes, Mauricio, Jakab, Andras, Bauer, Stefan, Rempfler, Markus, Crimi, Alessandro, Shinohara, Russell Takeshi, Berger, Christoph, Ha, Sung Min, Rozycki, Martin, Prastawa, Marcel, Alberts, Esther, Lipkova, Jana, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan, Wiest, Roland, Kirschke, Jan, Wiestler, Benedikt, Colen, Rivka, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-Andre, Mahajan, Abhishek, Baid, Ujjwal, Kwon, Dongjin, Agarwal, Manu, Alam, Mahbubul, Albiol, Alberto, Albiol, Antonio, Alex, Varghese, Tran, Tuan Anh, Arbel, Tal, Avery, Aaron, B., Pranjal, Banerjee, Subhashis, Batchelder, Thomas, Batmanghelich, Kayhan, Battistella, Enzo, Bendszus, Martin, Benson, Eze, Bernal, Jose, Biros, George, Cabezas, Mariano, Chandra, Siddhartha, Chang, Yi-Ju, Chazalon, Joseph, Chen, Shengcong, Chen, Wei, Chen, Jefferson, Cheng, Kun, Christoph, Meinel, Chylla, Roger, Clérigues, Albert, Costa, Anthony, Cui, Xiaomeng, Dai, Zhenzhen, Dai, Lutao, Deutsch, Eric, Ding, Changxing, Dong, Chao, Dudzik, Wojciech, Estienne, Théo, Shin, Hyung Eun, Everson, Richard, Fabrizio, Jonathan, Fang, Longwei, Feng, Xue, Fidon, Lucas, Fridman, Naomi, Fu, Huan, Fuentes, David, Gering, David G, Gao, Yaozong, Gates, Evan, Gholami, Amir, Gong, Mingming, González-Villá, Sandra, Pauloski, J. Gregory, Guan, Yuanfang, Guo, Sheng, Gupta, Sudeep, Thakur, Meenakshi H, Maier-Hein, Klaus H., Han, Woo-Sup, He, Huiguang, Hernández-Sabaté, Aura, Herrmann, Evelyn, Himthani, Naveen, Hsu, Winston, Hsu, Cheyu, Hu, Xiaojun, Hu, Xiaobin, Hu, Yan, Hu, Yifan, Hua, Rui, Huang, Teng-Yi, Huang, Weilin, Huo, Quan, HV, Vivek, Isensee, Fabian, Islam, Mobarakol, Albiol, Francisco J., Wang, Chiatse J., Jambawalikar, Sachin, Jose, V Jeya Maria, Jian, Weijian, Jin, Peter, Jungo, Alain, Nuechterlein, Nicholas K, Kao, Po-Yu, Kermi, Adel, Keutzer, Kurt, Khened, Mahendra, Kickingereder, Philipp, King, Nik, Knapp, Haley, Knecht, Urspeter, Kohli, Lisa, Kong, Deren, Kong, Xiangmao, Koppers, Simon, Kori, Avinash, Krishnamurthi, Ganapathy, Kumar, Piyush, Kushibar, Kaisar, Lachinov, Dmitrii, Lee, Joon, Lee, Chengen, Lee, Yuehchou, Lefkovits, Szidonia, Lefkovits, Laszlo, Li, Tengfei, Li, Hongwei, Li, Wenqi, Li, Hongyang, Li, Xiaochuan, Lin, Zheng-Shen, Lin, Fengming, Liu, Chang, Liu, Boqiang, Liu, Xiang, Liu, Mingyuan, Liu, Ju, Lladó, Xavier, Luo, Lin, Iftekharuddin, Khan M., Tsai, Yuhsiang M., Ma, Jun, Ma, Kai, Mackie, Thomas, Mahmoudi, Issam, Marcinkiewicz, Michal, McKinley, Richard, Mehta, Sachin, Mehta, Raghav, Meier, Raphael, Merhof, Dorit, Meyer, Craig, Mitra, Sushmita, Moiyadi, Aliasgar, Mrukwa, Grzegorz, Monteiro, Miguel A. B., Myronenko, Andriy, Carver, Eric N, Nalepa, Jakub, Ngo, Thuyen, Niu, Chen, Oermann, Eric, Oliveira, Arlindo, Oliver, Arnau, Ourselin, Sebastien, French, Andrew P., Pound, Michael P., Pridmore, Tony P., Serrano-Rubio, Juan Pablo, Paragios, Nikos, Paschke, Brad, Pei, Linmim, Peng, Suting, Pham, Bao, Piella, Gemma, Pillai, G. N., Piraud, Marie, Popli, Anmol, Prčkovska, Vesna, Puch, Santi, Puybareau, Élodie, Qiao, Xu, Suter, Yannick R, Scott, Matthew R., Rane, Swapnil, Rebsamen, Michael, Ren, Hongliang, Ren, Xuhua, Rezaei, Mina, Lorenzo, Pablo Ribalta, Rippel, Oliver, Robert, Charlotte, Choudhury, Ahana Roy, Jackson, Aaron S., Manjunath, B. 
S., Salem, Mostafa, Salvi, Joaquim, Sánchez, Irina, Schellingerhout, Dawid, Shboul, Zeina, Shen, Haipeng, Shen, Dinggang, Shenoy, Varun, Shi, Feng, Shu, Hai, Snyder, James, Han, Il Song, Soni, Mehul, Stawiaski, Jean, Subramanian, Shashank, Sun, Li, Sun, Roger, Sun, Jiawei, Sun, Kay, Sun, Yu, Sun, Guoxia, Sun, Shuang, Park, Moo Sung, Szilagyi, Laszlo, Talbar, Sanjay, Tao, Dacheng, Tao, Dacheng, Khadir, Mohamed Tarek, Thakur, Siddhesh, Tochon, Guillaume, Tran, Tuan, Tseng, Kuan-Lun, Turlapov, Vadim, Tustison, Nicholas, Shankar, B. Uma, Vakalopoulou, Maria, Valverde, Sergi, Vanguri, Rami, Vasiliev, Evgeny, Vercauteren, Tom, Vidyaratne, Lasitha, Vivekanandan, Ajeet, Wang, Guotai, Wang, Qian, Wang, Weichung, Wen, Ning, Wen, Xin, Weninger, Leon, Wick, Wolfgang, Wu, Shaocheng, Wu, Qiang, Xia, Yong, Xu, Yanwu, Xu, Xiaowen, Xu, Peiyuan, Yang, Tsai-Ling, Yang, Xiaoping, Yang, Hao-Yu, Yang, Junlin, Yang, Haojin, Yao, Hongdou, Young-Moxon, Brett, Yue, Xiangyu, Zhang, Songtao, Zhang, Angela, Zhang, Kun, Zhang, Xuejie, Zhang, Lichi, Zhang, Xiaoyue, Zhao, Sicheng, Zhao, Yu, Zheng, Yefeng, Zhong, Liming, Zhou, Chenhong, Zhou, Xiaobing, Zhu, Hongtu, Zong, Weiwei, Kalpathy-Cramer, Jayashree, Farahani, Keyvan, Davatzikos, Christos, van Leemput, Koen, Menze, Bjoern
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse in each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
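For readers unfamiliar with how segmentations of the glioma sub-regions are typically scored, the sketch below computes the Dice overlap for the composite regions commonly reported in BraTS (whole tumor, tumor core, enhancing tumor). The label encoding (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor) is an assumption based on common BraTS conventions; this is not the organizers' official evaluation code.

```python
# Hedged sketch: Dice overlap per BraTS-style composite region.
# Label values and region groupings are assumed conventions, not official code.
import numpy as np

def dice(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice = 2 * |A and B| / (|A| + |B|); returns 1.0 if both masks are empty."""
    intersection = np.logical_and(pred_mask, ref_mask).sum()
    denom = pred_mask.sum() + ref_mask.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def brats_region_dice(pred: np.ndarray, ref: np.ndarray) -> dict:
    # Composite regions built from the assumed label encoding.
    regions = {
        "whole_tumor": [1, 2, 4],
        "tumor_core": [1, 4],
        "enhancing_tumor": [4],
    }
    return {name: dice(np.isin(pred, labels), np.isin(ref, labels))
            for name, labels in regions.items()}

# Example on toy label volumes:
pred = np.random.choice([0, 1, 2, 4], size=(8, 8, 8))
ref = np.random.choice([0, 1, 2, 4], size=(8, 8, 8))
print(brats_region_dice(pred, ref))
```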
Band Selection from Hyperspectral Images Using Attention-based Convolutional Neural Networks
Lorenzo, Pablo Ribalta, Tulczyjew, Lukasz, Marcinkiewicz, Michal, Nalepa, Jakub
Hyperspectral data's high dimensionality is an important challenge towards its accurate segmentation, efficient analysis, transfer, and storage. This paper introduces new attention-based convolutional neural networks for selecting bands from hyperspectral images. The proposed approach reuses convolutional activations at different depths, identifying the most informative regions of the spectrum with the help of gating mechanisms. Our attention techniques are modular and easy to implement, and they can be seamlessly trained end-to-end using gradient descent. Our rigorous experiments showed that deep models equipped with the attention mechanism deliver high-quality classification and repeatedly identify significant bands in the training data, permitting the creation of refined and extremely compact sets that retain the most meaningful features.
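To make the idea of attention-driven band selection concrete, the hedged sketch below gates each spectral band of an input cube with a learned score and then keeps the highest-scoring bands. The module, band count, and selection threshold are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch: a learned attention gate over spectral bands (illustrative only).
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Scores each spectral band of an input of shape (batch, bands, H, W)."""
    def __init__(self, num_bands: int, hidden: int = 32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(num_bands, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_bands),
            nn.Sigmoid(),  # per-band weights in (0, 1)
        )

    def forward(self, x):
        pooled = x.mean(dim=(2, 3))          # (batch, bands): spatial average
        weights = self.gate(pooled)          # (batch, bands): attention scores
        return x * weights[:, :, None, None], weights

# Usage: average the attention scores over data and keep the highest-scoring bands
# as a compact subset (the 0.5 threshold and 103-band input are placeholders).
if __name__ == "__main__":
    attn = BandAttention(num_bands=103)       # e.g. a 103-band hyperspectral scene
    cube = torch.randn(4, 103, 32, 32)        # toy batch of hyperspectral patches
    gated, scores = attn(cube)
    selected = (scores.mean(dim=0) > 0.5).nonzero().squeeze(1)
    print(f"selected {selected.numel()} of 103 bands")
```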