Subramanian, Shashank
Comprehensive Performance Modeling and System Design Insights for Foundation Models
Subramanian, Shashank, Rrapaj, Ermal, Harrington, Peter, Chheda, Smeet, Farrell, Steven, Austin, Brian, Williams, Samuel, Wright, Nicholas, Bhimji, Wahid
Generative AI, in particular large transformer models, is increasingly driving HPC system design in science and industry. We analyze the performance characteristics of such transformer models and discuss their sensitivity to the transformer type, parallelization strategy, and HPC system features (accelerators and interconnects). We utilize a performance model that allows us to explore this complex design space and highlight its key components. We find that different transformer types demand different parallelism and system characteristics at different training regimes. Large Language Models perform well with 3D parallelism and amplify network needs only at pre-training scales, with reduced dependence on accelerator capacity and bandwidth. On the other hand, long-sequence transformers, representative of scientific foundation models, require 4D parallelism and place a more uniform demand on network and accelerator capacity. Our analysis emphasizes the need for closer performance modeling of different transformer types keeping system features in mind, and demonstrates a path towards this. Our code is available as open source.
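The kind of design-space exploration described in this abstract can be illustrated with a toy first-order estimate. The sketch below is only a simplified illustration, not the paper's open-source performance model: it estimates per-GPU model state and per-step data-parallel gradient traffic for a GPT-style model under tensor/pipeline/data (3D) parallelism, and all constants, formulas, and function names are assumptions made for this example.

```python
# Toy first-order model of per-GPU memory and gradient all-reduce traffic under
# 3D (tensor/pipeline/data) parallelism. Illustrative assumptions only; this is
# not the paper's released performance model.

def transformer_params(n_layers: int, d_model: int, vocab: int) -> int:
    """Rough parameter count for a GPT-style decoder (attention + MLP + embeddings)."""
    per_layer = 12 * d_model ** 2          # QKV/output projections plus the 4x MLP, biases ignored
    return n_layers * per_layer + vocab * d_model

def per_gpu_estimates(params: int, tp: int, pp: int, dp: int,
                      bytes_per_param: int = 2, optimizer_bytes: int = 12):
    """Return (model+optimizer state GiB per GPU, gradient all-reduce GiB per step per GPU)."""
    shard = params / (tp * pp)             # weights split across tensor and pipeline groups
    mem_gib = shard * (bytes_per_param + optimizer_bytes) / 2**30   # fp16 weights + fp32 master/Adam moments
    # A ring all-reduce moves roughly 2*(dp-1)/dp of the local gradient shard per step.
    comm_gib = shard * bytes_per_param * 2 * (dp - 1) / dp / 2**30
    return mem_gib, comm_gib

if __name__ == "__main__":
    p = transformer_params(n_layers=96, d_model=12288, vocab=50304)   # ~175B-parameter scale
    for tp, pp, dp in [(8, 8, 16), (8, 16, 8), (4, 8, 32)]:
        mem, comm = per_gpu_estimates(p, tp, pp, dp)
        print(f"TP={tp} PP={pp} DP={dp}: ~{mem:.1f} GiB state, ~{comm:.1f} GiB gradient traffic/step")
```

Even this crude estimate shows the trade-off the abstract points to: increasing the tensor/pipeline degrees relieves accelerator capacity but shifts pressure onto the interconnect.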
Analyzing and Exploring Training Recipes for Large-Scale Transformer-Based Weather Prediction
Willard, Jared D., Harrington, Peter, Subramanian, Shashank, Mahesh, Ankur, O'Brien, Travis A., Collins, William D.
The rapid rise of deep learning (DL) in numerical weather prediction (NWP) has led to a proliferation of models which forecast atmospheric variables with skill comparable or superior to traditional physics-based NWP. However, among these leading DL models, there is wide variance in both the training settings and the architectures used. Further, the lack of thorough ablation studies makes it hard to discern which components are most critical to success. In this work, we show that it is possible to attain high forecast skill even with relatively off-the-shelf architectures, simple training procedures, and moderate compute budgets. Specifically, we train a minimally modified SwinV2 transformer on ERA5 data and find that it attains superior forecast skill when compared against IFS. We present ablations on key aspects of the training pipeline, exploring different loss functions, model sizes and depths, and multi-step fine-tuning to investigate their effect. We also examine model performance with metrics beyond the typical ACC and RMSE, and investigate how performance scales with model size.
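As one concrete example of the recipe components mentioned above, multi-step fine-tuning is commonly implemented by unrolling the model autoregressively for a few steps and summing the per-step losses, so gradients see the accumulated rollout error. The PyTorch-style sketch below is a generic illustration under assumed names and shapes (a `model` mapping one atmospheric state to the next, trajectories of shape [batch, time, channels, lat, lon]); it is not the paper's training code.

```python
import torch

def multistep_loss(model: torch.nn.Module, seq: torch.Tensor, n_steps: int = 2) -> torch.Tensor:
    """Unroll the model autoregressively for n_steps and accumulate an L2 loss.

    seq: [batch, time, channels, lat, lon] ground-truth trajectory with time >= n_steps + 1.
    Gradients flow through the whole rollout, which is what multi-step fine-tuning relies on.
    """
    state = seq[:, 0]                    # initial condition
    loss = 0.0
    for t in range(1, n_steps + 1):
        state = model(state)             # predict the next state from the previous prediction
        loss = loss + torch.mean((state - seq[:, t]) ** 2)
    return loss / n_steps
```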
Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning
Chen, Wuyang, Song, Jialin, Ren, Pu, Subramanian, Shashank, Morozov, Dmitriy, Mahoney, Michael W.
Recent years have witnessed the promise of coupling machine learning methods with physical, domain-specific insight to solve scientific problems based on partial differential equations (PDEs). However, being data-intensive, these methods still require large amounts of PDE data. This reintroduces the need for expensive numerical PDE solutions, partially undermining the original goal of avoiding these expensive simulations. In this work, seeking data efficiency, we design unsupervised pretraining and in-context learning methods for PDE operator learning. To reduce the need for training data with simulated solutions, we pretrain neural operators on unlabeled PDE data using reconstruction-based proxy tasks. To improve out-of-distribution performance, we further enable neural operators to flexibly leverage in-context learning methods, without incurring extra training costs or design changes. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models.
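A reconstruction-based proxy task of the kind described above can be sketched as follows: randomly hide a fraction of each unlabeled PDE snapshot and train the operator to fill it back in, so no simulated input-solution pairs are needed. The code below is a hedged illustration with assumed tensor shapes and an assumed `neural_operator` module; it is not the authors' released implementation.

```python
import torch

def masked_reconstruction_step(neural_operator: torch.nn.Module,
                               snapshots: torch.Tensor,
                               mask_ratio: float = 0.5) -> torch.Tensor:
    """One unsupervised pretraining step: corrupt a random fraction of grid points
    and ask the operator to reconstruct the full field from the masked input.

    snapshots: [batch, channels, nx, ny] unlabeled PDE data (no solution labels).
    """
    mask = (torch.rand_like(snapshots[:, :1]) > mask_ratio).float()   # 1 = visible, 0 = hidden
    corrupted = snapshots * mask
    recon = neural_operator(corrupted)
    # Penalize errors only on the hidden points, so the model must infer them from context.
    return torch.sum(((recon - snapshots) ** 2) * (1 - mask)) / (1 - mask).sum().clamp(min=1)
```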
Towards Stability of Autoregressive Neural Operators
McCabe, Michael, Harrington, Peter, Subramanian, Shashank, Brown, Jed
Neural operators have proven to be a promising approach for modeling spatiotemporal systems in the physical sciences. However, training these models for large systems can be quite challenging as they incur significant computational and memory expense; as a result, models are often forced to rely on autoregressive time-stepping of the neural network to predict future temporal states. While this is effective in managing costs, it can lead to uncontrolled error growth over time and eventual instability. We analyze the sources of this autoregressive error growth using prototypical neural operator models for physical systems and explore ways to mitigate it. We introduce architectural and application-specific improvements that allow for careful control of instability-inducing operations within these models without inflating the compute/memory expense. We present results on several scientific systems, including Navier-Stokes fluid flow, rotating shallow water, and a high-resolution global weather forecasting system. We demonstrate that applying our design principles to neural operators leads to significantly lower errors for long-term forecasts, as well as longer time horizons without qualitative signs of divergence, compared to the original models for these systems. We open-source our code for reproducibility at https://github.com/mikemccabe210/stabilizing_neural_operators.
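The autoregressive error growth discussed above can be measured by rolling a trained operator forward from an initial state, feeding each prediction back in as the next input, and comparing against the reference trajectory at every step. The snippet below is a minimal sketch under assumed names (a generic `model`, a reference `trajectory` tensor), not the released code linked in the abstract.

```python
import torch

@torch.no_grad()
def rollout_errors(model: torch.nn.Module, trajectory: torch.Tensor) -> list:
    """Autoregressively step the model and record the relative L2 error at each step.

    trajectory: [time, channels, nx, ny] reference states. Errors that grow without
    bound over this list are the instability the paper targets.
    """
    state = trajectory[0:1]
    errors = []
    for t in range(1, trajectory.shape[0]):
        state = model(state)                              # feed the prediction back in
        ref = trajectory[t:t + 1]
        errors.append((torch.norm(state - ref) / torch.norm(ref)).item())
    return errors
```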
Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior
Subramanian, Shashank, Harrington, Peter, Keutzer, Kurt, Bhimji, Wahid, Morozov, Dmitriy, Mahoney, Michael, Gholami, Amir
Pre-trained machine learning (ML) models have shown great performance for a wide range of applications, in particular in natural language processing (NLP) and computer vision (CV). Here, we study how pre-training could be used for scientific machine learning (SciML) applications, specifically in the context of transfer learning. We study the transfer behavior of these models as (i) the pre-trained model size is scaled, (ii) the downstream training dataset size is scaled, (iii) the physics parameters are systematically pushed out of distribution, and (iv) a single model pre-trained on a mixture of different physics problems is adapted to various downstream applications. We find that, when fine-tuned appropriately, transfer learning can help reach desired accuracy levels with orders of magnitude fewer downstream examples (across different tasks that can even be out-of-distribution) than training from scratch, with consistent behavior across a wide range of downstream examples. We also find that fine-tuning these models yields larger performance gains as model size increases, compared to training from scratch on new downstream tasks. These results hold for a broad range of PDE learning tasks. All in all, our results demonstrate the potential of the "pre-train and fine-tune" paradigm for SciML problems and point a path towards building SciML foundation models. We open-source our code for reproducibility.
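The "pre-train and fine-tune" comparison described above boils down to initializing from pre-trained weights and training on a small downstream dataset versus training the same architecture from random initialization on that dataset. The sketch below is a generic, hedged illustration of that comparison; the training loop, loaders, and hyperparameters are placeholders and not the paper's setup.

```python
import copy
import torch

def train(model: torch.nn.Module, loader, epochs: int = 10, lr: float = 1e-4) -> torch.nn.Module:
    """Generic supervised loop on (input_field, solution_field) pairs."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = torch.mean((model(x) - y) ** 2)
            loss.backward()
            opt.step()
    return model

def finetune_vs_scratch(pretrained: torch.nn.Module, fresh: torch.nn.Module, small_downstream_loader):
    """Fine-tune a copy of the pre-trained operator and train an identical fresh model
    on the same small downstream dataset, so downstream accuracies can be compared."""
    finetuned = train(copy.deepcopy(pretrained), small_downstream_loader, lr=1e-5)
    scratch = train(fresh, small_downstream_loader, lr=1e-4)
    return finetuned, scratch
```

Repeating this comparison while sweeping the downstream dataset size is what produces the data-efficiency curves the abstract refers to.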
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
Bakas, Spyridon, Reyes, Mauricio, Jakab, Andras, Bauer, Stefan, Rempfler, Markus, Crimi, Alessandro, Shinohara, Russell Takeshi, Berger, Christoph, Ha, Sung Min, Rozycki, Martin, Prastawa, Marcel, Alberts, Esther, Lipkova, Jana, Freymann, John, Kirby, Justin, Bilello, Michel, Fathallah-Shaykh, Hassan, Wiest, Roland, Kirschke, Jan, Wiestler, Benedikt, Colen, Rivka, Kotrotsou, Aikaterini, Lamontagne, Pamela, Marcus, Daniel, Milchenko, Mikhail, Nazeri, Arash, Weber, Marc-Andre, Mahajan, Abhishek, Baid, Ujjwal, Kwon, Dongjin, Agarwal, Manu, Alam, Mahbubul, Albiol, Alberto, Albiol, Antonio, Alex, Varghese, Tran, Tuan Anh, Arbel, Tal, Avery, Aaron, B., Pranjal, Banerjee, Subhashis, Batchelder, Thomas, Batmanghelich, Kayhan, Battistella, Enzo, Bendszus, Martin, Benson, Eze, Bernal, Jose, Biros, George, Cabezas, Mariano, Chandra, Siddhartha, Chang, Yi-Ju, Chazalon, Joseph, Chen, Shengcong, Chen, Wei, Chen, Jefferson, Cheng, Kun, Christoph, Meinel, Chylla, Roger, Clérigues, Albert, Costa, Anthony, Cui, Xiaomeng, Dai, Zhenzhen, Dai, Lutao, Deutsch, Eric, Ding, Changxing, Dong, Chao, Dudzik, Wojciech, Estienne, Théo, Shin, Hyung Eun, Everson, Richard, Fabrizio, Jonathan, Fang, Longwei, Feng, Xue, Fidon, Lucas, Fridman, Naomi, Fu, Huan, Fuentes, David, Gering, David G, Gao, Yaozong, Gates, Evan, Gholami, Amir, Gong, Mingming, González-Villá, Sandra, Pauloski, J. Gregory, Guan, Yuanfang, Guo, Sheng, Gupta, Sudeep, Thakur, Meenakshi H, Maier-Hein, Klaus H., Han, Woo-Sup, He, Huiguang, Hernández-Sabaté, Aura, Herrmann, Evelyn, Himthani, Naveen, Hsu, Winston, Hsu, Cheyu, Hu, Xiaojun, Hu, Xiaobin, Hu, Yan, Hu, Yifan, Hua, Rui, Huang, Teng-Yi, Huang, Weilin, Huo, Quan, HV, Vivek, Isensee, Fabian, Islam, Mobarakol, Albiol, Francisco J., Wang, Chiatse J., Jambawalikar, Sachin, Jose, V Jeya Maria, Jian, Weijian, Jin, Peter, Jungo, Alain, Nuechterlein, Nicholas K, Kao, Po-Yu, Kermi, Adel, Keutzer, Kurt, Khened, Mahendra, Kickingereder, Philipp, King, Nik, Knapp, Haley, Knecht, Urspeter, Kohli, Lisa, Kong, Deren, Kong, Xiangmao, Koppers, Simon, Kori, Avinash, Krishnamurthi, Ganapathy, Kumar, Piyush, Kushibar, Kaisar, Lachinov, Dmitrii, Lee, Joon, Lee, Chengen, Lee, Yuehchou, Lefkovits, Szidonia, Lefkovits, Laszlo, Li, Tengfei, Li, Hongwei, Li, Wenqi, Li, Hongyang, Li, Xiaochuan, Lin, Zheng-Shen, Lin, Fengming, Liu, Chang, Liu, Boqiang, Liu, Xiang, Liu, Mingyuan, Liu, Ju, Lladó, Xavier, Luo, Lin, Iftekharuddin, Khan M., Tsai, Yuhsiang M., Ma, Jun, Ma, Kai, Mackie, Thomas, Mahmoudi, Issam, Marcinkiewicz, Michal, McKinley, Richard, Mehta, Sachin, Mehta, Raghav, Meier, Raphael, Merhof, Dorit, Meyer, Craig, Mitra, Sushmita, Moiyadi, Aliasgar, Mrukwa, Grzegorz, Monteiro, Miguel A. B., Myronenko, Andriy, Carver, Eric N, Nalepa, Jakub, Ngo, Thuyen, Niu, Chen, Oermann, Eric, Oliveira, Arlindo, Oliver, Arnau, Ourselin, Sebastien, French, Andrew P., Pound, Michael P., Pridmore, Tony P., Serrano-Rubio, Juan Pablo, Paragios, Nikos, Paschke, Brad, Pei, Linmim, Peng, Suting, Pham, Bao, Piella, Gemma, Pillai, G. N., Piraud, Marie, Popli, Anmol, Prčkovska, Vesna, Puch, Santi, Puybareau, Élodie, Qiao, Xu, Suter, Yannick R, Scott, Matthew R., Rane, Swapnil, Rebsamen, Michael, Ren, Hongliang, Ren, Xuhua, Rezaei, Mina, Lorenzo, Pablo Ribalta, Rippel, Oliver, Robert, Charlotte, Choudhury, Ahana Roy, Jackson, Aaron S., Manjunath, B. 
S., Salem, Mostafa, Salvi, Joaquim, Sánchez, Irina, Schellingerhout, Dawid, Shboul, Zeina, Shen, Haipeng, Shen, Dinggang, Shenoy, Varun, Shi, Feng, Shu, Hai, Snyder, James, Han, Il Song, Soni, Mehul, Stawiaski, Jean, Subramanian, Shashank, Sun, Li, Sun, Roger, Sun, Jiawei, Sun, Kay, Sun, Yu, Sun, Guoxia, Sun, Shuang, Park, Moo Sung, Szilagyi, Laszlo, Talbar, Sanjay, Tao, Dacheng, Tao, Dacheng, Khadir, Mohamed Tarek, Thakur, Siddhesh, Tochon, Guillaume, Tran, Tuan, Tseng, Kuan-Lun, Turlapov, Vadim, Tustison, Nicholas, Shankar, B. Uma, Vakalopoulou, Maria, Valverde, Sergi, Vanguri, Rami, Vasiliev, Evgeny, Vercauteren, Tom, Vidyaratne, Lasitha, Vivekanandan, Ajeet, Wang, Guotai, Wang, Qian, Wang, Weichung, Wen, Ning, Wen, Xin, Weninger, Leon, Wick, Wolfgang, Wu, Shaocheng, Wu, Qiang, Xia, Yong, Xu, Yanwu, Xu, Xiaowen, Xu, Peiyuan, Yang, Tsai-Ling, Yang, Xiaoping, Yang, Hao-Yu, Yang, Junlin, Yang, Haojin, Yao, Hongdou, Young-Moxon, Brett, Yue, Xiangyu, Zhang, Songtao, Zhang, Angela, Zhang, Kun, Zhang, Xuejie, Zhang, Lichi, Zhang, Xiaoyue, Zhao, Sicheng, Zhao, Yu, Zheng, Yefeng, Zhong, Liming, Zhou, Chenhong, Zhou, Xiaobing, Zhu, Hongtu, Zong, Weiwei, Kalpathy-Cramer, Jayashree, Farahani, Keyvan, Davatzikos, Christos, van Leemput, Koen, Menze, Bjoern
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis, and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, and active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is also a factor considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been continuously evolving and growing.
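Segmentation quality in BraTS is reported per tumor sub-region, typically with the Dice coefficient (alongside distance-based metrics such as the Hausdorff distance). The function below is a standard Dice computation on binary label masks, included here only as a reference for the kind of evaluation the challenge performs; it is not the challenge's official evaluation code.

```python
import numpy as np

def dice_score(prediction: np.ndarray, reference: np.ndarray) -> float:
    """Dice overlap between two binary masks (e.g., whole tumor, tumor core,
    enhancing tumor), ranging from 0 (no overlap) to 1 (perfect agreement)."""
    pred = prediction.astype(bool)
    ref = reference.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0                       # both masks empty: conventionally a perfect score
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```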
A Novel Domain Adaptation Framework for Medical Image Segmentation
Gholami, Amir, Subramanian, Shashank, Shenoy, Varun, Himthani, Naveen, Yue, Xiangyu, Zhao, Sicheng, Jin, Peter, Biros, George, Keutzer, Kurt
We propose a segmentation framework that uses deep neural networks and introduce two innovations. First, we describe a biophysics-based domain adaptation method. Second, we propose an automatic method to segment white matter, gray matter, and cerebrospinal fluid, in addition to tumorous tissue. Regarding our first innovation, we use a domain adaptation framework that combines a novel multispecies biophysical tumor growth model with a generative adversarial model to create realistic-looking synthetic multimodal MR images with known segmentation. Regarding our second innovation, we propose an automatic approach to enrich available segmentation data by computing the segmentation for healthy tissues. This segmentation, which is done using diffeomorphic image registration between the BraTS training data and a set of prelabeled atlases, provides more information for training and reduces the class imbalance problem. Our overall approach is not specific to any particular neural network and can be used in conjunction with existing solutions. We demonstrate the performance improvement using a 2D U-Net for the BraTS'18 segmentation challenge. Our biophysics-based domain adaptation achieves better results than the existing state-of-the-art GAN model used to create synthetic data for training.
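From the training pipeline's point of view, the domain adaptation above amounts to augmenting the real BraTS training set with model-generated synthetic scans whose labels are known by construction. The snippet below sketches only that data-mixing step with assumed PyTorch dataset objects; the biophysical tumor-growth simulation and GAN refinement themselves are outside its scope.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def build_training_loader(real_dataset, synthetic_dataset, batch_size: int = 8) -> DataLoader:
    """Combine real mpMRI cases (manual labels) with synthetic, biophysics/GAN-generated
    cases (labels known by construction) into one shuffled training stream."""
    combined = ConcatDataset([real_dataset, synthetic_dataset])
    return DataLoader(combined, batch_size=batch_size, shuffle=True, num_workers=4)
```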