
Collaborating Authors

Dash, Sajal


Scalable Artificial Intelligence for Science: Perspectives, Methods and Exemplars

arXiv.org Artificial Intelligence

In a post-ChatGPT world, this paper explores the potential of leveraging scalable artificial intelligence for scientific discovery. We propose that scaling up artificial intelligence on high-performance computing platforms is essential to address such complex problems. This perspective focuses on scientific use cases like cognitive simulations, large language models for scientific inquiry, medical image analysis, and physics-informed approaches. The study outlines the methodologies needed to address such challenges at scale on supercomputers or the cloud and provides exemplars of such approaches applied to solve a variety of scientific problems. In light of ChatGPT's growing popularity, the transformative potential of AI in science becomes increasingly evident. Although a number of recent articles highlight the transformative power of AI in science [1, 2, 3], few provide specifics on how to implement such methods at scale on supercomputers. Using ChatGPT as an archetype, we argue that the success of such complex AI models results from two primary advancements: (1) the development of the transformer architecture, and (2) the ability to train on vast amounts of internet-scale data. This process represents a broader trend within the field of AI, where combining massive amounts of training data with large-scale computational resources becomes the foundation of scientific breakthroughs. Several examples underscore the integral role of large-scale computational resources and colossal amounts of data in achieving scientific breakthroughs. For instance, Khan et al. [4] used AI and large-scale computing to build advanced models of black hole mergers, leveraging a dataset of 14 million waveforms on the Summit supercomputer. Riley et al. [5] made significant progress toward understanding the physics of stratified fluid turbulence by modeling a Prandtl number of seven, which represents ocean water at 20°C. These simulations used four trillion grid points and required petabytes of storage [6].
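To make the notion of "scaling up AI on high-performance computing platforms" concrete, below is a minimal sketch of data-parallel training with PyTorch's DistributedDataParallel, the basic pattern such work builds on. The toy model, synthetic data, and launch details are illustrative assumptions, not taken from the paper.

# Minimal sketch: data-parallel training on a GPU cluster with PyTorch
# DistributedDataParallel (DDP). A launcher such as torchrun or srun is
# assumed to start one process per GPU and set RANK/WORLD_SIZE/LOCAL_RANK.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")   # NCCL for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real science workload would go here.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 1024, device=local_rank)  # synthetic batch
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()   # DDP all-reduces gradients across ranks here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()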


Optimizing Distributed Training on Frontier for Large Language Models

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated remarkable success as foundational models, benefiting various downstream applications through fine-tuning. Recent studies of loss-scaling laws have demonstrated the superior performance of larger LLMs compared to their smaller counterparts. Nevertheless, training LLMs with billions of parameters poses significant challenges and requires considerable computational resources. For example, training a one-trillion-parameter GPT-style model on 20 trillion tokens requires a staggering 120 million exaflops of computation. This research explores efficient distributed training strategies to extract this computation from Frontier, the world's first exascale supercomputer dedicated to open science. We enable and investigate various model- and data-parallel training techniques, such as tensor parallelism, pipeline parallelism, and sharded data parallelism, to facilitate training a trillion-parameter model on Frontier. We empirically assess these techniques and their associated parameters to determine their impact on memory footprint, communication latency, and GPU computational efficiency. We analyze the complex interplay among these techniques and find a strategy that combines them to achieve high throughput through hyperparameter tuning. Through this empirical analysis and tuning, we identified efficient strategies for training LLMs of varying sizes: for models with 22 billion, 175 billion, and 1 trillion parameters, we achieved GPU throughputs of 38.38%, 36.14%, and 31.96%, respectively. For the 175-billion-parameter and the 1-trillion-parameter models, we achieved 100% weak-scaling efficiency on 1024 and 3072 MI250X GPUs, respectively, and strong-scaling efficiencies of 89% and 87%.
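The 120-million-exaflop figure can be reproduced with the common rule of thumb that dense transformer training costs roughly 6 FLOPs per parameter per token (a standard approximation, not necessarily the paper's exact accounting); a quick sketch of the arithmetic:

# Back-of-the-envelope training cost using the ~6*N*D heuristic for
# dense transformers (forward + backward pass combined).
params = 1e12                     # N: 1 trillion parameters
tokens = 20e12                    # D: 20 trillion training tokens
flops = 6 * params * tokens       # ~1.2e26 FLOPs total
exaflops = flops / 1e18           # 1 exaflop = 1e18 FLOPs
print(f"{exaflops:.3g} exaflops") # -> 1.2e+08, i.e. 120 million exaflops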


DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies

arXiv.org Artificial Intelligence

In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences. This could herald a new era of scientific exploration, bringing significant advancements across sectors from drug development to renewable energy. To answer this call, we present the DeepSpeed4Science initiative (deepspeed4science.ai), which aims to build unique capabilities through AI system technology innovations to help domain experts unlock today's biggest science mysteries. By leveraging DeepSpeed's current technology pillars (training, inference, and compression) as base technology enablers, DeepSpeed4Science will create a new set of AI system technologies tailored to accelerating scientific discoveries by addressing their unique complexity, beyond the common technical approaches used for accelerating generic large language models (LLMs). In this paper, we showcase the early progress we made with DeepSpeed4Science in addressing two of the critical system challenges in structural biology research.
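For readers unfamiliar with the DeepSpeed training pillar the abstract refers to, below is a minimal sketch of wrapping a model with DeepSpeed's ZeRO-sharded training engine. The toy model and configuration values are illustrative assumptions, not the initiative's actual science workloads.

# Minimal sketch: training a toy model with DeepSpeed and ZeRO sharding.
import torch
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},            # mixed-precision training
    "zero_optimization": {"stage": 3},    # shard params, grads, optimizer state
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler).
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
loss = engine(x).pow(2).mean()  # forward pass through the DeepSpeed engine
engine.backward(loss)           # handles sharded gradient communication
engine.step()                   # optimizer step over sharded state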