Characterizing GPU Resilience and Impact on AI/HPC Systems
Cui, Shengkun, Patke, Archit, Chen, Ziheng, Ranjan, Aditya, Nguyen, Hung, Cao, Phuong, Jha, Saurabh, Bode, Brett, Bauer, Gregory, Narayanaswami, Chandra, Sow, Daby, Di Martino, Catello, Kalbarczyk, Zbigniew T., Iyer, Ravishankar K.
In this study, we characterize GPU failures in Delta, a current large-scale AI system with over 600 petaflops of peak compute throughput. The system comprises GPU and non-GPU nodes equipped with modern AI accelerators, including NVIDIA A40, A100, and H100 GPUs. The study draws on two and a half years of GPU error data. We evaluate the resilience of GPU hardware components, determining how vulnerable different components are to failure and how their failures affect GPU and node availability. We identify the key error propagation paths across GPU hardware, the GPU interconnect (NVLink), and GPU memory. Finally, we evaluate the impact of the observed GPU errors on user jobs. Our key findings are: (i) Contrary to common belief, GPU memory is over 30x more reliable than GPU hardware in terms of MTBE (mean time between errors). (ii) The newly introduced GSP (GPU System Processor) is the most vulnerable GPU hardware component. (iii) NVLink errors did not always lead to user job failure, which we attribute to the underlying error detection and retry mechanisms. (iv) We show multiple examples of hardware errors originating in key GPU hardware components and leading to application failure. (v) Using emulation, we project the impact of GPU node availability at larger scales and find that significant overprovisioning, between 5% and 20%, would be necessary to handle GPU failures; improving GPU availability to 99.9% would reduce the required overprovisioning by 4x.
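The MTBE comparison and the overprovisioning projection above follow simple arithmetic; the sketch below illustrates it with invented numbers (the error counts and fleet size are placeholders, not Delta data), under the assumption that overprovisioning scales inversely with node availability.

```python
# Illustration of the MTBE and overprovisioning arithmetic; all numbers
# below are hypothetical, not measurements from the Delta system.

def mtbe_hours(total_component_hours: float, num_errors: int) -> float:
    """Mean time between errors: observed component-hours per error."""
    return total_component_hours / num_errors

def overprovision_fraction(availability: float) -> float:
    """Extra capacity needed so expected available nodes meet demand.
    If each node is up with probability `availability`, provisioning
    N/availability nodes yields N available nodes in expectation."""
    return 1.0 / availability - 1.0

# Hypothetical fleet: 1000 GPUs observed for 2.5 years (~21,900 h each).
hw = mtbe_hours(1000 * 21_900, 5_000)   # invented hardware error count
mem = mtbe_hours(1000 * 21_900, 150)    # invented memory error count
print(f"hardware MTBE: {hw:.0f} h, memory MTBE: {mem:.0f} h, "
      f"memory/hardware ratio: {mem / hw:.1f}x")

# Improving availability shrinks the required overprovisioning:
for a in (0.95, 0.999):
    print(f"availability {a:.3f} -> overprovision "
          f"{overprovision_fraction(a) * 100:.2f}%")
```

With these made-up counts, memory comes out roughly 33x more reliable than hardware, mirroring the shape (not the values) of finding (i).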
Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction
Qiu, Haoran, Mao, Weichao, Patke, Archit, Cui, Shengkun, Jha, Saurabh, Wang, Chen, Franke, Hubertus, Kalbarczyk, Zbigniew T., Başar, Tamer, Iyer, Ravishankar K.
Large language models (LLMs) have been driving a new wave of interactive AI applications across numerous domains. However, efficiently serving LLM inference requests is challenging because their execution times are unpredictable, owing to the autoregressive nature of generative models. Existing LLM serving systems use first-come-first-serve (FCFS) scheduling and therefore suffer from head-of-line blocking. To address the non-deterministic nature of LLMs and enable efficient interactive LLM serving, we present a speculative shortest-job-first (SSJF) scheduler that uses a lightweight proxy model to predict LLM output sequence lengths. Our open-source SSJF implementation does not require changes to memory management or batching strategies. Evaluations on real-world datasets and production workload traces show that SSJF reduces average job completion times by 30.5-39.6% and increases throughput by 2.2-3.6x compared to FCFS schedulers, across no-batching, dynamic-batching, and continuous-batching settings.
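The core idea of SSJF can be sketched as ordering pending requests by predicted output length instead of arrival order. The snippet below is a minimal illustration, with a word-count `predict_len` standing in for the paper's proxy model, which is not reproduced here.

```python
import heapq

# Minimal sketch of speculative shortest-job-first (SSJF) ordering.
# `predict_len` is a stand-in predictor, not the paper's proxy model.

def ssjf_order(requests, predict_len):
    """Return requests ordered shortest-predicted-job-first.

    Ties in predicted length are broken by arrival order, which keeps
    the schedule stable for equally sized jobs.
    """
    heap = [(predict_len(r), i, r) for i, r in enumerate(requests)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, r = heapq.heappop(heap)
        order.append(r)
    return order

# Toy usage: predict output length by prompt word count (an assumption).
reqs = ["summarize this long document please", "hi", "translate one sentence"]
for r in ssjf_order(reqs, predict_len=lambda r: len(r.split())):
    print(r)
# Short jobs run first, so long requests no longer block the head of line.
```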
BayesPerf: Minimizing Performance Monitoring Errors Using Bayesian Statistics
Banerjee, Subho S., Jha, Saurabh, Kalbarczyk, Zbigniew T., Iyer, Ravishankar K.
Hardware performance counters (HPCs) that measure low-level architectural and microarchitectural events provide dynamic contextual information about the state of the system. However, HPC measurements are error-prone due to non-determinism (e.g., undercounting due to event multiplexing, or OS interrupt-handling behavior). In this paper, we present BayesPerf, a system for quantifying uncertainty in HPC measurements by using a domain-driven Bayesian model that captures microarchitectural relationships between HPCs to jointly infer their values as probability distributions. We provide the design and implementation of an accelerator that allows for low-latency and low-power inference of the BayesPerf model for x86 and ppc64 CPUs. BayesPerf reduces the average error in HPC measurements from 40.1% to 7.6% when events are being multiplexed. The value of BayesPerf in real-time decision-making is illustrated with a simple example of scheduling PCIe transfers.
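The multiplexing error that BayesPerf targets comes from the standard linear extrapolation of partially scheduled counters. The sketch below shows that baseline extrapolation (in the style of Linux perf's enabled/running scaling); the event names and numbers are illustrative, and the Bayesian model itself is only described in comments, not implemented.

```python
# Why multiplexed HPC readings are error-prone: when events share a
# physical counter slot, each is scheduled only part of the time and
# the raw count is scaled up linearly (perf-style enabled/running).

def extrapolate(raw_count: int, time_enabled: float, time_running: float) -> float:
    """Linear extrapolation of a multiplexed counter reading."""
    return raw_count * time_enabled / time_running

# Two events share one slot, each scheduled for 50% of the window.
cycles = extrapolate(1_000_000, time_enabled=1.0, time_running=0.5)
print(cycles)  # 2000000.0 -- assumes a uniform event rate, often untrue

# BayesPerf instead treats each HPC as a random variable and uses known
# microarchitectural identities (e.g., total instructions decompose into
# load, store, and other instructions) to jointly constrain the
# estimates, yielding distributions rather than a single scaled point.
```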
ML-based Fault Injection for Autonomous Vehicles: A Case for Bayesian Fault Injection
Jha, Saurabh, Banerjee, Subho S., Tsai, Timothy, Hari, Siva K. S., Sullivan, Michael B., Kalbarczyk, Zbigniew T., Keckler, Stephen W., Iyer, Ravishankar K.
Autonomous vehicles (AVs) use artificial intelligence (AI) and machine learning (ML) to integrate mechanical, electronic, and computing technologies to make real-time driving decisions. AI enables AVs to navigate through complex environments while maintaining a safety envelope [1], [2] that is continuously measured and quantified by onboard sensors (e.g., camera, LiDAR, RADAR) [3]-[5]. Clearly, the safety and resilience of AVs are of significant concern, as exemplified by several headline-making AV crashes [6], [7], as well as prior work characterizing AV resilience during road tests [8]. Hence there is a compelling need for a comprehensive assessment of AV technology. Items (a), (b), and (c) are integrated into a Bayesian network (BN). BNs provide a favorable formalism in which to model the propagation of faults across AV system components with an interpretable model. The model, together with fault injection results, can be used to design and assess the safety of AVs. Further, BNs enable rapid probabilistic inference, which allows DriveFI to quickly find safety-critical faults. The Bayesian FI framework can be extended to other safety-critical systems (e.g., surgical robots). The framework requires specification of the safety constraints and the system software architecture to model the causal relationships between them.
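A Bayesian network over fault and safety variables supports the kind of rapid what-if inference described above. The toy model below is invented for illustration (its structure and probabilities are not DriveFI's actual model): a fault node influences a sensor-error node, which influences an unsafe-outcome node, and inference marginalizes over the hidden node by enumeration.

```python
# Toy Bayesian network for fault propagation: fault -> sensor_error -> unsafe.
# Structure and probabilities are invented for illustration only.

P_SENSOR_ERR = {True: 0.9, False: 0.05}   # P(sensor_error | fault)
P_UNSAFE = {True: 0.6, False: 0.001}      # P(unsafe | sensor_error)

def p_unsafe_given_fault(fault: bool) -> float:
    """Marginalize over the hidden sensor-error node by enumeration."""
    return sum(
        (P_SENSOR_ERR[fault] if se else 1 - P_SENSOR_ERR[fault]) * P_UNSAFE[se]
        for se in (True, False)
    )

# What-if inference: injecting a fault sharply raises the unsafe probability,
# which is how a BN-based injector can rank candidate faults by criticality.
print(p_unsafe_given_fault(True))   # fault injected
print(p_unsafe_given_fault(False))  # nominal operation
```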