Ou, Rong
Training Video Foundation Models with NVIDIA NeMo
Patel, Zeeshan, He, Ethan, Mannan, Parth, Ren, Xiaowei, Wolf, Ryan, Agarwal, Niket, Huffman, Jacob, Wang, Zhuoyao, Wang, Carl, Chang, Jack, Bai, Yan, Huang, Tommy, Wang, Linnan, Jain, Sahil, Ramasamy, Shanmugam, Jennings, Joseph, Sirazitdinova, Ekaterina, Sudakov, Oleg, Ma, Mingyuan, Chen, Bobby, Lin, Forrest, Wang, Hao, Sabavat, Vasanth Rao Naik, Niverty, Sriharsha, Ou, Rong, Bhattacharya, Pallab, Page, David, Tajbakhsh, Nima, Aithal, Ashwath
Video Foundation Models (VFMs) have recently been used to simulate the real world to train physical AI systems and develop creative visual experiences. However, training large-scale VFMs that can generate high-quality videos poses significant challenges. We present a scalable, open-source VFM training pipeline with NVIDIA NeMo, providing accelerated video dataset curation, multimodal data loading, and parallelized video diffusion model training and inference. We also provide a comprehensive performance analysis highlighting best practices for efficient VFM training and inference.
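The pipeline itself is built on NeMo; rather than reproduce its API, the following is a minimal, framework-agnostic sketch in plain PyTorch of the denoising objective that video diffusion training of this kind optimizes. All names, the toy noise schedule, and the tensor shapes here are illustrative assumptions, not NeMo code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVideoDenoiser(nn.Module):
    """Hypothetical stand-in for a video diffusion backbone; the actual
    pipeline uses NeMo's parallelized transformer implementations."""

    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # A real model conditions on the timestep t (and on text embeddings);
        # conditioning is omitted here to keep the sketch minimal.
        return self.net(x)


# Toy cumulative noise schedule over 1000 diffusion timesteps.
ALPHA_BAR = torch.linspace(0.9999, 0.98, 1000).cumprod(dim=0)


def training_step(model, latents, optimizer):
    """One denoising step on a batch of video latents (B, C, T, H, W)."""
    t = torch.randint(0, 1000, (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    a = ALPHA_BAR.to(latents.device)[t].view(-1, 1, 1, 1, 1)
    # Forward diffusion: mix clean latents with Gaussian noise.
    noisy = a.sqrt() * latents + (1.0 - a).sqrt() * noise
    # The model is trained to recover the injected noise (epsilon-prediction).
    loss = F.mse_loss(model(noisy, t), noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


model = TinyVideoDenoiser()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
latents = torch.randn(2, 8, 4, 16, 16)  # (batch, channels, frames, H, W)
print(training_step(model, latents, opt))
```

In the actual pipeline, the model and the video latents would additionally be sharded across GPUs (e.g., via data, tensor, or sequence parallelism) while the loss and update logic stay the same.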
Out-of-Core GPU Gradient Boosting
Ou, Rong
GPU-based algorithms have greatly accelerated many machine learning methods; however, GPU memory is typically smaller than main memory, limiting the size of training data. In this paper, we describe an out-of-core GPU gradient boosting algorithm implemented in the XGBoost library. We show that much larger datasets can fit on a given GPU, without degrading model accuracy or training time. To the best of our knowledge, this is the first out-of-core GPU implementation of gradient boosting. Similar approaches can be applied to other machine learning algorithms.
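The implementation is exposed through XGBoost's existing external-memory interface, so no algorithmic changes are required on the user side. A minimal sketch follows, assuming a LIBSVM-format training file named train.libsvm; the file name and parameters are placeholders, and the exact cache-file syntax varies across XGBoost versions.

```python
import xgboost as xgb

# Appending "#<name>.cache" to the data path enables XGBoost's
# external-memory mode: data is streamed through an on-disk cache in
# batches rather than loaded into memory all at once. (Recent XGBoost
# versions require the format to be spelled out, e.g.
# "train.libsvm?format=libsvm#dtrain.cache".)
dtrain = xgb.DMatrix("train.libsvm#dtrain.cache")

params = {
    "tree_method": "gpu_hist",  # GPU histogram algorithm; in XGBoost >= 2.0,
                                # use tree_method="hist" with device="cuda"
    "objective": "binary:logistic",
    "max_depth": 8,
}

# Training proceeds exactly as in the in-core case; batches are paged
# between host and GPU memory as needed.
booster = xgb.train(params, dtrain, num_boost_round=100)
```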