Jain, Sahil
Training Video Foundation Models with NVIDIA NeMo
Patel, Zeeshan, He, Ethan, Mannan, Parth, Ren, Xiaowei, Wolf, Ryan, Agarwal, Niket, Huffman, Jacob, Wang, Zhuoyao, Wang, Carl, Chang, Jack, Bai, Yan, Huang, Tommy, Wang, Linnan, Jain, Sahil, Ramasamy, Shanmugam, Jennings, Joseph, Sirazitdinova, Ekaterina, Sudakov, Oleg, Ma, Mingyuan, Chen, Bobby, Lin, Forrest, Wang, Hao, Sabavat, Vasanth Rao Naik, Niverty, Sriharsha, Ou, Rong, Bhattacharya, Pallab, Page, David, Tajbakhsh, Nima, Aithal, Ashwath
Video Foundation Models (VFMs) have recently been used to simulate the real world to train physical AI systems and develop creative visual experiences. However, training large-scale VFMs that generate high-quality videos poses significant challenges. We present a scalable, open-source VFM training pipeline with NVIDIA NeMo, providing accelerated video dataset curation, multimodal data loading, and parallelized video diffusion model training and inference. We also provide a comprehensive performance analysis highlighting best practices for efficient VFM training and inference.
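For readers unfamiliar with video diffusion training, the core objective these pipelines optimize can be sketched in a few lines. This is a generic denoising-diffusion illustration, not NeMo's actual implementation; the schedule, tensor layout, and function names are assumptions for exposition.

```python
import numpy as np

def forward_noise(x0, t, alpha_bar, rng):
    """Noise a clean sample x0 at timestep t:
    x_t = sqrt(alpha_bar[t]) * x0 + sqrt(1 - alpha_bar[t]) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

def denoising_loss(eps_pred, eps):
    """Standard epsilon-prediction MSE objective the model minimizes."""
    return float(np.mean((eps_pred - eps) ** 2))

# Linear beta schedule and cumulative alpha products (illustrative values).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)

# Toy "video" tensor: (frames, height, width, channels).
rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 16, 16, 3))
xt, eps = forward_noise(x0, t=500, alpha_bar=alpha_bar, rng=rng)
loss = denoising_loss(np.zeros_like(eps), eps)  # trivial predictor, for illustration
```

In a real VFM pipeline the epsilon predictor is a large transformer or U-Net, and the training loop is sharded across GPUs; the loss itself stays this simple.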
Prosody as a Teaching Signal for Agent Learning: Exploratory Studies and Algorithmic Implications
Knierim, Matilda, Jain, Sahil, Aydoğan, Murat Han, Mitra, Kenneth, Desai, Kush, Saran, Akanksha, Baraka, Kim
Agent learning from human interaction often relies on explicit signals, but implicit social cues, such as prosody in speech, could provide valuable information for more effective learning. This paper advocates for the integration of prosody as a teaching signal to enhance agent learning from human teachers. Through two exploratory studies--one examining voice feedback in an interactive reinforcement learning setup and the other analyzing restricted audio from human demonstrations in three Atari games--we demonstrate that prosody carries significant information about task dynamics. Our findings suggest that prosodic features, when coupled with explicit feedback, can enhance reinforcement learning outcomes. Moreover, we propose guidelines for prosody-sensitive algorithm design and discuss insights into teaching behavior. Our work underscores the potential of leveraging prosody as an implicit signal for more efficient agent learning, thus advancing human-agent interaction paradigms.
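One way to read "prosodic features coupled with explicit feedback" is as reward shaping, where a prosody-derived scalar modulates the explicit human signal. The sketch below is a hypothetical illustration of that idea with tabular Q-learning; none of the names or the weighting scheme come from the paper.

```python
def shaped_reward(explicit_feedback, prosody_intensity, weight=0.5):
    """Combine explicit feedback (+1/-1) with a prosody-derived scalar
    in [-1, 1]; `weight` trades off the implicit signal. Illustrative only."""
    return explicit_feedback + weight * prosody_intensity

def q_update(q, state, action, next_max, explicit_feedback,
             prosody_intensity, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step using the prosody-shaped reward."""
    r = shaped_reward(explicit_feedback, prosody_intensity)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (r + gamma * next_max - old)
    return q

# Positive feedback delivered with enthusiastic prosody updates Q more strongly.
q = q_update({}, state=0, action=0, next_max=0.0,
             explicit_feedback=1.0, prosody_intensity=0.5)
```

The design choice being illustrated: prosody never replaces the explicit signal, it scales an update that the explicit feedback already justifies.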
NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment
Shen, Gerald, Wang, Zhilin, Delalleau, Olivier, Zeng, Jiaqi, Dong, Yi, Egert, Daniel, Sun, Shengyang, Zhang, Jimmy, Jain, Sahil, Taghibakhshi, Ali, Ausin, Markel Sanz, Aithal, Ashwath, Kuchaiev, Oleksii
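Of the alignment paradigms the abstract lists, DPO has the most compact objective and is easy to state exactly. The sketch below implements the published per-pair DPO loss, -log sigmoid(beta * ((log pi(y_w|x) - log pi_ref(y_w|x)) - (log pi(y_l|x) - log pi_ref(y_l|x)))); this is the generic formula, not NeMo-Aligner's internal code, and the function name is our own.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair: log-probabilities of the chosen
    (w) and rejected (l) responses under the policy and a frozen reference.
    Uses -log sigmoid(m) = log(1 + exp(-m)) for numerical stability."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return math.log1p(math.exp(-margin))

# With identical policy and reference log-probs the margin is 0,
# so the loss is log(2); raising the chosen response's log-prob lowers it.
baseline = dpo_loss(0.0, 0.0, 0.0, 0.0)
improved = dpo_loss(1.0, 0.0, 0.0, 0.0)
```

Note the frozen reference model: the loss rewards increasing the chosen response's likelihood relative to the reference, not in absolute terms, which is what keeps the policy from drifting arbitrarily far during alignment.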
Aligning Large Language Models (LLMs) with human values and preferences is essential for making them helpful and safe. However, building efficient tools to perform alignment can be challenging, especially for the largest and most competent LLMs, which often contain tens or hundreds of billions of parameters. We create NeMo-Aligner, a toolkit for model alignment that can efficiently scale to hundreds of GPUs for training. NeMo-Aligner comes with highly optimized and scalable implementations for major paradigms of model alignment such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), SteerLM, and Self-Play Fine-Tuning (SPIN). Additionally, our toolkit supports running most of the alignment techniques in a Parameter Efficient Fine-Tuning (PEFT) setting. NeMo-Aligner is designed for extensibility, allowing support for other alignment techniques with minimal effort.