Jayanthi, Sravan
Synthetic Multimodal Question Generation
Wu, Ian, Jayanthi, Sravan, Viswanathan, Vijay, Rosenberg, Simon, Pakazad, Sina, Wu, Tongshuang, Neubig, Graham
Multimodal Retrieval Augmented Generation (MMRAG) is a powerful approach to question answering over multimodal documents. A key challenge in evaluating MMRAG is the paucity of high-quality datasets matching the question styles and modalities of interest. In light of this, we propose SMMQG, a synthetic data generation framework. SMMQG leverages the interplay between a retriever, a large language model (LLM), and a large multimodal model (LMM) to generate question and answer pairs directly from multimodal documents, with the questions conforming to specified styles and modalities. We use SMMQG to generate an MMRAG dataset of 1024 questions over Wikipedia documents and evaluate state-of-the-art models using it, revealing insights into model performance that are attainable only through style- and modality-specific evaluation data. Next, we measure the quality of data produced by SMMQG.
[Figure 1: An overview of SMMQG. Given user-provided question style and modality requirements, SMMQG selects question sources and produces questions and answers. The questions are grounded in the selected question sources and adhere to the question and modality requirements.]
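To make the select-then-generate loop the abstract describes concrete, here is a minimal Python sketch. It is not the authors' released code: Source, retrieve, llm, and lmm are hypothetical stand-ins for a multimodal retriever and the underlying text and multimodal models, and the prompt format is an assumption.

```python
from dataclasses import dataclass
from typing import Callable, Literal

Modality = Literal["text", "image", "table"]

@dataclass
class Source:
    content: str        # text passage, image path, or linearized table
    modality: Modality

@dataclass
class QAPair:
    question: str
    answer: str
    source: Source

def generate_qa(
    query: str,
    style: str,                                         # e.g. "comparison"
    modality: Modality,
    retrieve: Callable[[str, Modality], list[Source]],  # hypothetical retriever
    llm: Callable[[str], str],                          # hypothetical text-only model
    lmm: Callable[[str, Source], str],                  # hypothetical multimodal model
) -> QAPair:
    """Sketch of one SMMQG-style step: pick a source matching the requested
    modality, then prompt the LLM (text) or LMM (image/table) to write a
    question in the requested style, grounded in that source."""
    source = retrieve(query, modality)[0]  # take the top-ranked candidate
    prompt = (
        f"Write a {style} question answerable only from the source below, "
        f"then give its answer on a new line starting with 'ANSWER:'.\n"
        f"SOURCE:\n{source.content}"
    )
    raw = llm(prompt) if modality == "text" else lmm(prompt, source)
    question, _, answer = raw.partition("\nANSWER:")
    return QAPair(question.strip(), answer.strip(), source)
```

Routing on modality is the key point: text-only sources never pay the cost of a multimodal model call, while image and table sources are handled by a model that can actually read them.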
Fast Lifelong Adaptive Inverse Reinforcement Learning from Demonstrations
Chen, Letian, Jayanthi, Sravan, Paleja, Rohan, Martin, Daniel, Zakharov, Viacheslav, Gombolay, Matthew
Learning from Demonstration (LfD) approaches empower end-users to teach robots novel tasks via demonstrations of the desired behaviors, democratizing access to robotics. However, current LfD frameworks are not capable of fast adaptation to heterogeneous human demonstrations or of large-scale deployment in ubiquitous robotics applications. In this paper, we propose a novel LfD framework, Fast Lifelong Adaptive Inverse Reinforcement Learning (FLAIR). Our approach (1) leverages learned strategies to construct policy mixtures for fast adaptation to new demonstrations, allowing for quick end-user personalization; (2) distills common knowledge across demonstrations, achieving accurate task inference; and (3) expands its model only when needed in lifelong deployments, maintaining a concise set of prototypical strategies that can approximate all behaviors via policy mixtures. We empirically validate that FLAIR achieves adaptability (i.e., the robot adapts to heterogeneous, user-specific task preferences), efficiency (i.e., the robot achieves sample-efficient adaptation), and scalability (i.e., the model grows sublinearly with the number of demonstrations while maintaining high performance). FLAIR surpasses benchmarks across three control tasks, with an average 57% improvement in policy returns and an average of 78% fewer episodes required for demonstration modeling using policy mixtures. Finally, we demonstrate the success of FLAIR in a table tennis task and find users rate FLAIR as having higher task (p<.05) and personalization (p<.05) performance.
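The policy-mixture and expand-only-when-needed ideas can be sketched in a few lines. The toy below assumes each strategy is summarized by an expected feature vector and fits simplex-constrained mixture weights by projected gradient descent on a least-squares objective; FLAIR's actual optimization is an inverse-RL one, so this is a hedged simplification, and fit_mixture and adapt are illustrative names rather than the paper's API.

```python
import numpy as np

def fit_mixture(demo: np.ndarray, protos: np.ndarray,
                iters: int = 500, lr: float = 0.05) -> tuple[np.ndarray, float]:
    """Fit mixture weights over prototype feature vectors (k x d) so the
    mixture matches a new demonstration's features (d,). A least-squares
    stand-in for FLAIR's policy-mixture optimization."""
    k = protos.shape[0]
    w = np.full(k, 1.0 / k)                      # start from a uniform mixture
    for _ in range(iters):
        grad = 2.0 * protos @ (w @ protos - demo)
        w = np.clip(w - lr * grad, 0.0, None)    # gradient step, keep weights >= 0
        s = w.sum()                              # cheap approximate projection
        w = w / s if s > 0 else np.full(k, 1.0 / k)  # back onto the simplex
    return w, float(np.linalg.norm(w @ protos - demo))

def adapt(demo: np.ndarray, protos: list[np.ndarray],
          tol: float = 0.1) -> list[np.ndarray]:
    """Lifelong step: reuse a mixture if it explains the demonstration well,
    otherwise grow the prototype set (the 'expand only when needed' rule)."""
    if protos:
        w, residual = fit_mixture(demo, np.stack(protos))
        if residual <= tol:
            return protos            # demo covered by the existing mixture
    protos.append(demo)              # admit a new prototypical strategy
    return protos
```

The residual test is what keeps the model concise: the prototype set grows only when no mixture of existing strategies approximates the new behavior, which is how sublinear growth in the number of demonstrations becomes possible.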