
Collaborating Authors

 Vaskevicius, Narunas


GraphEQA: Using 3D Semantic Scene Graphs for Real-time Embodied Question Answering

arXiv.org Artificial Intelligence

In Embodied Question Answering (EQA), agents must explore and develop a semantic understanding of an unseen environment in order to answer a situated question with confidence. For example, to answer the question "How many chairs are there at the dining table?", the agent might rely on commonsense knowledge to understand that dining tables are often associated with dining rooms, and that dining rooms are usually near the kitchen, towards the back of a home. A reasonable navigation strategy would involve navigating to the back of the house to locate a kitchen. Grounding this search in the current environment, however, requires the agent to continually maintain an understanding of where it is, a memory of where it has been, and a sense of which further exploratory actions will lead it to relevant regions. Finally, the agent needs to observe the target object(s) and perform visual grounding in order to reason about the number of chairs around the dining table and confidently answer the question correctly.

This remains a challenging problem in robotics, due to the difficulties in obtaining useful semantic representations, updating these representations online, and leveraging prior world knowledge for efficient exploration and planning. Aiming to address these limitations, we propose GraphEQA, a novel approach that utilizes real-time 3D metric-semantic scene graphs (3DSGs) and task-relevant images as multi-modal memory for grounding Vision-Language Models (VLMs) to perform EQA tasks in unseen environments. We employ a hierarchical planning approach that exploits the hierarchical nature of 3DSGs for structured planning and semantic-guided exploration. Through experiments ...
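The excerpt above describes the approach only at a high level. A minimal sketch of the core idea, serializing a hierarchical 3D scene graph as text and combining it with task-relevant images to ground a VLM, might look like the following. All names here (SceneGraphNode, build_prompt, the prompt format, the stub image reference) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a hierarchical 3D scene graph
# used as textual memory when prompting a VLM for embodied question answering.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneGraphNode:
    """One node of the 3DSG: a building, room, or object."""
    name: str                      # e.g. "kitchen", "dining table"
    level: str                     # "building" | "room" | "object"
    position: tuple                # (x, y, z) in the metric map
    children: List["SceneGraphNode"] = field(default_factory=list)
    visited: bool = False          # exploration status of this node

    def to_text(self, indent: int = 0) -> str:
        """Serialize the subgraph as an indented outline for the VLM prompt."""
        line = " " * indent + f"- {self.level}: {self.name} (visited={self.visited})"
        return "\n".join([line] + [c.to_text(indent + 2) for c in self.children])

def build_prompt(question: str, graph: SceneGraphNode, image_refs: List[str]) -> str:
    """Combine the situated question, the current 3DSG, and references to
    task-relevant images into one grounding prompt (hypothetical format)."""
    return (
        f"Question: {question}\n"
        f"Current scene graph:\n{graph.to_text()}\n"
        f"Task-relevant images: {', '.join(image_refs)}\n"
        "Either answer the question with confidence, or pick the next "
        "unvisited room to explore."
    )

# --- example usage -------------------------------------------------
house = SceneGraphNode("house", "building", (0, 0, 0), children=[
    SceneGraphNode("kitchen", "room", (5.0, 1.0, 0.0), children=[
        SceneGraphNode("dining table", "object", (5.5, 1.2, 0.0)),
    ]),
    SceneGraphNode("living room", "room", (1.0, 2.0, 0.0), visited=True),
])

print(build_prompt(
    "How many chairs are there at the dining table?",
    house,
    ["frame_0123.png"],   # placeholder image reference
))
```

In a full system the prompt and the images themselves would be passed to an actual VLM, and the room/object hierarchy is what lets a planner commit to room-level exploration targets before reasoning about individual objects.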


The Surprising Ineffectiveness of Pre-Trained Visual Representations for Model-Based Reinforcement Learning

arXiv.org Artificial Intelligence

Visual Reinforcement Learning (RL) methods often require extensive amounts of data. As opposed to model-free RL, model-based RL (MBRL) offers a potential solution with efficient data utilization through planning. Additionally, RL lacks generalization capabilities for real-world tasks. Prior work has shown that incorporating pre-trained visual representations (PVRs) enhances sample efficiency and generalization. While PVRs have been extensively studied in the context of model-free RL, their potential in MBRL remains largely unexplored. In this paper, we benchmark a set of PVRs on challenging control tasks in a model-based RL setting. We investigate the data efficiency, generalization capabilities, and the impact of different properties of PVRs on the performance of model-based agents. Our results, perhaps surprisingly, reveal that for MBRL current PVRs are not more sample efficient than learning representations from scratch, and that they do not generalize better to out-of-distribution (OOD) settings. To explain this, we analyze the quality of the trained dynamics model. Furthermore, we show that data diversity and network architecture are the most important contributors to OOD generalization performance.
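The benchmark setting is easiest to picture as swapping the agent's visual encoder: either a frozen pre-trained representation (PVR) or a small CNN trained from scratch, with the downstream world model unchanged. The sketch below illustrates that comparison; the ImageNet ResNet-18 stand-in, the LatentDynamics module, and all dimensions are assumptions for illustration, not the paper's actual models or PVRs.

```python
# Rough sketch (not the paper's code) of the two encoder settings compared:
# a frozen pre-trained visual representation vs. a small CNN trained from
# scratch, both feeding a latent dynamics model of a model-based RL agent.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights  # stand-in PVR

def make_pvr_encoder() -> nn.Module:
    """Frozen ImageNet ResNet-18 as a placeholder PVR (the paper benchmarks
    several PVRs; this particular backbone is only illustrative)."""
    backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
    encoder = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())  # 512-d features
    for p in encoder.parameters():
        p.requires_grad = False  # representation stays fixed during training
    return encoder

def make_scratch_encoder(feature_dim: int = 512) -> nn.Module:
    """Small convolutional encoder trained jointly with the world model."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, feature_dim),
    )

class LatentDynamics(nn.Module):
    """Toy deterministic dynamics model: predicts next latent features from
    current features and action (hypothetical, for illustration only)."""
    def __init__(self, feature_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, feature_dim),
        )

    def forward(self, features: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([features, action], dim=-1))

# --- example usage -------------------------------------------------
obs = torch.randn(8, 3, 224, 224)      # batch of image observations
act = torch.randn(8, 4)                # batch of actions
encoder = make_pvr_encoder()           # swap in make_scratch_encoder() to compare
dynamics = LatentDynamics(512, 4)
z = encoder(obs)
pred_next_z = dynamics(z, act)
print(z.shape, pred_next_z.shape)      # torch.Size([8, 512]) for both
```

Holding the dynamics model fixed while only the encoder changes is what isolates the contribution of the representation, which is the comparison the abstract describes when it contrasts PVRs with representations learned from scratch.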