movability
Pushing Through Clutter With Movability Awareness of Blocking Obstacles
Weeda, Joris J., Bakker, Saray, Chen, Gang, Alonso-Mora, Javier
Navigation Among Movable Obstacles (NAMO) poses a challenge for traditional path-planning methods when obstacles block the path, requiring push actions to reach the goal. We propose a framework that enables movability-aware planning to overcome this challenge without relying on explicit obstacle placement. A physics engine simulates the outcome of interaction rollouts with the environment and generates trajectories that minimize contact force. In qualitative and quantitative experiments, SVG-MPPI outperforms the existing paradigm that uses only binary movability for planning, achieving higher success rates with reduced cumulative contact forces. Our code is available at: https://github.com/tud-amr/SVG-MPPI

I. INTRODUCTION
A fundamental ability of autonomous robots is to navigate towards a goal while avoiding collisions along the way [1]. However, in complex and cluttered environments, such as domestic settings where obstacles like chairs and boxes may obstruct the path to the goal, finding collision-free paths often becomes impractical. In such cases, traditional navigation methods often fail, and Navigation Among Movable Obstacles (NAMO) becomes essential.
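The abstract describes sampling-based rollouts through a physics engine with a cost that penalizes contact force. This is not the authors' SVG-MPPI implementation; it is a minimal sketch of the underlying MPPI idea under assumed names (`mppi_step`, a hypothetical `step_fn` standing in for a physics-engine call that also reports contact force):

```python
import numpy as np

def mppi_step(x0, u_nom, goal, step_fn, horizon=20, samples=64,
              sigma=0.3, lam=1.0, w_force=0.1, rng=None):
    """One MPPI update: sample noisy control sequences, roll them out
    through a physics step function, and reweight by exponentiated cost.
    step_fn(x, u) -> (x_next, contact_force) is a stand-in for a
    physics-engine call; the cost penalizes distance to the goal plus
    the contact force incurred along the rollout."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=(samples, horizon, u_nom.shape[1]))
    costs = np.zeros(samples)
    for k in range(samples):
        x = np.array(x0, dtype=float)
        for t in range(horizon):
            x, force = step_fn(x, u_nom[t] + noise[k, t])
            costs[k] += np.linalg.norm(x - goal) + w_force * force
    # Softmax-style importance weights: low-cost rollouts dominate.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    # Weighted average of the sampled perturbations updates the plan.
    return u_nom + np.einsum("k,kto->to", w, noise)
```

In the paper's setting `step_fn` would query the physics engine so that pushes against heavy (low-movability) obstacles accrue large contact-force cost and are sampled away; the sketch works identically with a trivial point-mass model.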
Causal Reinforcement Learning for Optimisation of Robot Dynamics in Unknown Environments
Dcruz, Julian Gerald, Mahoney, Sam, Chua, Jia Yun, Soukhabandith, Adoundeth, Mugabe, John, Guo, Weisi, Arana-Catania, Miguel
Autonomous operation of robots in unknown environments is challenging due to the lack of knowledge of interaction dynamics, such as the objects' movability. This work introduces a novel Causal Reinforcement Learning approach to enhancing robot operations and applies it to an urban search and rescue (SAR) scenario. Our proposed machine learning architecture enables robots to learn the causal relationships between the visual characteristics of objects, such as texture and shape, and the objects' dynamics upon interaction, such as their movability, significantly improving their decision-making processes. We conducted causal discovery and RL experiments demonstrating the superior performance of Causal RL, with learning times reduced by over 24.5% in complex situations compared to non-causal models.
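The core idea above is learning a mapping from visual features to interaction outcomes by actively intervening (pushing objects) and recording what happens. The class below is a toy stand-in for that causal layer, not the paper's architecture; all names (`MovabilityModel`, the smoothed estimator) are assumptions for illustration:

```python
from collections import defaultdict

class MovabilityModel:
    """Toy causal-layer sketch: count interaction outcomes per visual
    feature (e.g. texture) gathered by pushing objects (interventions),
    then predict movability for objects with the same feature."""

    def __init__(self):
        # feature -> [times object moved, times object stayed stuck]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, feature, moved):
        """Log the outcome of one push intervention."""
        self.counts[feature][0 if moved else 1] += 1

    def p_movable(self, feature, prior=0.5):
        """Smoothed estimate of P(movable | feature); falls back to the
        prior for features never interacted with."""
        moved, stuck = self.counts[feature]
        n = moved + stuck
        return (moved + prior) / (n + 1) if n else prior
```

A planner could then prefer pushing objects whose `p_movable` is high, which is one plausible way a learned feature-to-dynamics relationship speeds up decision-making.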
Towards Robot-Centric Conceptual Knowledge Acquisition
Jäger, Georg, Mueller, Christian A., Thosar, Madhura, Zug, Sebastian, Birk, Andreas
Robots require knowledge about objects in order to efficiently perform various household tasks involving objects. Existing knowledge bases for robots acquire symbolic knowledge about objects from manually coded external common-sense knowledge bases such as ConceptNet, WordNet, etc. The problem with such approaches is the discrepancy between human-centric symbolic knowledge and robot-centric object perception, caused by the robot's limited perception capabilities. Ultimately, a significant portion of the knowledge in the knowledge base remains ungrounded in the robot's perception. To overcome this discrepancy, we propose an approach that enables robots to generate robot-centric symbolic knowledge about objects from their own sensory data, thus allowing them to assemble their own conceptual understanding of objects. With this goal in mind, this paper elaborates on the work in progress on the proposed approach and presents preliminary results.