S-Graphs+: Real-time Localization and Mapping leveraging Hierarchical Representations
Bavle, Hriday, Sanchez-Lopez, Jose Luis, Shaheer, Muhammad, Civera, Javier, Voos, Holger
In this paper, we present an evolved version of Situational Graphs, which jointly models in a single optimizable factor graph (1) a pose graph, as a set of robot keyframes comprising associated measurements and robot poses, and (2) a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between them. Specifically, our S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging high-level information of the environment. To extract this high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulated and real data of indoor environments from varying construction sites, and on a real public dataset of several indoor office areas. On average over our datasets, S-Graphs+ outperforms the accuracy of the second-best method by a margin of 10.67%, while extending the robot's situational awareness with a richer scene model. Moreover, we make the software available as a Docker file.
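To make the four-layered structure described above concrete, the sketch below organizes keyframes, walls, rooms, and floors as separate node types linked by factors. It is a minimal illustration only, not the S-Graphs+ implementation; all class and method names (Keyframe, WallPlane, Room, Floor, add_factor) are assumptions made for brevity.

```python
# Illustrative sketch of a four-layered situational graph (not the S-Graphs+ code).
# Each layer holds its own node type; factors link nodes within and across layers.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Keyframe:          # layer 1: robot pose estimates
    id: int
    pose: Tuple[float, float, float]          # x, y, yaw (planar for brevity)

@dataclass
class WallPlane:         # layer 2: wall surfaces as plane parameters
    id: int
    normal: Tuple[float, float, float]
    distance: float

@dataclass
class Room:              # layer 3: a room constrains the wall planes enclosing it
    id: int
    wall_ids: List[int]

@dataclass
class Floor:             # layer 4: a floor gathers the rooms on one level
    id: int
    room_ids: List[int]

@dataclass
class SituationalGraph:
    keyframes: List[Keyframe] = field(default_factory=list)
    walls: List[WallPlane] = field(default_factory=list)
    rooms: List[Room] = field(default_factory=list)
    floors: List[Floor] = field(default_factory=list)
    # factors stored as (type, node ids); a real system would attach measurement models
    factors: List[Tuple[str, Tuple[int, ...]]] = field(default_factory=list)

    def add_factor(self, kind: str, ids: Tuple[int, ...]) -> None:
        self.factors.append((kind, ids))

# Example: one keyframe observing one wall that belongs to a room on floor 0.
g = SituationalGraph()
g.keyframes.append(Keyframe(0, (0.0, 0.0, 0.0)))
g.walls.append(WallPlane(0, (1.0, 0.0, 0.0), 2.5))
g.rooms.append(Room(0, [0]))
g.floors.append(Floor(0, [0]))
g.add_factor("keyframe-wall", (0, 0))   # plane observation from the keyframe
g.add_factor("room-wall", (0, 0))       # room constrains its wall planes
g.add_factor("floor-room", (0, 0))      # floor gathers its rooms
```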
Graph-based Global Robot Simultaneous Localization and Mapping using Architectural Plans
Shaheer, Muhammad, Millan-Romera, Jose Andres, Bavle, Hriday, Sanchez-Lopez, Jose Luis, Civera, Javier, Voos, Holger
In this paper, we propose a solution for graph-based global robot simultaneous localization and mapping (SLAM) using architectural plans. Before the start of the robot operation, the previously available architectural plan of the building is converted into our proposed architectural graph (A-Graph). When the robot starts its operation, it uses its onboard LiDAR and odometry to carry out online SLAM relying on our situational graph (S-Graph), which includes both a representation of the environment at multiple levels of abstraction, such as walls or rooms, and their relationships, as well as the robot poses with their associated keyframes. Our novel graph-to-graph matching method is used to relate the aforementioned S-Graph and A-Graph, which are aligned and merged, resulting in our novel informed Situational Graph (iS-Graph). Our iS-Graph not only provides graph-based global robot localization, but also extends the graph-based SLAM capabilities of the S-Graph by incorporating into it the prior knowledge of the environment contained in the architectural plan.
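As a rough illustration of the alignment step under strong simplifications (2D, node correspondences already known), the sketch below estimates the rigid transform that brings matched S-Graph node positions into the A-Graph frame with a Procrustes/Kabsch-style fit. Function and variable names are hypothetical and this is not the iS-Graph alignment code.

```python
# Illustrative 2D alignment of matched S-Graph / A-Graph node positions:
# a least-squares rigid fit (rotation + translation) from known correspondences.
import numpy as np

def align_2d(s_points: np.ndarray, a_points: np.ndarray):
    """Return (R, t) such that R @ s + t ≈ a for corresponding node positions."""
    s_mean, a_mean = s_points.mean(axis=0), a_points.mean(axis=0)
    s_c, a_c = s_points - s_mean, a_points - a_mean
    # SVD of the cross-covariance gives the optimal rotation (Kabsch/Procrustes).
    U, _, Vt = np.linalg.svd(s_c.T @ a_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = (U @ Vt).T
    t = a_mean - R @ s_mean
    return R, t

# Example: room centroids observed online (S-Graph) vs. the same rooms in the plan (A-Graph).
s_rooms = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])                 # 90 degree rotation
a_rooms = s_rooms @ R_true.T + np.array([10.0, 5.0])         # plan-frame positions
R, t = align_2d(s_rooms, a_rooms)
print(np.round(R, 3), np.round(t, 3))                        # recovers the rotation and offset
```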
Graph-based Global Robot Localization Informing Situational Graphs with Architectural Graphs
Shaheer, Muhammad, Millan-Romera, Jose Andres, Bavle, Hriday, Sanchez-Lopez, Jose Luis, Civera, Javier, Voos, Holger
In this paper, we propose a solution for legged robot localization using architectural plans. We make several specific contributions towards this goal. Firstly, we develop a method for converting the plan of a building into what we denote as an architectural graph (A-Graph). When the robot starts moving in an environment, we assume it has no prior knowledge of it, and it estimates an online situational graph representation (S-Graph) of its surroundings. We develop a novel graph-to-graph matching method in order to relate the S-Graph estimated online from the robot sensors and the A-Graph extracted from the building plans. This matching is challenging, as the S-Graph may capture only a partial view of the full A-Graph, their nodes are heterogeneous, and their reference frames are different. After the matching, both graphs are aligned and merged, resulting in what we denote as an informed Situational Graph (iS-Graph), with which we achieve global robot localization and exploitation of prior knowledge from the building plans. Our experiments show that our pipeline achieves higher robustness and a significantly lower pose error than several LiDAR localization baselines.
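A toy sketch of the matching idea, under much stronger simplifications than the paper (rooms compared purely by area, greedy one-to-one assignment, partial S-Graph handled simply by leaving plan rooms unmatched), is given below. It is not the paper's graph-to-graph matching algorithm; all identifiers are illustrative.

```python
# Toy room-to-room matching between a partial online graph (S-Graph) and a plan
# graph (A-Graph), scoring candidates by area similarity and assigning greedily.
from itertools import product

s_rooms = {"s0": {"area": 12.1}, "s1": {"area": 20.3}}                  # partial online view
a_rooms = {"a0": {"area": 12.0}, "a1": {"area": 20.0}, "a2": {"area": 35.0}}

def score(s, a):
    """Similarity in (0, 1]: 1 when areas are identical, decaying with the gap."""
    return 1.0 / (1.0 + abs(s["area"] - a["area"]))

pairs = sorted(product(s_rooms, a_rooms),
               key=lambda p: score(s_rooms[p[0]], a_rooms[p[1]]),
               reverse=True)

matches, used_s, used_a = [], set(), set()
for s_id, a_id in pairs:                      # greedy one-to-one assignment
    if s_id not in used_s and a_id not in used_a:
        matches.append((s_id, a_id))
        used_s.add(s_id)
        used_a.add(a_id)

print(matches)   # e.g. [('s0', 'a0'), ('s1', 'a1')]
```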
Advanced Situational Graphs for Robot Navigation in Structured Indoor Environments
Bavle, Hriday, Sanchez-Lopez, Jose Luis, Shaheer, Muhammad, Civera, Javier, Voos, Holger
Mobile robots extract information from their environment to understand their current situation, enabling intelligent decision making and autonomous task execution. In our previous work, we introduced the concept of Situational Graphs (S-Graphs), which combine in a single optimizable graph the robot keyframes and a representation of the environment with geometric, semantic and topological abstractions. Although S-Graphs are built and optimized in real time and demonstrated state-of-the-art results, they are limited to specific structured environments with specific hand-tuned dimensions of rooms and corridors. In this work, we present an advanced version of the Situational Graphs (S-Graphs+), consisting of a five-layered optimizable graph that includes (1) a metric layer along with a graph of free-space clusters, (2) a keyframe layer where the robot poses are registered, (3) a metric-semantic layer consisting of the extracted planar walls, (4) a novel rooms layer constraining the extracted planar walls, and (5) a novel floors layer encompassing the rooms within a given floor level. S-Graphs+ demonstrates improved performance over S-Graphs, efficiently extracting room information while simultaneously improving the pose estimate of the robot, thus extending the robot's situational awareness in the form of a five-layered environmental model.
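The sketch below gives a loose 2D illustration of how free-space clusters could be used to group mapped walls into room candidates for a rooms layer. The clustering, the distance thresholds, and all names are assumptions made for brevity; this is not the paper's room extraction algorithm.

```python
# Simplified 2D illustration: cluster free-space samples by proximity, then attach
# each mapped wall to the nearest cluster to form room candidates.
import numpy as np

free_space = np.array([[1.0, 1.0], [1.5, 1.2], [1.2, 1.8],      # samples inside room A
                       [8.0, 1.0], [8.3, 1.4], [7.8, 1.7]])     # samples inside room B
walls = {"w0": np.array([0.0, 1.5]), "w1": np.array([2.5, 1.5]),   # wall centroids
         "w2": np.array([6.5, 1.5]), "w3": np.array([9.5, 1.5])}

def cluster(points, radius=2.0):
    """Naive single-linkage clustering: points closer than `radius` share a cluster."""
    labels = list(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < radius:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels

labels = cluster(free_space)
rooms = {}
for wall_id, centroid in walls.items():
    # assign the wall to the cluster whose nearest free-space sample is closest
    dists = np.linalg.norm(free_space - centroid, axis=1)
    rooms.setdefault(labels[int(np.argmin(dists))], []).append(wall_id)

print(rooms)   # e.g. {0: ['w0', 'w1'], 3: ['w2', 'w3']}
```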
Grounding Acoustic Echoes in Single View Geometry Estimation
Hussain, Muhammad Wajahat (University of Zaragoza) | Civera, Javier (University of Zaragoza) | Montano, Luis (University of Zaragoza)
Extracting 3D geometry plays an important part in scene understanding. Recently, robust visual descriptors have been proposed for extracting the indoor scene layout from a passive agent's perspective, specifically from a single image. Their robustness is mainly due to modelling the physical interaction of the underlying room geometry with the objects and the humans present in the room. In this work, we add the physical constraints coming from acoustic echoes, generated by an audio source, to this visual model. Our audio-visual 3D geometry descriptor improves over the state of the art in passive perception models, as we show in our experiments.
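To make the acoustic constraint concrete, the sketch below converts an echo's round-trip delay into a wall distance and uses it to re-rank candidate room layouts. The scoring form, weights, and all names are illustrative assumptions, not the audio-visual descriptor used in the paper.

```python
# Illustrative use of acoustic echoes for layout estimation: an echo's round-trip
# delay gives a wall distance, which re-ranks visual layout hypotheses by how well
# their predicted wall distances explain that measurement.
SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degrees Celsius

def echo_distance(round_trip_delay_s: float) -> float:
    """Source and microphone are co-located, so the echo travels the distance twice."""
    return 0.5 * SPEED_OF_SOUND * round_trip_delay_s

def rescore(layouts, measured_dist, audio_weight=0.5):
    """Combine each hypothesis' visual score with its agreement to the echo distance."""
    rescored = []
    for name, visual_score, wall_dist in layouts:
        audio_score = 1.0 / (1.0 + abs(wall_dist - measured_dist))
        rescored.append((name, (1 - audio_weight) * visual_score
                               + audio_weight * audio_score))
    return sorted(rescored, key=lambda x: x[1], reverse=True)

# Example: a 20 ms round trip puts the reflecting wall at about 3.43 m.
d = echo_distance(0.020)
hypotheses = [("layout_A", 0.70, 5.0),    # (name, visual score, predicted wall distance)
              ("layout_B", 0.65, 3.5)]
print(round(d, 2), rescore(hypotheses, d))
```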