static map
FreeDOM: Online Dynamic Object Removal Framework for Static Map Construction Based on Conservative Free Space Estimation
Li, Chen, Li, Wanlei, Liu, Wenhao, Shu, Yixiang, Lou, Yunjiang
Online map construction is essential for autonomous robots to navigate in unknown environments. However, the presence of dynamic objects may introduce artifacts into the map, which can significantly degrade the performance of localization and path planning. To tackle this problem, a novel online dynamic object removal framework for static map construction based on conservative free space estimation (FreeDOM) is proposed, consisting of a scan-removal front-end and a map-refinement back-end. First, we propose a multi-resolution map structure for fast computation and effective map representation. In the scan-removal front-end, we employ raycast enhancement to improve free space estimation and segment the LiDAR scan based on the estimated free space. In the map-refinement back-end, we further eliminate residual dynamic objects in the map by leveraging incremental free space information. As experimentally verified on SemanticKITTI, HeLiMOS, and indoor datasets with various sensors, our proposed framework overcomes the limitations of visibility-based methods and outperforms state-of-the-art methods with an average F1-score improvement of 9.7%.
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
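The raycast-based free-space estimation described above can be illustrated with a toy 2D grid: every cell a LiDAR beam passes through before its endpoint is marked free, and points that later fall into free space are flagged as dynamic. The sketch below is a minimal illustration, not FreeDOM's implementation; the grid resolution, the sampling-based ray traversal, and the conservative endpoint `margin` are all assumptions.

```python
import math

def traverse_free_cells(origin, endpoint, res=0.5, margin=1):
    """Cells crossed by the ray from origin to endpoint, excluding the last
    `margin` cells before the hit (a conservative free-space estimate)."""
    ox, oy = origin
    ex, ey = endpoint
    dist = math.hypot(ex - ox, ey - oy)
    steps = int(dist / (res * 0.5))  # oversample so no crossed cell is skipped
    cells = []
    for i in range(steps + 1):
        t = i / max(steps, 1)
        cx = int((ox + t * (ex - ox)) // res)
        cy = int((oy + t * (ey - oy)) // res)
        if (cx, cy) not in cells:
            cells.append((cx, cy))
    return cells[:-margin] if margin else cells  # drop cells next to the hit

def segment_scan(points, free_cells, res=0.5):
    """Label each scan point dynamic if it falls inside previously-freed space."""
    free = set(free_cells)
    return [(p, (int(p[0] // res), int(p[1] // res)) in free) for p in points]
```

A point observed inside a cell that earlier rays traversed is treated as dynamic; the endpoint margin keeps thin static structure (walls, the ground) from being eroded, which is the "conservative" part of the idea.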
TOSS: Real-time Tracking and Moving Object Segmentation for Static Scene Mapping
Jang, Seoyeon, Oh, Minho, Yu, Byeongho, Nahrendra, I Made Aswin, Lee, Seungjae, Lim, Hyungtae, Myung, Hyun
Safe navigation with simultaneous localization and mapping (SLAM) for autonomous robots is crucial in challenging environments. To achieve this goal, detecting moving objects in the surroundings and building a static map are essential. However, existing moving object segmentation methods have been developed separately for each field, making it challenging to perform real-time navigation and precise static map building simultaneously. In this paper, we propose an integrated real-time framework that combines online tracking-based moving object segmentation with static map building. For safe navigation, we introduce a computationally efficient hierarchical association cost matrix to enable real-time moving object segmentation. In the context of precise static mapping, we present a voting-based method, DS-Voting, designed to achieve accurate dynamic object removal and static object recovery by emphasizing their spatio-temporal differences. We evaluate our proposed method quantitatively and qualitatively on the SemanticKITTI dataset and in real-world challenging environments. The results demonstrate that dynamic objects can be clearly distinguished and incorporated into static map construction, even on stairs, steep hills, and dense vegetation.
- North America > United States > California > Alameda County > Berkeley (0.04)
- Asia > South Korea > Daejeon > Daejeon (0.04)
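The abstract does not spell out DS-Voting's formula, but the general shape of voxel-voting dynamic removal can be sketched: each voxel accumulates "occupied" votes when a scan hits it and "free" votes when a beam passes through it, and the ratio decides whether it survives into the static map. Everything below (vote layout, the 0.5 ratio) is an illustrative assumption, not the paper's method.

```python
from collections import defaultdict

def vote(votes, scan_cells, free_cells):
    """Accumulate [occupied, free] evidence per voxel for one scan."""
    for c in scan_cells:
        votes[c][0] += 1          # beam endpoint landed here: occupied vote
    for c in free_cells:
        votes[c][1] += 1          # beam passed through here: free vote
    return votes

def classify(votes, ratio=0.5):
    """Keep a voxel as static only if occupied votes dominate."""
    static = set()
    for c, (occ, free) in votes.items():
        if occ / (occ + free) > ratio:
            static.add(c)
    return static
```

A wall hit in every scan collects only occupied votes and stays; a car that drives away is later seen through, accumulates free votes, and is removed, while briefly occluded static geometry can be recovered once its vote ratio flips back.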
No More Potentially Dynamic Objects: Static Point Cloud Map Generation based on 3D Object Detection and Ground Projection
Woo, Soojin, Jung, Donghwi, Kim, Seong-Woo
In this paper, we propose an algorithm to generate a static point cloud map based on LiDAR point cloud data. Our proposed pipeline detects dynamic objects using 3D object detectors and projects points of dynamic objects onto the ground. Typically, point cloud data acquired in real time serves as a snapshot of the surrounding areas, containing both static and dynamic objects. Static objects include buildings and trees, whereas dynamic objects, such as parked cars, change their position over time. Removing dynamic objects from the point cloud map is crucial, as they can degrade the quality and localization accuracy of the map. To address this issue, we propose an algorithm that creates a map consisting only of static objects. We apply a 3D object detection algorithm to the point cloud data obtained from LiDAR to implement our pipeline. We then stack the points to create the map after performing ground segmentation and projection. As a result, we can eliminate not only objects that are dynamic at the time of map generation but also potentially dynamic objects such as parked vehicles. We validate the performance of our method using two kinds of datasets collected on real roads: KITTI and our own dataset. The results demonstrate the capability of our proposal to create an accurate static map that excludes dynamic objects from input point clouds. We also verified improved localization performance using a map generated by our method.
- Asia > South Korea > Seoul > Seoul (0.05)
- North America > United States > California > Alameda County > Berkeley (0.04)
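The detection-plus-projection idea above reduces to two steps per scan: drop every point inside a detected 3D box, then fill the box's footprint with ground points so no hole is left where the object stood. A minimal sketch with axis-aligned boxes follows; the box format, the flat-ground assumption, and the corner-sampling fill are all simplifications of the paper's ground segmentation and projection.

```python
def inside(p, box):
    """Axis-aligned 3D box test; box = (xmin, ymin, zmin, xmax, ymax, zmax)."""
    x, y, z = p
    return box[0] <= x <= box[3] and box[1] <= y <= box[4] and box[2] <= z <= box[5]

def remove_and_fill(points, boxes, ground_z=0.0):
    """Drop points inside any detected box, then add a sparse ground patch
    at each box footprint so the map has no hole under removed objects."""
    kept = [p for p in points if not any(inside(p, b) for b in boxes)]
    for b in boxes:
        corners = [(x, y, ground_z) for x in (b[0], b[3]) for y in (b[1], b[4])]
        kept += corners  # coarse stand-in for projecting the footprint to ground
    return kept
```

Because the boxes come from a detector rather than from motion evidence, parked vehicles are removed too, which is exactly the "potentially dynamic" case the title refers to.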
S$^2$MAT: Simultaneous and Self-Reinforced Mapping and Tracking in Dynamic Urban Scenarios
Fan, Tingxiang, Shen, Bowen, Zhang, Yinqiang, Zhang, Chuye, Yang, Lei, Chen, Hua, Zhang, Wei, Pan, Jia
Despite the increasing prevalence of robots in daily life, their navigation capabilities are still limited to environments with prior knowledge, such as a global map. To fully unlock the potential of robots, it is crucial to enable them to navigate in large-scale unknown and changing unstructured scenarios. This requires the robot to construct an accurate static map in real-time as it explores, while filtering out moving objects to ensure mapping accuracy and, if possible, achieving high-quality pedestrian tracking and collision avoidance. While existing methods can achieve individual goals of spatial mapping or dynamic object detection and tracking, there has been limited research on effectively integrating these two tasks, which are actually coupled and reciprocal. In this work, we propose a solution called S$^2$MAT (Simultaneous and Self-Reinforced Mapping and Tracking) that integrates a front-end dynamic object detection and tracking module with a back-end static mapping module. S$^2$MAT leverages the close and reciprocal interplay between these two modules to efficiently and effectively solve the open problem of simultaneous tracking and mapping in highly dynamic scenarios. We conducted extensive experiments using widely-used datasets and simulations, providing both qualitative and quantitative results to demonstrate S$^2$MAT's state-of-the-art performance in dynamic object detection, tracking, and high-quality static structure mapping. Additionally, we performed long-range robotic navigation in real-world urban scenarios spanning over 7 km, which included challenging obstacles like pedestrians and other traffic agents. The successful navigation provides a comprehensive test of S$^2$MAT's robustness, scalability, efficiency, quality, and its ability to benefit autonomous robots in wild scenarios without pre-built maps.
- Asia > China > Hong Kong (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > Germany > Baden-Württemberg > Freiburg (0.04)
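The front-end/back-end coupling described above can be reduced, in its simplest direction, to masking: points claimed by tracked dynamic objects are withheld from the static map update each frame. This sketch shows only that one direction of the interplay and assumes 2D points and axis-aligned track boxes; the reciprocal feedback from map to tracker is not modeled here.

```python
def update_static_map(static_map, scan, tracked_boxes):
    """Fuse one scan into the static map, masking out points that fall
    inside any tracked dynamic-object box (xmin, ymin, xmax, ymax)."""
    def in_any_box(p):
        return any(b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]
                   for b in tracked_boxes)
    static_points = [p for p in scan if not in_any_box(p)]
    static_map.update(static_points)   # static_map is a set of points here
    return static_map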
A real-time dynamic obstacle tracking and mapping system for UAV navigation and collision avoidance with an RGB-D camera
Xu, Zhefan, Zhan, Xiaoyang, Chen, Baihan, Xiu, Yumeng, Yang, Chenhao, Shimada, Kenji
Real-time dynamic environment perception has become vital for autonomous robots in crowded spaces. Although the popular voxel-based mapping methods can efficiently represent 3D obstacles with arbitrarily complex shapes, they can hardly distinguish between static and dynamic obstacles, leading to limited obstacle-avoidance performance. While plenty of sophisticated learning-based dynamic obstacle detection algorithms exist in autonomous driving, the quadcopter's limited computational resources cannot support real-time performance with those approaches. To address these issues, we propose a real-time dynamic obstacle tracking and mapping system for quadcopter obstacle avoidance using an RGB-D camera. The proposed system first utilizes a depth image with an occupancy voxel map to generate potential dynamic obstacle regions as proposals. With the obstacle region proposals, the Kalman filter and our continuity filter are applied to track each dynamic obstacle. Finally, an environment-aware trajectory prediction method is proposed based on the Markov chain, using the states of tracked dynamic obstacles. We implemented the proposed system with our custom quadcopter and navigation planner. The simulation and physical experiments show that our methods can successfully track and represent obstacles in dynamic environments in real time and safely avoid obstacles.
- Transportation > Air (0.68)
- Information Technology > Robotics & Automation (0.49)
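The Kalman filter step in the pipeline above typically tracks the centroid of each obstacle region proposal with a constant-velocity model. Below is a self-contained single-axis version (the noise values `q`, `r` and the time step are illustrative assumptions, not the paper's tuning); one such filter per axis suffices for a centroid.

```python
class KalmanCV:
    """Constant-velocity Kalman filter for one axis of an obstacle centroid."""
    def __init__(self, x0, dt=0.1, q=0.1, r=0.5):
        self.x, self.v = x0, 0.0                 # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        dt, P = self.dt, self.P
        self.x += dt * self.v
        # P = F P F^T + Q  for F = [[1, dt], [0, 1]], Q = diag(q, q)
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]

    def update(self, z):
        S = self.P[0][0] + self.r                # innovation covariance (H = [1, 0])
        k0, k1 = self.P[0][0] / S, self.P[1][0] / S   # Kalman gain
        resid = z - self.x
        self.x += k0 * resid
        self.v += k1 * resid
        P = self.P                                # P = (I - K H) P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
```

The velocity estimate this produces is what feeds a downstream trajectory predictor such as the Markov-chain method the abstract mentions.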
Multi-object Detection, Tracking and Prediction in Rugged Dynamic Environments
Huang, Shixing, Wang, Zhihao, Ouyang, Junyuan, Chen, Haoyao
Multi-object tracking (MOT) has important applications in monitoring, logistics, and other fields. This paper develops a real-time multi-object tracking and prediction system for rugged environments. A 3D object detection algorithm based on LiDAR-camera fusion is designed to detect the target objects. Based on the Hungarian algorithm, this paper designs a 3D multi-object tracking algorithm with an adaptive threshold to realize stable matching and tracking of the objects. We combine Memory Augmented Neural Networks (MANN) and the Kalman filter to achieve 3D trajectory prediction on rugged terrains. Besides, we realize a new dynamic SLAM by using the results of multi-object tracking to remove dynamic points, improving SLAM performance and the resulting static map. To verify the effectiveness of the proposed multi-object tracking and prediction system, several simulations and physical experiments are conducted. The results show that the proposed system can track dynamic objects and provide future trajectories and a cleaner static map in real time.
- Asia > China > Guangdong Province > Shenzhen (0.05)
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Africa > Central African Republic > Ombella-M'Poko > Bimbo (0.04)
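The Hungarian-algorithm association step above pairs existing tracks with new detections so total distance is minimized, with a threshold gating out implausible pairs. The toy version below uses exhaustive search (optimal for the tiny problems shown, equivalent in result to the Hungarian algorithm there) and a fixed gate standing in for the paper's adaptive threshold; both simplifications are assumptions.

```python
from itertools import permutations

def associate(tracks, detections, max_dist):
    """Best 1-to-1 track-to-detection matching; pairs farther apart than
    max_dist are gated out. Returns a list of (track_idx, det_idx) pairs."""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best, best_cost = [], float("inf")
    n = len(detections)
    for perm in permutations(range(n), min(len(tracks), n)):
        pairs = [(i, j) for i, j in enumerate(perm)
                 if d(tracks[i], detections[j]) < max_dist]
        # reward each accepted pair, then prefer small residual distances
        cost = sum(d(tracks[i], detections[j]) for i, j in pairs) \
               - len(pairs) * max_dist
        if cost < best_cost:
            best, best_cost = pairs, cost
    return best
```

In a real system the brute-force search would be replaced by a proper Hungarian solver (e.g. SciPy's `linear_sum_assignment`), since exhaustive search is factorial in the number of detections.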
A Study of Shared-Control with Force Feedback for Obstacle Avoidance in Whole-body Telelocomotion of a Wheeled Humanoid
Baek, DongHoon, Chen, Yu, Chang, Ramos, Joao
Teleoperation has emerged as an alternative solution to fully-autonomous systems for achieving human-level capabilities on humanoids. Specifically, teleoperation with whole-body control is a promising hands-free strategy to command humanoids but demands more physical and mental effort. To mitigate this limitation, researchers have proposed shared-control methods incorporating robot decision-making to aid humans on low-level tasks, further reducing operation effort. However, shared-control methods for wheeled humanoid telelocomotion on a whole-body level have yet to be explored. In this work, we study how whole-body feedback affects the performance of different shared-control methods for obstacle avoidance in diverse environments. A Time-Derivative Sigmoid Function (TDSF) is proposed to generate more intuitive force feedback from obstacles. Comprehensive human experiments were conducted, and the results indicate that force feedback enhances whole-body telelocomotion performance in unfamiliar environments but can reduce performance in familiar environments. Conveying the robot's intention through haptics showed further improvements, since the operator can use the force feedback for short-distance planning and visual feedback for long-distance planning.
- North America > United States > Illinois (0.04)
- Asia > Taiwan (0.04)
- Asia > South Korea > Gangwon-do > Pyeongchang (0.04)
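The abstract names the TDSF but does not give its form, so the following is only one plausible reading: a sigmoid-shaped repulsive force over obstacle distance whose argument is shifted by the approach rate (a time derivative), so that fast-closing obstacles trigger feedback earlier. Every constant here (`d0`, `k`, `alpha`, `f_max`) is a made-up illustration, not the paper's function.

```python
import math

def tdsf_force(dist, approach_rate, d0=1.0, k=4.0, alpha=0.5, f_max=10.0):
    """Illustrative sigmoid force: grows as the obstacle nears, and the
    approach-rate term shifts the sigmoid so faster closing speeds produce
    stronger feedback at the same distance."""
    s = 1.0 / (1.0 + math.exp(k * (dist - alpha * approach_rate - d0)))
    return f_max * s
```

The sigmoid keeps the force bounded (important for a haptic device) while the derivative term is what would make the feedback feel anticipatory rather than purely positional.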
PrognoseNet: A Generative Probabilistic Framework for Multimodal Position Prediction given Context Information
Kurbiel, Thomas, Sachdeva, Akash, Zhao, Kun, Buehren, Markus
The ability to predict multiple possible future positions of the ego-vehicle given the surrounding context, while also estimating their probabilities, is key to safe autonomous driving. Most of the current state-of-the-art Deep Learning approaches are trained on trajectory data to achieve this task. However, trajectory data captured by sensor systems is highly imbalanced, since by far most of the trajectories follow straight lines with an approximately constant velocity. This poses a huge challenge for the task of predicting future positions, which is inherently a regression problem. Current state-of-the-art approaches alleviate this problem only by major preprocessing of the training data, e.g. resampling, clustering into anchors, etc. In this paper we propose an approach which reformulates the prediction problem as a classification task, allowing for powerful tools, e.g. focal loss, to combat the imbalance. To this end we design a generative probabilistic model consisting of a deep neural network with a Mixture of Gaussians head. A smart choice of the latent variable allows for the reformulation of the log-likelihood function as a combination of a classification problem and a much simplified regression problem. The output of our model is an estimate of the probability density function of future positions, hence allowing for prediction of multiple possible positions while also estimating their probabilities. The proposed approach can easily incorporate context information and does not require any preprocessing of the data.
- Research Report > Promising Solution (0.34)
- Overview > Innovation (0.34)
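The Mixture-of-Gaussians head described above outputs, per mode, a logit (the "classification" part, softmaxed into a mixture weight) and a mean position (the simplified regression part); together they define a density over future positions. A minimal isotropic evaluation is sketched below; the fixed shared `sigma` and the 2D isotropic components are simplifying assumptions, not the paper's parameterization.

```python
import math

def mog_density(x, y, logits, means, sigma=1.0):
    """Density of a future position (x, y) under a 2D isotropic Mixture of
    Gaussians: softmax(logits) gives mixture weights, means the mode centers."""
    z = sum(math.exp(w) for w in logits)
    p = 0.0
    for w, (mx, my) in zip(logits, means):
        pi = math.exp(w) / z                              # softmax weight
        d2 = (x - mx) ** 2 + (y - my) ** 2
        p += pi * math.exp(-d2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
    return p
```

Because the weights come through a softmax, the imbalance between "go straight" and rare maneuver modes can be attacked with classification losses such as focal loss, which is the reformulation the abstract argues for.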
Watch Out, Pro Racers: These Drones Just Learned to Fly Solo
These days any old schlub can pilot a drone without cratering it, what with good old autopilot tech, but there are drone pilots out there whose abilities push the limits of human cognition. Drone racing is a truly insane endeavor (now with its very own Drone Racing League!) with human pilots banking around corners and through obstacles at over 100 miles per hour, navigating it all through the craft's onboard camera. It takes an almost unimaginable amount of coordination--but, alas, even this highly skilled job is in danger of automation. Researchers have developed a system that allows drones to autonomously navigate an obstacle course of gates with 100 percent accuracy--that is, the robots don't crash into something and explode. Not only that, because of the clever way the researchers trained the drones, the machines can adapt if a wily human moves a gate mid-run, completing a course that looks different than when they started.
- Information Technology > Robotics & Automation (1.00)
- Transportation > Air (0.72)
Need for DYNAMICAL Machine Learning: Bayesian exact recursive estimation
In my recent blog, Marrying Kalman Filtering & Machine Learning, we saw the merger of Bayesian exact recursive estimation (the algorithm for which is the Kalman Filter/Smoother in the linear, Gaussian case) and Machine Learning. We developed a solution called Kernel Projection Kalman Filter for business applications that require static or dynamical, time-invariant or time-varying, linear or non-linear Machine Learning, i.e., pretty much all applications - therefore, Kernel Projection Kalman Filter is a "universal" solution . . . But who needs anything more than STATIC Machine Learning (ML)? Indeed, university courses in ML largely teach static ML. Given a set of inputs and outputs, find a static map between the two during supervised "Training" and use this static map for business purposes during "Operation" (which is called "Testing" during pre-operation evaluation).
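The Bayesian exact recursive estimation the post refers to is, in the scalar linear-Gaussian case, just a two-line predict/update recursion on a mean and a variance. Here is that recursion for a random-walk state (the noise values `q` and `r` are illustrative):

```python
def kalman_step(mean, var, z, q=0.01, r=1.0):
    """One exact Bayesian recursive update for a scalar random-walk state:
    predict (inflate variance by process noise q), then condition on the
    new measurement z with measurement noise r."""
    var += q                      # predict: posterior becomes prior + drift
    k = var / (var + r)           # Kalman gain
    mean += k * (z - mean)        # update: shift mean toward the measurement
    var *= (1 - k)                # measurement shrinks the uncertainty
    return mean, var
```

This is the contrast with static ML the post draws: instead of fitting one fixed input-output map offline, the estimate (and its uncertainty) is revised every time a new observation arrives.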