Real-Time Obstacle Avoidance for a Mobile Robot Using CNN-Based Sensor Fusion
Obstacle avoidance is a critical component of the navigation stack that mobile robots need to operate effectively in complex and unknown environments. In this research, three end-to-end Convolutional Neural Networks (CNNs) were trained and evaluated offline, then deployed on a differential-drive mobile robot for real-time obstacle avoidance. The networks generate low-level steering commands from synchronized color and depth images acquired by an Intel RealSense D415 RGB-D camera in diverse environments. Offline evaluation showed that the NetConEmb model achieved the best performance, with a notably low MedAE of $0.58 \times 10^{-3}$ rad/s. In comparison, the lighter NetEmb architecture, which reduces the number of trainable parameters by approximately 25\% and converges faster, produced comparable results, with an RMSE of $21.68 \times 10^{-3}$ rad/s against the $21.42 \times 10^{-3}$ rad/s obtained by NetConEmb. Real-time navigation further confirmed NetConEmb's robustness: it achieved a 100\% success rate in both known and unknown environments, while NetEmb and NetGated succeeded only in navigating the known environment.
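The MedAE and RMSE figures quoted above are straightforward to reproduce for any steering predictor. The sketch below computes both metrics in NumPy; the angular-velocity arrays are hypothetical stand-ins for recorded targets and model predictions.

```python
import numpy as np

def median_absolute_error(y_true, y_pred):
    """Median of the absolute residuals (rad/s)."""
    return float(np.median(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def root_mean_squared_error(y_true, y_pred):
    """Square root of the mean squared residual (rad/s)."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Toy angular-velocity targets and predictions (rad/s), for illustration only
y_true = np.array([0.10, -0.05, 0.00, 0.20])
y_pred = np.array([0.12, -0.04, 0.01, 0.18])

print(median_absolute_error(y_true, y_pred))   # 0.015
print(root_mean_squared_error(y_true, y_pred))
```

MedAE is robust to occasional large steering errors, which is why it can look much smaller than RMSE on the same predictions.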
Bringing human-like reasoning to driverless car navigation
With the aim of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. Human drivers are exceptionally good at navigating roads they haven't driven on before, using observation and simple tools: we simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Today's autonomous systems, by contrast, must first map and analyze all the new roads in every new area, which is very time consuming. They also rely on complex maps -- usually generated by 3-D scans -- which are computationally intensive to generate and process on the fly.
Towards Self-Supervised High Level Sensor Fusion
Khan, Qadeer, Schön, Torsten, Wenzel, Patrick
Abstract--In this paper, we present a framework to control a self-driving car by fusing raw information from RGB images and depth maps. A deep neural network architecture maps the vision and depth information, respectively, to steering commands. Fusing information from two sensor sources provides redundancy and fault tolerance in the presence of sensor failures: even if one of the input sensors fails to produce the correct output, the other functioning sensor can still maneuver the car. Such redundancy is crucial in the safety-critical application of self-driving cars. The experimental results show that our method is capable of learning to use the relevant sensor information even when one of the sensors fails, without any explicit failure signal. I. INTRODUCTION The fusion of different sensor modalities in the context of autonomous driving is crucial for robustness against sensor failures. We consider RGB and depth camera sensors for training a control module to maneuver a self-driving car.
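The redundancy argument can be illustrated with a toy late-fusion scheme (not the paper's actual architecture): each sensor branch produces its own steering estimate, and a branch whose input is missing is simply excluded, so the surviving sensor alone maneuvers the car. The feature vectors and branch weights below are made up for illustration.

```python
import numpy as np

def branch_prediction(features, weights):
    # Hypothetical per-branch regressor: a linear map from features
    # to a single steering value.
    return float(features @ weights)

def fused_steering(rgb_feat, depth_feat, w_rgb, w_depth):
    """Late fusion with a crude per-sensor validity check.

    A branch whose input is all-zero (a stand-in for sensor failure)
    is excluded, so the remaining branch alone steers the car.
    """
    preds, valid = [], []
    for feat, w in ((rgb_feat, w_rgb), (depth_feat, w_depth)):
        if np.any(feat):                     # sensor produced a signal
            preds.append(branch_prediction(feat, w))
            valid.append(1.0)
    return sum(preds) / sum(valid)

rgb, depth = np.array([0.2, 0.4]), np.array([0.1, 0.3])
w_rgb, w_depth = np.array([0.5, 0.5]), np.array([1.0, 0.0])
print(fused_steering(rgb, depth, w_rgb, w_depth))        # both branches average
print(fused_steering(rgb, np.zeros(2), w_rgb, w_depth))  # depth failed: RGB only
```

In the paper the network learns this behavior implicitly; the explicit validity check here only makes the fault-tolerance idea concrete.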
Semantic Label Reduction Techniques for Autonomous Driving
Khan, Qadeer, Schön, Torsten, Wenzel, Patrick
Abstract--Semantic segmentation maps can be used as input to models for maneuvering the controls of a car. However, not all labels may be necessary for making the control decision. One would expect certain labels, such as road lanes or sidewalks, to be more critical than labels for vegetation or buildings, which may not have a direct influence on the car's driving decision. In this work, we evaluate and quantify how sensitive and important the different semantic labels are for controlling the car. Labels that do not influence the driving decision are remapped to other classes, simplifying the task by reducing the label set to only those labels critical for driving the vehicle.
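The remapping step the abstract describes can be sketched in a few lines. The label ids and the choice of which class to fold away below are hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical label ids: 0 road, 1 lane marking, 2 sidewalk,
# 3 vegetation, 4 building
REMAP = {3: 4}  # fold vegetation into the building class (non-critical)

def reduce_labels(seg, remap=REMAP):
    """Remap non-critical semantic labels, shrinking the label set
    the control model has to distinguish."""
    out = seg.copy()
    for src, dst in remap.items():
        out[seg == src] = dst
    return out

seg = np.array([[0, 1, 3],
                [2, 3, 4]])
print(reduce_labels(seg))  # vegetation pixels (3) become building (4)
```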
Adversarial Learning-Based On-Line Anomaly Monitoring for Assured Autonomy
Patel, Naman, Saridena, Apoorva Nandini, Choromanska, Anna, Krishnamurthy, Prashanth, Khorrami, Farshad
The paper proposes an on-line monitoring framework for continuous real-time safety/security in learning-based control systems, with specific application to an unmanned ground vehicle. We monitor the validity of the mappings from sensor inputs to actuator commands (controller-focused anomaly detection, CFAM) and from actuator commands to sensor inputs (system-focused anomaly detection, SFAM). CFAM is an image-conditioned energy-based generative adversarial network (EBGAN) in which the energy-based discriminator distinguishes between proper and anomalous actuator commands. SFAM is based on an action-conditioned video prediction framework that detects anomalies between the predicted and observed temporal evolution of sensor data. We demonstrate the effectiveness of the approach on our autonomous ground vehicle in indoor environments and on the Udacity dataset for outdoor environments.
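A minimal sketch of the CFAM idea, with the trained EBGAN discriminator replaced by a plain reconstruction-error energy and made-up command vectors: a command whose energy under the model exceeds a calibrated threshold is flagged as anomalous.

```python
import numpy as np

def energy(command, reconstruction):
    """Discriminator-style energy: reconstruction error of the actuator
    command under an (assumed pre-trained) generative model."""
    return float(np.sum((command - reconstruction) ** 2))

def is_anomalous(command, reconstruction, threshold=0.05):
    # Flag commands whose energy exceeds a calibrated threshold.
    return energy(command, reconstruction) > threshold

normal = np.array([0.10, 0.02])
recon  = np.array([0.11, 0.01])   # the model reconstructs normal commands well
attack = np.array([0.90, -0.40])  # an off-manifold command reconstructs poorly
print(is_anomalous(normal, recon))  # False
print(is_anomalous(attack, recon))  # True
```

The real discriminator learns this energy surface from data; the threshold would be calibrated on held-out nominal runs.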
Modular Vehicle Control for Transferring Semantic Information to Unseen Weather Conditions using GANs
Wenzel, Patrick, Khan, Qadeer, Cremers, Daniel, Leal-Taixé, Laura
End-to-end supervised learning has shown promising results for self-driving cars, particularly under conditions for which it was trained. However, it may not necessarily perform well under unseen conditions. In this paper, we demonstrate how knowledge can be transferred from one weather condition for which semantic labels and steering commands are available to a completely new set of conditions for which we have no access to labeled data. The problem is addressed by dividing the task of vehicle control into independent perception and control modules, such that changing one does not affect the other. We train the control module only on the data for the available condition and keep it fixed even under new conditions. The perception module is then used as an interface between the new weather conditions and this control model. The perception module in turn is trained using semantic labels, which we assume are already available for the same weather condition on which the control model was trained. However, obtaining them for other conditions is a tedious and error-prone process. Therefore, we propose to use a generative adversarial network (GAN)-based model to retrieve the semantic information for the new conditions in an unsupervised manner. We introduce a master-servant architecture, where the master model (semantic labels available) trains the servant model (semantic labels not available). The servant model can then be used for steering the vehicle without retraining the control module.
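The master-servant training loop can be sketched with stand-in data: the master's predictions on GAN-translated images serve as pseudo-labels, and the servant is fit to reproduce them. The per-pixel threshold "models" below are toy stand-ins for the actual segmentation networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def master_labels(images):
    # Hypothetical master model: segments images from the weather
    # condition it was trained on (here, a per-pixel threshold).
    return (images > 0.5).astype(int)

# GAN-translated images standing in for the new weather condition.
new_condition = rng.random((4, 8))

# The master's predictions become pseudo-labels for the servant.
pseudo = master_labels(new_condition)

# Minimal "servant": pick the threshold that best reproduces the master.
candidates = [0.3, 0.4, 0.5, 0.6, 0.7]
best = max(candidates,
           key=lambda t: ((new_condition > t).astype(int) == pseudo).mean())
acc = ((new_condition > best).astype(int) == pseudo).mean()
print(acc)  # 1.0: the servant matches the master on the new condition
```

The point of the scheme is that once the servant matches the master's semantic output, the frozen control module downstream needs no retraining.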