Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning

arXiv.org Artificial Intelligence

Professional race-car drivers can execute extreme overtaking maneuvers. However, existing algorithms for autonomous overtaking either rely on simplified assumptions about the vehicle dynamics or try to solve expensive trajectory-optimization problems online. When the vehicle approaches its physical limits, existing model-based controllers struggle to handle the highly nonlinear dynamics and cannot leverage the large volume of data generated by simulation or real-world driving. To circumvent these limitations, we propose a new learning-based method to tackle the autonomous overtaking problem. We evaluate our approach in the popular car racing game Gran Turismo Sport, which is known for its detailed modeling of various cars and tracks. By leveraging curriculum learning, our approach converges faster and achieves higher performance than vanilla reinforcement learning. As a result, the trained controller outperforms the built-in model-based game AI and achieves overtaking performance comparable to that of an experienced human driver.
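
The curriculum idea can be made concrete with a small training sketch. The stage definitions, reward thresholds, and helper functions (make_overtaking_env, train_policy, evaluate) below are illustrative assumptions, not the authors' actual setup; the sketch only shows how a policy can be warm-started through progressively harder overtaking scenarios.

# Hypothetical sketch of a staged curriculum for an overtaking policy.
# All names, stage parameters, and thresholds are illustrative assumptions.
def train_with_curriculum(make_overtaking_env, train_policy, evaluate):
    stages = [
        {"opponents": 0, "target_reward": 0.8},   # stage 1: drive the track alone
        {"opponents": 1, "target_reward": 0.7},   # stage 2: overtake a single slow car
        {"opponents": 3, "target_reward": 0.6},   # stage 3: denser traffic
    ]
    policy = None
    for stage in stages:
        env = make_overtaking_env(num_opponents=stage["opponents"])
        # Warm-start each stage from the previous policy instead of training
        # from scratch; this reuse is where the faster convergence comes from.
        policy = train_policy(env, init_policy=policy)
        while evaluate(policy, env) < stage["target_reward"]:
            policy = train_policy(env, init_policy=policy)
    return policy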


Model-based versus Model-free Deep Reinforcement Learning for Autonomous Racing Cars

arXiv.org Artificial Intelligence

Despite the rich theoretical foundation of model-based deep reinforcement learning (RL) agents, their effectiveness in real-world robotics applications is less studied and understood. In this paper, we therefore investigate how such agents generalize to real-world autonomous-vehicle control tasks, where advanced model-free deep RL algorithms fail. In particular, we set up a series of time-lap tasks for an F1TENTH racing robot, equipped with high-dimensional LiDAR sensors, on a set of test tracks of gradually increasing complexity. In this continuous-control setting, we show that model-based agents capable of learning in imagination substantially outperform model-free agents with respect to performance, sample efficiency, successful task completion, and generalization. Moreover, we show that the generalization ability of model-based agents strongly depends on the choice of observation model. Finally, we provide extensive empirical evidence for the effectiveness of model-based agents provided with sufficiently long memory horizons in sim2real tasks.
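
A minimal sketch of what "learning in imagination" means in practice is given below, assuming a Dreamer-style latent world model with encode / predict / decode_reward components and a replay buffer of real sensor sequences. All component names and the update scheme are illustrative and do not reproduce the paper's implementation.

# Sketch of model-based training in imagination (Dreamer-style).
# world_model, actor, critic, replay_buffer are assumed user-supplied objects.
def train_in_imagination(world_model, actor, critic, replay_buffer,
                         horizon=15, updates=100):
    for _ in range(updates):
        batch = replay_buffer.sample()                # real LiDAR sequences
        world_model.update(batch)                     # fit the latent dynamics
        state = world_model.encode(batch.observations)
        # Roll the policy forward purely inside the learned model:
        # no extra environment (or real-car) interaction is needed here.
        trajectory = []
        for _ in range(horizon):
            action = actor(state)
            state = world_model.predict(state, action)
            trajectory.append((state, world_model.decode_reward(state)))
        # Actor and critic are trained on the imagined returns only.
        critic.update(trajectory)
        actor.update(trajectory, critic)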


Driving on the cutting edge of autonomous vehicle tech

#artificialintelligence

In October, a modified Dallara IL-15 Indy Lights race car programmed by MIT Driverless will hit the famed Indianapolis Motor Speedway at speeds of up to 120 miles per hour. The Indy Autonomous Challenge (IAC) is the world's first head-to-head, high-speed autonomous race. It offers MIT Driverless a chance to grab a piece of the $1.5 million purse while outmaneuvering fellow university innovators on what is arguably the most iconic racecourse. But the IAC has implications beyond the track. Stakeholders for the event include Sebastian Thrun, a former winner of the DARPA Grand Challenge for autonomous vehicles, and Reilly Brennan, a lecturer at Stanford University's Center for Automotive Research and a partner at Trucks Venture Capital.


Real-Time Optimal Trajectory Planning for Autonomous Vehicles and Lap Time Simulation Using Machine Learning

arXiv.org Artificial Intelligence

The widespread development of driverless vehicles has led to the formation of autonomous racing competitions, where the high speeds and fierce rivalry of motorsport provide a testbed to accelerate technology development. A particular challenge for an autonomous vehicle is that of identifying a target trajectory - or, in the case of a racing car, the ideal racing line. Many existing approaches to identifying the racing line either do not produce time-optimal solutions or are too computationally expensive, rendering them unsuitable for real-time application on on-board processing hardware. This paper describes a machine learning approach to generating an accurate prediction of the racing line in real time on desktop processing hardware. The proposed algorithm is a dense feed-forward neural network, trained on a dataset comprising racing lines for a large number of circuits calculated via a traditional optimal-control lap-time simulation. The network predicts the racing line with a mean absolute error of +/-0.27 m, an accuracy that outperforms a human driver and is comparable to other parts of the autonomous vehicle control system. The system generates predictions within 33 ms, making it over 9,000 times faster than traditional methods of finding the optimal racing line. The results suggest that a data-driven approach may therefore be preferable to traditional computational methods for the real-time generation of near-optimal racing lines.
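
As a rough illustration of the kind of model described above, the sketch below sets up a dense feed-forward regressor in PyTorch that maps a fixed number of track-boundary features to a lateral racing-line position per sample point, trained against lines from an offline optimal-control simulation. The layer sizes, input features, and N_POINTS constant are assumptions for illustration and are not taken from the paper.

# Sketch of a dense feed-forward racing-line regressor (assumed architecture).
import torch
import torch.nn as nn

N_POINTS = 300  # assumed number of track samples per circuit

model = nn.Sequential(
    nn.Linear(2 * N_POINTS, 512),   # e.g. left/right boundary widths per sample
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, N_POINTS),       # lateral position of the racing line
)

loss_fn = nn.L1Loss()  # mean absolute error, matching the reported +/-0.27 m metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(track_features, optimal_line):
    # track_features: (batch, 2*N_POINTS); optimal_line: (batch, N_POINTS),
    # where targets come from a traditional optimal-control lap-time simulation.
    optimizer.zero_grad()
    loss = loss_fn(model(track_features), optimal_line)
    loss.backward()
    optimizer.step()
    return loss.item()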


Learning from Simulation, Racing in Reality

arXiv.org Artificial Intelligence

We present a reinforcement learning-based solution for autonomous racing on a miniature race car platform. We show that a policy trained purely in simulation using a relatively simple vehicle model, including model randomization, can be successfully transferred to the real robotic setup. We achieve this by using a novel policy-output regularization approach and a lifted action space, which enables smooth actions while still allowing aggressive race-car driving. We show that this regularized policy outperforms the Soft Actor-Critic (SAC) baseline, both in simulation and on the real car, but is still outperformed by a state-of-the-art Model Predictive Control (MPC) method. Refining the policy with three hours of real-world interaction data allows it to achieve lap times similar to the MPC controller while reducing track-constraint violations by 50%.
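
One simple way to realize a lifted action space with output regularization is sketched below: the policy commands bounded action increments that are integrated before being applied to the car, and large increments are penalized. The gym-style environment interface, clipping bounds, and penalty weight are assumptions for illustration, not the paper's exact formulation.

# Sketch of a lifted action space with a smoothness penalty on policy outputs.
# Assumes a gym-style env with a continuous, symmetric action space.
import numpy as np

class LiftedActionWrapper:
    def __init__(self, env, max_delta=0.1):
        self.env = env
        self.max_delta = max_delta
        self.action = np.zeros(env.action_space.shape)

    def step(self, delta):
        # The policy commands a rate of change; integrating it yields smooth
        # steering/throttle signals while still allowing aggressive driving.
        delta = np.clip(delta, -self.max_delta, self.max_delta)
        self.action = np.clip(self.action + delta, -1.0, 1.0)
        obs, reward, done, info = self.env.step(self.action)
        # Penalize large increments: one simple form of output regularization.
        reward -= 0.1 * np.sum(delta ** 2)
        return obs, reward, done, info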


IDE-Net: Interactive Driving Event and Pattern Extraction from Human Data

arXiv.org Artificial Intelligence

Autonomous vehicles (AVs) need to share the road with multiple, heterogeneous road users in a variety of driving scenarios. It is overwhelming and unnecessary to carefully interact with all observed agents, so AVs need to determine whether and when to interact with each surrounding agent. To facilitate the design and testing of the prediction and planning modules of AVs, an in-depth understanding of interactive behavior is needed, with a proper representation, and events in behavior data need to be extracted and categorized automatically. Beyond answering whether and when, understanding what the essential patterns of interaction are is equally crucial. Thus, learning to extract interactive driving events and patterns from human data for tackling these whether-when-what tasks is of critical importance for AVs. There is, however, no clear definition and taxonomy of interactive behavior, and most existing works are based on either manual labelling or hand-crafted rules and features. In this paper, we propose the Interactive Driving event and pattern Extraction Network (IDE-Net), a deep learning framework that automatically extracts interaction events and patterns directly from vehicle trajectories. In IDE-Net, we leverage multi-task learning and propose three auxiliary tasks to assist pattern extraction in an unsupervised fashion. We also design a unique spatial-temporal block to encode the trajectory data. Experimental results on the INTERACTION dataset verify the effectiveness of these designs in terms of better generalizability and effective pattern extraction. We find three interpretable interaction patterns, offering insights for driver-behavior representation, modeling, and comprehension. Both objective and subjective evaluation metrics are adopted in our analysis of the learned patterns.
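
To make the idea of a spatial-temporal trajectory block concrete, the sketch below encodes each vehicle's trajectory with a temporal 1-D convolution and then pools across vehicles before feeding a pattern head and one auxiliary head. The dimensions, pooling choices, and head definitions are illustrative assumptions in the spirit of IDE-Net, not the authors' architecture.

# Illustrative spatial-temporal encoder for vehicle trajectories (assumed design).
import torch
import torch.nn as nn

class SpatialTemporalBlock(nn.Module):
    def __init__(self, in_dim=2, hidden=64, num_patterns=3):
        super().__init__()
        # Temporal encoding: convolve each vehicle's (x, y) sequence over time.
        self.temporal = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Heads for the main pattern-extraction task and one auxiliary task.
        self.pattern_head = nn.Linear(hidden, num_patterns)
        self.aux_head = nn.Linear(hidden, in_dim)  # e.g. next-position prediction

    def forward(self, trajectories):
        # trajectories: (batch, vehicles, time, 2)
        b, v, t, d = trajectories.shape
        x = trajectories.view(b * v, t, d).transpose(1, 2)    # (b*v, 2, t)
        feats = self.temporal(x).mean(dim=2).view(b, v, -1)   # pool over time
        scene = feats.max(dim=1).values                       # pool over vehicles
        return self.pattern_head(scene), self.aux_head(scene)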


Autonomous Race Car Slams Right into Wall Seconds after Starting Test Lap

#artificialintelligence

Roborace team SIT Acronis Autonomous suffered a "computer says no" moment on Thursday when its race car drove straight into a wall, mere seconds after it had started driving. If you're familiar with the Little Britain TV show, you'll understand the meaning of "computer says no," and it couldn't be more true for this moment. Luckily, no one was hurt. But you live and you learn, and this is one of the ways people working in robotics learn how to improve their systems.


Watch a self-driving Roborace car drive directly into a wall

Engadget

Robots still have some trouble handling the basics when put to the test, apparently. Roborace team SIT Acronis Autonomous suffered an embarrassment in round one of the Season Beta 1.1 race after its self-driving car abruptly drove directly into a wall. It's not certain what led to the mishap, but track conditions clearly weren't at fault -- the car had been rounding a gentle curve and wasn't racing against others at the same time. It wasn't the only car to suffer a problem, either. Autonomous Racing Graz's vehicle had positioning issues that got it "lost" on the track and cut its race short.


Self-driving cars will hit the Indianapolis Motor Speedway in a landmark A.I. race

#artificialintelligence

Next year, a squad of souped-up Dallara race cars will reach speeds of up to 200 miles per hour as they zoom around the legendary Indianapolis Motor Speedway to discover whether a computer could be the next Mario Andretti. The planned Indy Autonomous Challenge--taking place in October 2021 in Indianapolis--is intended for 31 university computer science and engineering teams to push the limits of current self-driving car technology. There will be no human racers sitting inside the cramped cockpits of the Dallara IL-15 race cars. Instead, onboard computer systems will take their place, outfitted with deep-learning software enabling the vehicles to drive themselves. In order to win, a team's autonomous car must be able to complete 20 laps--which equates to a little less than 50 miles in distance--and cross the finish line first in 25 minutes or less.


Autonomous Vehicles to Race at Indianapolis Motor Speedway

WSJ.com: WSJD - Technology

At stake is a $1.5 million cash prize, but organizers and participants say that the real goal of the competition is to catapult autonomous vehicle technology forward.