Follow the Soldiers with Optimized Single-Shot Multibox Detection and Reinforcement Learning

Hossain, Jumman, Momtaz, Maliha

arXiv.org Artificial Intelligence

Nowadays, autonomous cars are gaining traction due to their numerous potential applications on battlefields and in resolving a variety of other real-world challenges. The main goal of our project is to build an autonomous system using DeepRacer that follows a specific person (for our project, a soldier) as they move in any direction. The two main components used to accomplish this are an optimized Single-Shot Multibox Detection (SSD) object detection model and a Reinforcement Learning (RL) model. We accomplished the task using SSD Lite instead of SSD and then compared the results among SSD, SSD with a Neural Compute Stick (NCS), and SSD Lite. Experimental results show that SSD Lite gives the best performance among these three techniques, with a considerable boost in inference speed (~2-3 times) without compromising accuracy.
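As a sketch of how the detector and the controller might connect, the following hypothetical helper (the function name and the proportional gain are ours, not the paper's) turns the bounding box that SSD Lite reports for the tracked person into a steering command:

```python
def steering_from_bbox(bbox, frame_width, max_steer_deg=30.0):
    """Map a detected person's bounding box to a steering angle.

    bbox is (x_min, y_min, x_max, y_max) in pixels; positive angles
    steer right. The proportional-control mapping is illustrative,
    not taken from the paper.
    """
    x_center = (bbox[0] + bbox[2]) / 2.0
    # Normalized horizontal offset of the target from the image center,
    # in [-1, 1]: negative means the person is to the left.
    offset = (x_center - frame_width / 2.0) / (frame_width / 2.0)
    return max_steer_deg * offset

# A person detected on the right half of a 640-px-wide frame
# yields a right turn.
print(steering_from_bbox((400, 100, 560, 400), 640))  # prints 15.0
```

In practice the detector's per-frame latency dominates such a loop, which is why the inference-speed comparison among SSD variants matters.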


3 ways to get into reinforcement learning

#artificialintelligence

When I was in graduate school in the 1990s, one of my favorite classes was neural networks. Back then, we didn't have access to TensorFlow, PyTorch, or Keras; we programmed neurons, neural networks, and learning algorithms by hand with the formulas from textbooks. We didn't have access to cloud computing, and we coded sequential experiments that often ran overnight. There weren't platforms like Alteryx, Dataiku, SageMaker, or SAS to enable a machine learning proof of concept or manage the end-to-end MLOps lifecycle. I was most interested in reinforcement learning algorithms, and I recall writing hundreds of reward functions to stabilize an inverted pendulum.
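A reward function for that classic task can be only a few lines. This sketch, assuming a CartPole-style state with the usual angle and position limits, is one of the many shaping variants one ends up writing; the penalty weights are illustrative:

```python
import math

def pendulum_reward(theta, theta_dot, x,
                    max_theta=math.radians(12), max_x=2.4):
    """One possible shaped reward for balancing an inverted pendulum.

    theta: pole angle from vertical (radians); theta_dot: angular
    velocity; x: cart position. The thresholds mirror the classic
    CartPole limits; the shaping terms are arbitrary choices.
    """
    if abs(theta) > max_theta or abs(x) > max_x:
        return 0.0  # pole fell or cart left the track: no reward
    # Reward being upright, with a small penalty for still swinging.
    return 1.0 - abs(theta) / max_theta - 0.1 * abs(theta_dot)
```

Tuning exactly these kinds of weights, run after run, is what "writing hundreds of reward functions" looks like in practice.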


AWS DeepRacer League 2021 Update #11 End of April Special – AWS DeepRacer Community Blog

#artificialintelligence

Time to review the final April results. Who has made it into the Finale? Who will be racing in the Pro Division? AWS DeepRacer is a 1/18th scale autonomous race car but also much more. It is a complete program that has helped thousands of employees in numerous organizations begin their educational journey into machine learning through fun and rivalry.


Amazon releases DeepRacer software in open source

#artificialintelligence

In November 2018, Amazon launched AWS DeepRacer, a car about the size of a shoebox that runs on AI models trained in a virtual environment with reinforcement learning techniques. DeepRacer has expanded since then, with a women's league and new miniature race cars. Starting today, Amazon is making the DeepRacer device software available in open source. The pandemic has boosted automation and robotics in the enterprise. The global market for robots is expected to grow at a compound annual growth rate of around 26% to reach just under $210 billion by 2025, according to Statista.


AWS DeepRacer League 2021 Update #1 – AWS DeepRacer Community Blog

#artificialintelligence

The first races of 2021 have begun with a wealth of new features. On behalf of the AWS Machine Learning Community, I'm pleased to present the latest report from the race tracks! AWS DeepRacer is a 1/18th scale autonomous race car but also much more. It is a complete program that has helped thousands of employees in numerous organizations begin their educational journey into machine learning through fun and rivalry. Visit the AWS DeepRacer page to learn more about how it can help you and your organization begin and progress on the journey toward machine learning.


Optimizing the cost of training AWS DeepRacer reinforcement learning models

#artificialintelligence

AWS DeepRacer is a cloud-based 3D racing simulator, an autonomous 1/18th scale race car driven by reinforcement learning, and a global racing league. Reinforcement learning (RL), an advanced machine learning (ML) technique, enables models to learn complex behaviors without labeled training data and make short-term decisions while optimizing for longer-term goals. But as we humans can attest, learning something well takes time--and time is money. You can build and train a simple "all-wheels-on-track" model in the AWS DeepRacer console in just a couple of hours. However, if you're building complex models involving multiple parameters, a reward function using trigonometry, or generally diving deep into RL, there are steps you can take to optimize the cost of training.
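The simple "all-wheels-on-track" model mentioned above starts from a reward function in the interface the DeepRacer console expects: a Python function that receives a `params` dict from the simulator and returns a float. The parameter keys below come from that interface; the tier thresholds are illustrative:

```python
def reward_function(params):
    """A minimal "all wheels on track" reward for the AWS DeepRacer
    console. params is supplied by the simulator on every step."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track
    # Prefer staying close to the center line.
    distance_from_center = params["distance_from_center"]
    track_width = params["track_width"]
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    return 0.1
```

A function this simple converges in a couple of hours, as the article notes; cost optimization becomes relevant once the reward starts depending on heading, waypoints, and other trigonometric terms that slow convergence.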


AWS DeepRacer Update – New Features & New Racing Opportunities – Amazon Web Services

#artificialintelligence

I first wrote about AWS DeepRacer at this time last year, and described it as an opportunity for you to get some hands-on experience with Reinforcement Learning (RL). Along with the rest of the AWS team, I believe that you should always be improving your existing skills and building new ones. We launched the AWS DeepRacer car and the AWS DeepRacer League so that you could have the opportunity to get experience and new skills in a fun, competitive environment. In less than a year, tens of thousands of developers have participated in hands-on and virtual races located all over the world. Their enthusiasm and energy have been inspiring, as has been the creativity.


Multi-Vehicle Mixed-Reality Reinforcement Learning for Autonomous Multi-Lane Driving

Mitchell, Rupert, Fletcher, Jenny, Panerati, Jacopo, Prorok, Amanda

arXiv.org Artificial Intelligence

Autonomous driving promises to transform road transport. Multi-vehicle and multi-lane scenarios, however, present unique challenges due to constrained navigation and unpredictable vehicle interactions. Learning-based methods, such as deep reinforcement learning, are emerging as a promising approach to automatically design intelligent driving policies that can cope with these challenges. Yet, the process of safely learning multi-vehicle driving behaviours is hard: while collisions (and their near-avoidance) are essential to the learning process, directly executing immature policies on autonomous vehicles raises considerable safety concerns. In this article, we present a safe and efficient framework that enables the learning of driving policies for autonomous vehicles operating in a shared workspace, where the absence of collisions cannot be guaranteed. Key to our learning procedure is a sim2real approach that uses real-world online policy adaptation in a mixed-reality setup, where other vehicles and static obstacles exist in the virtual domain. This allows us to perform safe learning by simulating (and learning from) collisions between the learning agent(s) and other objects in virtual reality. Our results demonstrate that, after only a few runs in mixed-reality, collisions are significantly reduced.


DeepRacer: Educational Autonomous Racing Platform for Experimentation with Sim2Real Reinforcement Learning

Balaji, Bharathan, Mallya, Sunil, Genc, Sahika, Gupta, Saurabh, Dirac, Leo, Khare, Vineet, Roy, Gourav, Sun, Tao, Tao, Yunzhe, Townsend, Brian, Calleja, Eddie, Muralidhara, Sunil, Karuppasamy, Dhanasekar

arXiv.org Artificial Intelligence

DeepRacer is a platform for end-to-end experimentation with RL and can be used to systematically investigate the key challenges in developing intelligent control systems. Using the platform, we demonstrate how a 1/18th scale car can learn to drive autonomously using RL with a monocular camera. It is trained in simulation with no additional tuning in the physical world and demonstrates: 1) formulation and solution of a robust reinforcement learning algorithm, 2) narrowing the reality gap through joint perception and dynamics, 3) distributed on-demand compute architecture for training optimal policies, and 4) a robust evaluation method to identify when to stop training. It is the first successful large-scale deployment of deep reinforcement learning on a robotic control agent that uses only raw camera images as observations and a model-free learning method to perform robust path planning. Due to high sample complexity and safety requirements, it is common to train the RL agent in simulation [1], [5], [17]. To reduce training time and encourage exploration, the agent is usually trained with distributed rollouts [18], [19], [20], [21]. For a successful transfer to the real world, researchers use calibration [2], [22], domain randomization [23], [24], [25], [12], fine-tuning with real-world data [9], and learning features from a combination of simulated and real data [26], [27]. To experiment with robotic reinforcement learning, one needs expertise in many areas, access to a physical robot, an accurate robot model for simulation, a distributed training mechanism, and the ability to customize the training procedure, such as modifying the neural network and the loss function or introducing noise. For the uninitiated, dealing with this complexity is daunting and dissuades adoption. As a result, much of the prior work is limited to a single robot [1], [23], [28] or a few robots [16]. We reduce the learning curve and alleviate development effort with DeepRacer.
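As one simple stand-in for a stopping criterion like the paper's item 4 (this is not their actual method), training could be halted when a moving average of evaluation rewards stops improving; the window size and tolerance here are arbitrary:

```python
def should_stop(eval_rewards, window=5, min_delta=0.01):
    """Decide whether training has plateaued.

    eval_rewards: chronological list of per-evaluation mean rewards.
    Stops when the mean of the last `window` evaluations improves on
    the previous window by less than `min_delta`.
    """
    if len(eval_rewards) < 2 * window:
        return False  # not enough history to compare two windows
    recent = sum(eval_rewards[-window:]) / window
    previous = sum(eval_rewards[-2 * window:-window]) / window
    return recent - previous < min_delta
```

Comparing windowed means rather than single evaluations matters because RL evaluation returns are noisy; a single good lap says little about whether the policy is still improving.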