Plancher, Brian
RobotCore: An Open Architecture for Hardware Acceleration in ROS 2
Mayoral-Vilches, Víctor, Neuman, Sabrina M., Plancher, Brian, Reddi, Vijay Janapa
Hardware acceleration can revolutionize robotics, enabling new applications by speeding up robot response times while remaining power-efficient. However, the diversity of acceleration options makes it difficult for roboticists to easily deploy accelerated systems without expertise in each specific hardware platform. In this work, we address this challenge with RobotCore, an architecture to integrate hardware acceleration in the widely-used ROS 2 robotics software framework. This architecture is target-agnostic (supports edge, workstation, data center, or cloud targets) and accelerator-agnostic (supports both FPGAs and GPUs). It builds on top of the common ROS 2 build system and tools and is easily portable across different research and commercial solutions through a new firmware layer. We also leverage the Linux Trace Toolkit: next generation (LTTng) for low-overhead real-time tracing and benchmarking. To demonstrate the acceleration enabled by this architecture, we use it to deploy a ROS 2 perception computational graph on a CPU and FPGA. We employ our integrated tracing and benchmarking to analyze bottlenecks, uncovering insights that guide us to improve FPGA communication efficiency. In particular, we design an intra-FPGA ROS 2 node communication queue to enable faster data flows, and use it in conjunction with FPGA-accelerated nodes to achieve a 24.42% speedup over a CPU.
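The tracing-driven bottleneck analysis described above can be illustrated with a toy Python analog. The paper uses LTTng integrated into ROS 2; the `StageTracer` class and stage names below are hypothetical illustrations of the idea (time each stage of a pipeline, then identify the slowest), not the paper's API:

```python
import time
from collections import defaultdict


class StageTracer:
    """Toy per-stage latency tracer (a stand-in for real LTTng tracepoints).

    Wrap each pipeline stage with trace(); timing samples accumulate per
    stage name so the slowest stage can be identified afterwards.
    """

    def __init__(self):
        self.samples = defaultdict(list)

    def trace(self, name):
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.perf_counter()
                out = fn(*args, **kwargs)
                self.samples[name].append(time.perf_counter() - t0)
                return out
            return inner
        return wrap

    def slowest_stage(self):
        # Rank stages by mean latency across recorded samples.
        return max(self.samples,
                   key=lambda n: sum(self.samples[n]) / len(self.samples[n]))


tracer = StageTracer()

# Two hypothetical pipeline stages; "inference" is artificially slow.
@tracer.trace("preprocess")
def preprocess(x):
    return x + 1

@tracer.trace("inference")
def inference(x):
    time.sleep(0.01)  # simulated compute bottleneck
    return x * 2

for i in range(5):
    inference(preprocess(i))
```

In the paper, this kind of per-node latency breakdown is what revealed the communication overhead that motivated the intra-FPGA queue.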
Just Round: Quantized Observation Spaces Enable Memory Efficient Learning of Dynamic Locomotion
Grossman, Lev, Plancher, Brian
Deep reinforcement learning (DRL) continues to see increased attention from the robotics community due to its ability to learn complex behaviors in both simulated and real environments. These methods have been successfully applied to a host of robotic tasks including dexterous manipulation [1], quadrupedal locomotion [2], and high-speed drone racing [3]. Despite these successes, DRL remains largely sample inefficient, depending on enormous amounts of training data to learn. As much of this data is kept in replay buffers during training, DRL is extremely memory intensive. Importantly, unlike simply reducing the number of observations stored in the buffer, which decreases the memory footprint at the cost of reduced learning performance, our quantization scheme is able to reduce memory usage without impacting the training performance. We present experiments across four popular simulated robotic locomotion domains, using two of the most popular DRL algorithms, the on-policy Proximal Policy Optimization (PPO) and the off-policy Soft Actor-Critic (SAC), and find that our approach can reduce the memory footprint by as much as 4.2x without impacting training performance.
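The core idea, uniformly quantizing replay-buffer observations to fewer bits, can be sketched as follows. This is a minimal illustration assuming known per-observation bounds; the function names and 8-bit choice are illustrative, not the paper's implementation:

```python
import numpy as np


def quantize(obs, lo, hi):
    """Uniformly quantize float32 observations to uint8 (4x smaller).

    lo/hi are the assumed observation bounds; values are mapped to
    [0, 1], scaled to [0, 255], and rounded to the nearest level.
    """
    scaled = (obs - lo) / (hi - lo)
    return np.round(scaled * 255.0).astype(np.uint8)


def dequantize(q, lo, hi):
    """Recover an approximate float32 observation from its uint8 code."""
    return (q.astype(np.float32) / 255.0) * (hi - lo) + lo


# Round-trip a small observation vector: storage shrinks 4x
# (float32 -> uint8) while the reconstruction error stays within
# one quantization step, (hi - lo) / 255.
obs = np.array([0.3, -1.2, 2.5], dtype=np.float32)
lo, hi = -2.0, 3.0
q = quantize(obs, lo, hi)
recovered = dequantize(q, lo, hi)
```

Storing `q` instead of `obs` in the replay buffer is what shrinks the memory footprint; observations are dequantized on the fly when sampled for a gradient update.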
GRiD: GPU-Accelerated Rigid Body Dynamics with Analytical Gradients
Plancher, Brian, Neuman, Sabrina M., Ghosal, Radhika, Kuindersma, Scott, Reddi, Vijay Janapa
We introduce GRiD: a GPU-accelerated library for computing rigid body dynamics with analytical gradients. GRiD was designed to accelerate the nonlinear trajectory optimization subproblem used in state-of-the-art robotic planning, control, and machine learning, which requires tens to hundreds of naturally parallel computations of rigid body dynamics and their gradients at each iteration. GRiD leverages URDF parsing and code generation to deliver optimized dynamics kernels that not only expose GPU-friendly computational patterns, but also take advantage of both fine-grained parallelism within each computation and coarse-grained parallelism between computations. Through this approach, when performing multiple computations of rigid body dynamics algorithms, GRiD provides as much as a 7.2x speedup over a state-of-the-art, multi-threaded CPU implementation, and maintains as much as a 2.5x speedup when accounting for I/O overhead. We release GRiD as an open-source library for use by the wider robotics community.
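The coarse-grained parallelism GRiD exploits, many independent dynamics-and-gradient evaluations at once, can be illustrated with a batched toy 1-DoF pendulum in NumPy. GRiD itself generates optimized CUDA kernels for full rigid-body algorithms from a URDF; this is only a conceptual analog with hypothetical function names:

```python
import numpy as np


def pendulum_dynamics(theta, g=9.81, l=1.0):
    """Batched forward dynamics of a toy 1-DoF pendulum.

    theta is an array of N joint angles (one per trajectory knot point);
    returns the N joint accelerations theta_ddot = -(g / l) * sin(theta).
    """
    return -(g / l) * np.sin(theta)


def pendulum_dynamics_grad(theta, g=9.81, l=1.0):
    """Batched analytical gradient d(theta_ddot)/d(theta).

    Computed in closed form rather than by finite differences, mirroring
    how GRiD provides analytical dynamics gradients.
    """
    return -(g / l) * np.cos(theta)


# Evaluate dynamics and gradients for 256 states in one batched call --
# the "tens to hundreds of naturally parallel computations" pattern that
# trajectory optimization produces at every iteration.
theta = np.linspace(-np.pi, np.pi, 256)
accel = pendulum_dynamics(theta)
grad = pendulum_dynamics_grad(theta)
```

On a GPU, each of the 256 evaluations maps to its own thread block (coarse-grained parallelism), while the work inside each rigid-body algorithm is further parallelized across threads (fine-grained parallelism).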