Backes, Paul
Icy Moon Surface Simulation and Stereo Depth Estimation for Sampling Autonomy
Bhaskara, Ramchander, Georgakis, Georgios, Nash, Jeremy, Cameron, Marissa, Bowkett, Joseph, Ansar, Adnan, Majji, Manoranjan, Backes, Paul
Sampling autonomy for icy moon lander missions requires an understanding of the topographic and photometric properties of the sampling terrain. The unavailability of high-resolution visual datasets (either bird's-eye view or point-of-view from a lander) is an obstacle to the selection, verification, or development of perception systems. We attempt to alleviate this problem by: 1) proposing the Graphical Utility for Icy moon Surface Simulations (GUISS) framework for versatile stereo dataset generation that spans the spectrum of bulk photometric properties, and 2) focusing on a stereo-based visual perception system and evaluating both traditional and deep learning-based algorithms for depth estimation from stereo matching. The surface reflectance properties of icy moon terrains (Enceladus and Europa) are inferred from multispectral datasets of previous missions. With procedural terrain generation and physically valid illumination sources, our framework can fit a wide range of hypotheses with respect to visual representations of icy moon terrains. This is followed by a study of the performance of stereo matching algorithms under different visual hypotheses. Finally, we emphasize the outstanding challenges to be addressed when simulating perception data assets for icy moons such as Enceladus and Europa. Our code can be found here: https://github.com/nasa-jpl/guiss.
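For readers unfamiliar with stereo-based depth estimation, the sketch below illustrates the classical baseline evaluated in studies of this kind: semi-global block matching followed by the pinhole disparity-to-depth conversion. It uses OpenCV's StereoSGBM with hypothetical focal length and baseline values and placeholder file names; it is an illustrative sketch, not code drawn from the GUISS repository.

# Minimal sketch of classical stereo depth estimation with OpenCV's
# semi-global block matching (SGBM). Camera parameters and file names are
# hypothetical placeholders, not values from the GUISS datasets.
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Estimate a dense depth map (meters) from a rectified grayscale stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,          # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,                # smoothness penalty, small disparity changes
        P2=32 * 5 * 5,               # smoothness penalty, large disparity changes
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    # Standard pinhole relation: depth = focal_length * baseline / disparity.
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth, valid

if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    depth, valid = stereo_depth(left, right)
    print("median depth (m):", np.median(depth[valid]))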
Granular Gym: High Performance Simulation for Robotic Tasks with Granular Materials
Millard, David, Pastor, Daniel, Bowkett, Joseph, Backes, Paul, Sukhatme, Gaurav S.
Granular materials are of critical interest to many robotic tasks in planetary science, construction, and manufacturing. However, the dynamics of granular materials are complex and often very expensive to simulate computationally. We propose a set of methodologies and a system for the fast simulation of granular materials on Graphics Processing Units (GPUs), and show that this simulation is fast enough for basic training with Reinforcement Learning algorithms, which currently require many dynamics samples to achieve acceptable performance. Our method models granular material dynamics using implicit timestepping methods for multibody rigid contacts, algorithmic techniques for efficient parallel collision detection between pairs of particles and between particles and arbitrarily shaped rigid bodies, and programming techniques for minimizing warp divergence on Single-Instruction, Multiple-Thread (SIMT) chip architectures. We showcase our simulation system on several environments targeted toward robotic tasks, and release our simulator as an open-source tool.
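As an illustration of the broad-phase collision detection idea (binning particles into a uniform grid so that only particles in the same or neighboring cells are tested for contact), the following is a minimal CPU-side NumPy sketch. It is not the GPU (SIMT) implementation described in the paper, and all names and parameters are placeholders chosen for illustration.

# Minimal CPU sketch of broad-phase particle collision detection via a
# uniform spatial grid. Illustrative NumPy version only; not the GPU (SIMT)
# implementation described in the paper.
import numpy as np
from collections import defaultdict
from itertools import product

def find_contacts(positions, radius):
    """Return index pairs (i, j) of particles closer than 2 * radius."""
    cell_size = 2.0 * radius
    grid = defaultdict(list)
    cells = np.floor(positions / cell_size).astype(np.int64)
    for idx, cell in enumerate(map(tuple, cells)):
        grid[cell].append(idx)

    contacts = []
    offsets = list(product((-1, 0, 1), repeat=positions.shape[1]))
    for idx, cell in enumerate(map(tuple, cells)):
        for off in offsets:
            neighbor_cell = tuple(c + o for c, o in zip(cell, off))
            for jdx in grid.get(neighbor_cell, ()):
                if jdx <= idx:
                    continue  # report each pair only once
                if np.linalg.norm(positions[idx] - positions[jdx]) < 2.0 * radius:
                    contacts.append((idx, jdx))
    return contacts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 1.0, size=(500, 3))
    print(len(find_contacts(pts, radius=0.02)), "contacts")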
An Integrated Planning and Scheduling Prototype for Automated Mars Rover Command Generation
Sherwood, Robert (Jet Propulsion Laboratory, California Institute of Technology) | Mishkin, Andrew (Jet Propulsion Laboratory, California Institute of Technology) | Chien, Steve (Jet Propulsion Laboratory, California Institute of Technology) | Estlin, Tara (Jet Propulsion Laboratory, California Institute of Technology) | Backes, Paul (Jet Propulsion Laboratory, California Institute of Technology) | Cooper, Brian (Jet Propulsion Laboratory, California Institute of Technology) | Rabideau, Gregg (Jet Propulsion Laboratory, California Institute of Technology) | Engelhardt, Barbara (Jet Propulsion Laboratory, California Institute of Technology)
With the arrival of the Pathfinder spacecraft in 1997, NASA began a series of missions to explore the surface of Mars with robotic vehicles. The Pathfinder mission included Sojourner, a six-wheeled rover with cameras and a spectrometer for determining the composition of rocks. The mission was a success in terms of delivering a rover to the surface, but it illustrated the need for greater autonomy on future surface missions. The operations process for Sojourner involved scientists submitting to rover operations engineers images taken by the rover or its companion lander, with interesting rocks circled on the images. The rover engineers would then manually construct a one-day sequence of events and commands for the rover to collect data on the rocks of interest. The commands would be uplinked to the rover for execution the following day. This labor-intensive process was not sustainable on a daily basis, even for the simple Sojourner rover over its two-month mission. Future rovers will travel longer distances, visit multiple sites each day, carry several instruments, and have mission durations of a year or more. Manual planning with so many operational constraints and goals will be unmanageable. This paper discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from high-level goals using AI-based planning software.
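As a purely hypothetical illustration of the goal-to-sequence idea (not the JPL planning software discussed in the paper), the following toy planner greedily expands prioritized science goals into a time-ordered command list while checking simple duration and energy budgets; all goal names and budget values are invented for the example.

# Hypothetical sketch only: a toy greedy planner that expands high-level
# science goals into a time-ordered command sequence while checking simple
# duration and energy constraints. It is not the AI planning system used
# in the prototype described above.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int
    duration_min: float   # minutes of activity
    energy_wh: float      # energy consumed

def plan_sol(goals, sol_minutes=240.0, energy_budget_wh=150.0):
    """Return (sequence, deferred) for one sol, taking highest-priority goals first."""
    sequence, deferred = [], []
    t, e = 0.0, 0.0
    for g in sorted(goals, key=lambda g: -g.priority):
        if t + g.duration_min <= sol_minutes and e + g.energy_wh <= energy_budget_wh:
            sequence.append((t, f"EXECUTE {g.name}"))
            t += g.duration_min
            e += g.energy_wh
        else:
            deferred.append(g.name)  # violates a budget; defer to a later sol
    return sequence, deferred

if __name__ == "__main__":
    goals = [
        Goal("SPECTROMETER_ROCK_A", priority=3, duration_min=90, energy_wh=60),
        Goal("IMAGE_PANORAMA", priority=2, duration_min=45, energy_wh=30),
        Goal("DRIVE_TO_ROCK_B", priority=1, duration_min=120, energy_wh=80),
    ]
    seq, dropped = plan_sol(goals)
    for start, cmd in seq:
        print(f"t+{start:5.1f} min  {cmd}")
    print("deferred:", dropped)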