DexDribbler: Learning Dexterous Soccer Manipulation via Dynamic Supervision

Hu, Yutong, Wen, Kehan, Yu, Fisher

arXiv.org Artificial Intelligence

Learning dexterous locomotion policies for legged robots is increasingly popular because such policies can handle diverse terrains and exhibit intelligent behaviors. However, the joint problem of manipulating moving objects while locomoting on legs, such as playing soccer, has received scant attention in the learning community, although it comes naturally to humans and smart animals. A key challenge in this multitask problem is inferring the objectives of locomotion from the states and targets of the manipulated object. The implicit relation between object states and robot locomotion can be hard to capture directly from training experience. We propose adding a feedback control block that accurately computes the necessary body-level movement and using its outputs explicitly as dynamic joint-level locomotion supervision. We further employ an improved ball dynamics model, an extended context-aided estimator, and a comprehensive ball observer to facilitate transferring the policy learned in simulation to the real world. We observe that our learning scheme not only makes the policy network converge faster but also enables soccer robots to perform sophisticated maneuvers, like sharp cuts and turns on flat surfaces, that previous methods lacked. Video and code are available at https://github.com/SysCV/soccer-player
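The core idea above, computing a body-level movement command with a feedback controller and using it as explicit supervision for the learned policy, can be sketched as follows. This is a toy illustration, not the authors' implementation: the proportional controller, gain, speed limit, and function names are all assumptions for exposition.

```python
import numpy as np

def body_velocity_target(ball_pos, ball_target, kp=1.5, v_max=2.0):
    """P-controller: body-level velocity command from the ball-tracking error.
    kp and v_max are illustrative values, not the paper's."""
    err = np.asarray(ball_target, dtype=float) - np.asarray(ball_pos, dtype=float)
    v = kp * err
    speed = np.linalg.norm(v)
    if speed > v_max:          # respect the robot's body-speed limit
        v *= v_max / speed
    return v

def supervision_loss(policy_body_vel, ball_pos, ball_target):
    """Auxiliary loss: penalize the policy's deviation from the feedback command,
    added alongside the usual RL objective."""
    target = body_velocity_target(ball_pos, ball_target)
    return float(np.mean((np.asarray(policy_body_vel, dtype=float) - target) ** 2))

# Ball target 4 m away along x: the raw command (6 m/s) is clipped to v_max.
v_cmd = body_velocity_target([0.0, 0.0], [4.0, 0.0])
```

The point of such a term is that the controller makes the implicit object-state-to-locomotion relation explicit, so the policy does not have to discover it from reward alone.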


Now playing: DribbleBot

MIT Technology Review

"Today, most robots are wheeled. But imagine that there's a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need the machines to go over terrains that aren't flat, and wheeled robots can't traverse those landscapes," says Pulkit Agrawal, EECS professor and director of the Improbable AI Lab. Previous attempts to program soccer-playing robots have assumed flat, hard ground, and the bot wasn't "trying to run and manipulate the ball simultaneously," says Ji. On the hardware side, the robot has sensors that let it perceive the environment, actuators that let it apply forces, and a computer "brain" that converts sensor data into actions--all in one compact, autonomous package. "Our robot can go in the wild because it carries all its sensors, cameras, and [computing resources] on board," Margolis says. The team's next steps include teaching it new skills like handling slopes and stairs.


How MIT taught a quadruped to play soccer

#artificialintelligence

A research team at MIT's Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), taught a Unitree Go1 quadruped to dribble a soccer ball on various terrains. DribbleBot can maneuver soccer balls on landscapes like sand, gravel, mud, and snow, adapt to each terrain's varied impact on the ball's motion, and get up to recover the ball after falling. The team used simulation to teach the robot how to actuate its legs while dribbling, letting it acquire hard-to-script skills for responding to diverse terrains much faster than training in the real world would allow. After loading the robot and other assets into the simulator and setting the physical parameters, the team could simulate 4,000 copies of the quadruped in parallel in real time, collecting data 4,000 times faster than with a single robot.
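The data-collection speedup described above comes from stepping many simulated robots as one batched operation. A minimal sketch of that pattern, with a stand-in dynamics update rather than any real physics engine, and all class and parameter names invented for illustration:

```python
import numpy as np

class VectorizedSim:
    """Toy stand-in for N parallel simulated robots. One batched step()
    yields n_envs transitions per wall-clock step instead of one."""
    def __init__(self, n_envs=4000, obs_dim=8, seed=0):
        self.n_envs = n_envs
        self.rng = np.random.default_rng(seed)
        self.obs = self.rng.standard_normal((n_envs, obs_dim))

    def step(self, actions):
        # Placeholder dynamics: a real simulator would integrate contact
        # physics here; the batching structure is what matters.
        self.obs = self.obs + 0.01 * np.asarray(actions)
        rewards = -np.linalg.norm(self.obs, axis=1)  # toy reward
        return self.obs, rewards

sim = VectorizedSim(n_envs=4000, obs_dim=8)
obs, rew = sim.step(np.zeros((4000, 8)))
```

GPU-based simulators exploit exactly this structure, so 4,000 environments can run concurrently on one machine.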


A four-legged robotic system for playing soccer on various terrains

Robohub

Researchers created DribbleBot, a system for in-the-wild dribbling on diverse natural terrains, including sand, gravel, mud, and snow, using only onboard sensing and computing. Beyond these football feats, such robots may someday aid humans in search-and-rescue missions. Picture playing soccer with a robot: a four-legged machine hustling toward you, dribbling with determination. Researchers from MIT's Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a legged robotic system that can dribble a soccer ball under the same conditions as humans.


DribbleBot: Dynamic Legged Manipulation in the Wild

Ji, Yandong, Margolis, Gabriel B., Agrawal, Pulkit

arXiv.org Artificial Intelligence

DribbleBot (Dexterous Ball Manipulation with a Legged Robot) is a legged robotic system that can dribble a soccer ball under the same real-world conditions as humans (i.e., in-the-wild). We adopt the paradigm of training policies in simulation using reinforcement learning and transferring them into the real world. We overcome critical challenges of accounting for variable ball motion dynamics on different terrains and perceiving the ball using body-mounted cameras under the constraints of onboard computing. Our results provide evidence that current quadruped platforms are well-suited for studying dynamic whole-body control problems involving simultaneous locomotion and manipulation directly from sensory observations.
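A standard way to account for variable ball motion dynamics across terrains, as the abstract describes, is to randomize the relevant physical parameters per simulated episode so the learned policy generalizes to the real world. The sketch below shows the pattern only; the parameter names and ranges are illustrative assumptions, not values from the paper.

```python
import random

def sample_ball_dynamics(rng):
    """Per-episode randomization of ball/terrain parameters (ranges illustrative).
    A policy trained over these draws must work for any value in each range."""
    return {
        "rolling_friction": rng.uniform(0.01, 0.3),  # high on sand, low on pavement
        "restitution": rng.uniform(0.1, 0.9),        # how bouncy the ball is
        "ball_mass": rng.uniform(0.35, 0.50),        # kg; a size-5 ball is ~0.41-0.45
        "drag_coeff": rng.uniform(0.0, 0.5),         # crude rolling-resistance term
    }

rng = random.Random(0)
params = sample_ball_dynamics(rng)
```

At the start of each training episode, the sampled dictionary would be applied to the simulator's ball and terrain before rollouts begin.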