MIT researchers developed a picking robot that combines vision with radio frequency (RF) sensing to find and grasp objects, even when they're hidden from view. The technology could aid order fulfillment in e-commerce warehouses. In recent years, robots have gained artificial vision, touch, and even smell. "Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib.
In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says. The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects.
The system, called RF-Grasp, uses radio waves to locate items beyond the line of sight of a robot's cameras. It could help warehouse robots retrieve customer orders or tools that are occluded behind obstacles. Robots that rely on cameras alone typically need to explore the environment and search for a concealed item. Unlike visible light and infrared, RF (radio frequency) signals can traverse cardboard boxes, wooden walls, plastic covers, and colored glass to perceive objects fitted with RFID tags.
We present the design, implementation, and evaluation of RF-Grasp, a robotic system that can grasp fully-occluded objects in unknown and unstructured environments. Unlike prior systems that are constrained by the line-of-sight perception of vision and infrared sensors, RF-Grasp employs RF (Radio Frequency) perception to identify and locate target objects through occlusions, and perform efficient exploration and complex manipulation tasks in non-line-of-sight settings. RF-Grasp relies on an eye-in-hand camera and batteryless RFID tags attached to objects of interest. It introduces two main innovations: (1) an RF-visual servoing controller that uses the RFID's location to selectively explore the environment and plan an efficient trajectory toward an occluded target, and (2) an RF-visual deep reinforcement learning network that can learn and execute efficient, complex policies for decluttering and grasping. We implemented and evaluated an end-to-end physical prototype of RF-Grasp and a state-of-the-art baseline, and demonstrate that RF-Grasp improves success rate and efficiency by up to 40-50% in cluttered settings. We also demonstrate RF-Grasp in novel tasks such as mechanical search of fully-occluded objects behind obstacles, opening up new possibilities for robotic manipulation.
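The RF-visual servoing idea described above, using the RFID's estimated location to steer the eye-in-hand camera toward an occluded target, can be illustrated with a greedy trajectory sketch. Everything below (the function names, the fixed step size, and treating the tag estimate as a static 3D point) is a hypothetical simplification for illustration, not the controller from the paper:

```python
import math

def servo_step(cam_pos, rfid_pos, step=0.05):
    """Move the camera one fixed-size step toward the RFID-estimated target.

    Returns the new camera position and whether the target was reached.
    """
    delta = [t - c for c, t in zip(cam_pos, rfid_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist <= step:
        # Within one step of the tag estimate: snap to it.
        return list(rfid_pos), True
    # Otherwise move `step` meters along the unit vector toward the tag.
    return [c + step * d / dist for c, d in zip(cam_pos, delta)], False

def plan_trajectory(cam_pos, rfid_pos, step=0.05, max_iters=1000):
    """Iterate servo steps until the camera reaches the tag estimate."""
    path = [list(cam_pos)]
    for _ in range(max_iters):
        cam_pos, arrived = servo_step(cam_pos, rfid_pos, step)
        path.append(list(cam_pos))
        if arrived:
            break
    return path
```

In the real system the tag location is a coarse, noisy estimate and the controller also reasons about occlusions and camera viewpoint; this sketch only captures the core intuition that RF perception gives the planner a target to move toward before the object is ever visible.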
At some point in your life, you've probably used a combination of sight and touch to find something hidden beneath your couch cushions. For a while now, robotics researchers have tried to give their creations that same capability. Back in 2019, a team of scientists from the Massachusetts Institute of Technology (MIT) used a combination of tactile sensors and AI to allow a robot to identify objects by touch. A separate group of scientists from MIT has now built a machine that can find things it can't initially see. The aptly named RF-Grasp depends on a wrist-mounted camera and an RF reader to home in on and pick up an object.