In recent years, robots have gained artificial vision, touch, and even smell. "Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says. The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects.
MIT researchers developed a picking robot that combines vision with radio frequency (RF) sensing to find and grasp objects, even when they're hidden from view. The system uses penetrative radio waves to pinpoint items, and the technology could aid order fulfillment in e-commerce warehouses.
A busy commuter is ready to walk out the door, only to realize they've misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys. Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.
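The article doesn't detail RFusion's algorithm, but the core idea of localizing a tagged item from RF range measurements taken at several antenna positions (as the gripper moves) can be illustrated with a toy least-squares trilateration. Everything here — the antenna coordinates, the `localize` helper, and the synthesized exact distances — is a hypothetical sketch, not the researchers' actual pipeline:

```python
import numpy as np

# Hypothetical antenna positions as the gripper moves (meters).
antennas = np.array([
    [0.0, 0.0, 0.0],
    [3.0, 0.0, 0.0],
    [0.0, 3.0, 0.0],
    [0.0, 0.0, 2.0],
])

# RF-derived distance to the tagged item from each antenna position.
# We synthesize exact distances to a known ground-truth point so the
# example is self-contained; a real system would measure these.
truth = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(antennas - truth, axis=1)

def localize(positions, distances):
    """Least-squares trilateration: subtracting the first range equation
    from the others yields a linear system in the unknown position."""
    p1, d1 = positions[0], distances[0]
    A = 2.0 * (positions[1:] - p1)
    b = (np.sum(positions[1:] ** 2, axis=1) - np.sum(p1 ** 2)
         - (distances[1:] ** 2 - d1 ** 2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

est = localize(antennas, dists)  # recovers the item's 3-D position
```

With noise-free distances the estimate matches the ground-truth point exactly; in practice the camera's visual detections would be fused with this RF estimate to refine the target before grasping.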
Fifty years ago, the first industrial robot arm (called Unimate) assembled a simple breakfast of toast, coffee, and champagne. While it might have looked like a seamless feat, every movement and placement was coded with careful consideration. Even with today's more intelligent and adaptive robots, this task remains difficult for machines with rigid hands. They tend to work only in structured environments with predefined shapes and locations, and typically can't cope with uncertainties in placement or form. In recent years, though, roboticists have come to grips with this problem by making fingers out of soft, flexible materials like rubber.
In a preprint paper published this week on arXiv.org, Nvidia and Stanford University researchers propose a novel approach to transferring AI models trained in simulation to real-world autonomous machines. It uses segmentation as the interface between perception and control, leading to what the coauthors characterize as "high success" in workloads like robot grasping. Simulators have advantages over the real world when it comes to model training in that they're safe and almost infinitely scalable. But generalizing strategies learned in simulation to real-world machines -- whether autonomous cars, robots, or drones -- requires adjustment, because even the most accurate simulators can't account for every perturbation.
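The appeal of segmentation as an interface is that the control policy never sees raw pixels: if simulated and real images produce similar masks, the same policy transfers. A minimal sketch of that separation is below; the `perceive` and `plan_grasp` functions and the red-channel threshold are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def perceive(rgb_image):
    """Toy 'perception' stage: segment the target object.

    A real system would run a trained segmentation network here; for
    illustration we simply threshold the red channel.
    """
    return (rgb_image[..., 0] > 200).astype(np.uint8)

def plan_grasp(mask):
    """Toy 'control' stage: consumes ONLY the mask, never raw pixels.

    Returns the pixel centroid of the segmented object as a grasp
    target, or None if nothing was segmented.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

# Synthetic 64x64 image with a bright-red 10x10 "object" at rows 20-29,
# columns 40-49 -- a stand-in for either a simulated or a real frame.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[20:30, 40:50, 0] = 255
target = plan_grasp(perceive(img))  # grasp point in pixel coordinates
```

Because `plan_grasp` depends only on the mask, swapping the thresholding stub for a real segmentation network changes nothing downstream, which is the sim-to-real decoupling the paper exploits.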