Collaborating Authors

berenson


AI is poised to automate today's most mundane manual warehouse task

MIT Technology Review

After much trial and error, Jacobi's founders, including roboticist Ken Goldberg, say they've cracked it. Their software, built upon research from a paper they published in Science Robotics in 2020, is designed to work with the four leading makers of robotic palletizing arms. It uses deep learning to generate a "first draft" of how an arm might move an item onto the pallet. Then it uses more traditional robotics methods, like optimization, to check whether the movement can be done safely and without glitches. Jacobi aims to replace the legacy methods customers are currently using to train their bots.
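The two-stage pipeline described above can be sketched in a few lines. This is a hypothetical illustration of the general "propose with a learned model, then verify with classical methods" pattern, not Jacobi's actual software or API; all function and field names here are invented for the example.

```python
# Illustrative propose-then-verify sketch: a stand-in "deep learning" model
# drafts a placement motion, and a classical feasibility check accepts or
# rejects it before the arm executes anything.

def learned_first_draft(item, pallet_state):
    """Stand-in for a learned model that proposes a placement motion."""
    # For illustration, propose the lowest-numbered free slot on the pallet.
    return {"target": min(pallet_state["free_slots"]),
            "path": ["lift", "move", "place"]}

def optimization_check(motion, pallet_state):
    """Stand-in for a classical safety/feasibility check (e.g. collision-free)."""
    return motion["target"] in pallet_state["free_slots"]

def plan_placement(item, pallet_state):
    motion = learned_first_draft(item, pallet_state)
    if optimization_check(motion, pallet_state):
        return motion
    return None  # re-plan or fall back if the draft fails verification

state = {"free_slots": [2, 0, 1]}
plan = plan_placement("box", state)
```

The appeal of this split is that the learned stage supplies speed and generality while the classical stage supplies a hard safety guarantee the neural network alone cannot.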


A way to let robots learn by listening will make them more useful

MIT Technology Review

Researchers at the Robotics and Embodied AI Lab at Stanford University set out to change that. They first built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone designed to filter out background noise. Human demonstrators used the gripper for a variety of household tasks, and the researchers then used this data to teach robotic arms how to execute the tasks on their own. The team's new training algorithms help robots gather clues from audio signals so they can perform tasks more effectively. "Thus far, robots have been training on videos that are muted," says Zeyi Liu, a PhD student at Stanford and lead author of the study.


A Robot's Nightmare Is a Burrito Full of Guac

The Atlantic - Technology

Welcome to the future: A robot can now prepare your favorite Chipotle order. Just as long as you don't want a burrito, taco, or quesadilla. The robot cannot handle those. Your order must be a burrito bowl or a salad, and it must be placed online. Then and only then--and once the robot makes it out of testing at the Chipotle Cultivate Center, in Irvine, California--your queso-covered barbacoa bowl might soon be assembled by the chain's new "automated digital makeline." Announced on Tuesday, the result of a collaboration between Chipotle and the automation company Hyphen looks like a standard stainless-steel Chipotle counter, burrito components arrayed on top.


'Fake' data helps robots learn the ropes faster: A way to expand training data sets for manipulation tasks improves the performance of robots by 40% or more

#artificialintelligence

Developed by robotics researchers at the University of Michigan, it could cut learning time for new materials and environments down to a few hours rather than a week or two. In simulations, the expanded training data set improved the success rate of a robot looping a rope around an engine block by more than 40% and nearly doubled the successes of a physical robot for a similar task. That task is among those a robot mechanic would need to be able to do with ease. But using today's methods, learning how to manipulate each unfamiliar hose or belt would require huge amounts of data, likely gathered for days or weeks, says Dmitry Berenson, U-M associate professor of robotics and senior author of a paper presented today at Robotics: Science and Systems in New York City. In that time, the robot would play around with the hose -- stretching it, bringing the ends together, looping it around obstacles and so on -- until it understood all the ways the hose could move.
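The core idea above, expanding a small set of real demonstrations with synthetic variants, can be sketched simply. This is a hedged illustration of data augmentation in general, not the U-M authors' actual method; the jitter scheme and all names are assumptions made for the example.

```python
# Illustrative data augmentation: take each real 2-D demonstration trajectory
# and add several jittered copies, multiplying the size of the training set.
import random

def augment(demonstrations, copies_per_demo=3, noise=0.01, seed=0):
    """Return the real demos plus `copies_per_demo` perturbed copies of each."""
    rng = random.Random(seed)
    augmented = list(demonstrations)
    for demo in demonstrations:
        for _ in range(copies_per_demo):
            augmented.append([(x + rng.uniform(-noise, noise),
                               y + rng.uniform(-noise, noise))
                              for x, y in demo])
    return augmented

real = [[(0.0, 0.0), (0.5, 0.2), (1.0, 0.0)]]
data = augment(real)  # 1 real trajectory + 3 synthetic variants
```

The payoff is that hours of real-robot interaction can stand in for days of it, since each real trajectory seeds many plausible training examples.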


Faster path planning for rubble-roving robots

#artificialintelligence

A new algorithm speeds up path planning for robots that use arm-like appendages to maintain balance on treacherous terrain such as disaster areas or construction sites, U-M researchers have shown. The improved path planning algorithm found successful paths three times as often as standard algorithms, while needing much less processing time. "In a collapsed building or on very rough terrain, a robot won't always be able to balance itself and move forward with just its feet," said Dmitry Berenson, associate professor of electrical and computer engineering and core faculty at the Robotics Institute. "You need new algorithms to figure out where to put both feet and hands. You need to coordinate all these limbs together to maintain stability, and what that boils down to is a very difficult problem."


A robot hand taught itself to solve a Rubik's Cube after creating its own training regime

#artificialintelligence

Over a year ago, OpenAI, the San Francisco–based for-profit AI research lab, announced that it had trained a robotic hand to manipulate a cube with remarkable dexterity. That might not sound earth-shattering. But in the AI world, it was impressive for two reasons. First, the hand had taught itself how to fidget with the cube using a reinforcement-learning algorithm, a technique modeled on the way animals learn. Second, all the training had been done in simulation, yet the learned skills transferred successfully to the real world.


Robot Art Critics Are Rolling into a Museum Near You

#artificialintelligence

With a black bowler hat and a chiffon white scarf, Berenson certainly looks the part of a stuffy art connoisseur -- so long as you ignore the neural network poking out from his suit. Meet the Art Critic 2.0, built from gleaming metal and sleek sensors, with equal parts smarts and snob. The sage art critic once commanded considerable power in creative spheres, making or breaking an artist's career with a simple smirk of disapproval or a punishing review in next day's paper. But today, as the number of full-time art critics dwindles in newsrooms, a growing force of high-tech art experts is starting to pick up the slack by methodically decoding art's finest details. In Canada, the Roomba-esque kulturBOT snaps photos at exhibitions and uses an algorithm-powered "stream of consciousness" to tweet out the images with often nonsensical captions like "panting with love of danger" or "streaked with the nocturnal vibration."


AI-driven robot hand spent a hundred years teaching itself to rotate a cube

#artificialintelligence

AI researchers have demonstrated a self-teaching algorithm that gives a robot hand remarkable new dexterity. Their creation taught itself to manipulate a cube with uncanny skill by practicing for the equivalent of a hundred years inside a computer simulation (though only a few days in real time). The robotic hand is still nowhere near as agile as a human one, and far too clumsy to be deployed in a factory or a warehouse. Even so, the research shows the potential for machine learning to unlock new robotic capabilities. It also suggests that someday robots might teach themselves new skills inside virtual worlds, which could greatly speed up the process of programming or training them.
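The hundred-years-in-a-few-days figure implies a rough degree of parallelism that is easy to work out. This back-of-the-envelope sketch assumes simulators running at roughly real-time speed and takes "a few days" to mean three; both are assumptions for illustration, not figures from the research.

```python
# Rough arithmetic: how many real-time-speed simulator instances would it take
# to compress ~100 simulated years of practice into ~3 real days?
SIM_YEARS = 100          # simulated experience reported
REAL_DAYS = 3            # "a few days" of wall-clock time (assumed)

sim_days = SIM_YEARS * 365
instances = sim_days / REAL_DAYS  # parallel environments needed
# roughly twelve thousand parallel environments
```

In practice simulators often run faster than real time, which would lower this count, but the calculation shows why massive parallel simulation is central to this style of training.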