6 innovative robotic grippers lend a helping hand

#artificialintelligence

OnRobot introduced new robotic grippers at Automatica 2018, including its Tactile gripper. With the collaborative robot market exploding, grippers are set to be an area of growth and increasing competition; that was made abundantly clear at Automatica 2018, where new grippers made quite a splash. While market growth drives much of the innovation taking place, Lasse Kieffer, CEO and co-founder of Purple Robotics, said a shift in mindset is also leading to new gripper designs: "End users want a collaborative robot application."


A lizard-inspired robot gripper may solve our space-junk problems

Engadget

Space junk is a huge problem in orbit. More than 500,000 pieces of debris are currently orbiting the Earth at up to 17,500 miles per hour, and we haven't yet figured out how to clean them up. But engineers at Stanford may have made a breakthrough: They've designed a robotic gripper, based on geckos' feet, that works in zero-g. The end goal is to use it to clean up space junk. The problem with existing technology is that everything is designed to work at Earth's gravity and within Earth's temperature range.


Robot Arm Uses AI to Get a Better Grip

#artificialintelligence

Imagine a robotic hand that can identify, examine and handle objects autonomously, without needing a human operator to guide it. That's what SCHUNK aims to create with its line of intelligent grippers. The company has already brought to market its Co-act JL1 Gripper, which SCHUNK claims is the world's first intelligent gripping module for human-robot collaboration. These grippers use AI to learn how to identify and manipulate objects, making them less reliant on a human controller to tell them what to do. SCHUNK's intelligent grippers adjust their behavior in real time depending on what they're gripping.


Video Friday: MIT's Mini Cheetah Robot, and More

IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!). Let us know if you have suggestions for next week, and enjoy today's videos. Impressive new video of MIT's Mini Cheetah doing backflips, and failing to do backflips, which is even cuter. MIT's new Mini Cheetah robot is the first four-legged robot to do a backflip.


Zero-Shot Skill Composition and Simulation-to-Real Transfer by Learning Task Representations

arXiv.org Artificial Intelligence

Simulation-to-real transfer is an important strategy for making reinforcement learning practical with real robots. Successful sim-to-real transfer systems have difficulty producing policies which generalize across tasks, despite training for the equivalent of thousands of hours of real robot time. To address this shortcoming, we present a novel approach to efficiently learning new robotic skills directly on a real robot, based on model-predictive control (MPC) and an algorithm for learning task representations. In short, we show how to reuse the simulation from the pre-training step of sim-to-real methods as a tool for foresight, allowing the sim-to-real policy to adapt to unseen tasks. Rather than learning end-to-end policies for single tasks and attempting to transfer them, we first use simulation to simultaneously learn (1) a continuous parameterization (i.e., a skill embedding, or latent) of task-appropriate primitive skills, and (2) a single policy for these skills which is conditioned on this representation. We then transfer our multi-skill policy directly to a real robot, and actuate the robot by choosing sequences of skill latents which drive the policy, with each latent corresponding to a pre-learned primitive skill controller. We complete unseen tasks by choosing new sequences of skill latents to control the robot using MPC, where our MPC model is the pre-trained skill policy executed in the simulation environment, run in parallel with the real robot. We discuss the background and principles of our method, detail its practical implementation, and evaluate its performance by using it to train a real Sawyer robot to achieve motion tasks such as drawing and block pushing.
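To make the MPC-over-latents step concrete, here is a minimal random-shooting sketch of the idea as the abstract describes it, not the paper's actual implementation. The interfaces (sim_env, skill_policy, task_cost) and all parameter values are hypothetical stand-ins: candidate skill latents are rolled out in the simulator with the pre-trained skill-conditioned policy, and the latent whose simulated rollout incurs the lowest task cost is the one executed on the real robot.

```python
import numpy as np

# Hypothetical interfaces, for illustration only:
#   sim_env.set_state(s)   -- reset the simulator to the robot's estimated state
#   sim_env.step(a)        -- returns (next_state, reward, done, info)
#   skill_policy(s, z)     -- pre-trained policy conditioned on skill latent z
#   task_cost(s, goal)     -- cost of a state with respect to the unseen task

def mpc_over_latents(robot_state, goal, sim_env, skill_policy, task_cost,
                     latent_dim=4, n_candidates=64, horizon=20, rng=None):
    """Random-shooting MPC in skill-latent space: sample candidate skill
    latents, roll each one out in simulation with the pre-trained policy,
    and return the latent whose rollout best achieves the task."""
    rng = rng or np.random.default_rng()
    candidates = rng.normal(size=(n_candidates, latent_dim))  # sample latents
    best_z, best_cost = None, np.inf
    for z in candidates:
        sim_env.set_state(robot_state)      # start rollout from the real state
        state, cost = robot_state, 0.0
        for _ in range(horizon):            # simulate this skill forward
            action = skill_policy(state, z)
            state, _, done, _ = sim_env.step(action)
            cost += task_cost(state, goal)
            if done:
                break
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z                           # execute on the real robot next
```

Repeating this loop, executing each chosen latent on the real robot for a short window before replanning, composes pre-learned primitive skills into novel behaviors without any additional training, which is the zero-shot skill composition the title refers to.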