Gripper finger


Sim2Real Transfer for Vision-Based Grasp Verification

Amargant, Pau, Hönig, Peter, Vincze, Markus

arXiv.org Artificial Intelligence

The verification of successful grasps is a crucial aspect of robot manipulation, particularly when handling deformable objects. In this work, we present a vision-based approach to grasp verification that determines whether the robotic gripper has successfully grasped an object. Our method employs a two-stage architecture: first, a YOLO-based object detection model detects and locates the robot's gripper, and then a ResNet-based classifier determines the presence of an object. To address the limitations of real-world data capture, we introduce HSR-GraspSynth, a synthetic dataset designed to simulate diverse grasping scenarios. Furthermore, we explore the use of Visual Question Answering capabilities as a zero-shot baseline against which we compare our model. Experimental results demonstrate that our approach achieves high accuracy in real-world environments, with potential for integration into grasping pipelines.

Index Terms -- Grasp verification, Robot manipulation, Deformable objects, Vision-based grasping, YOLO object detection, ResNet classification, Synthetic dataset, Visual Question Answering.

I. INTRODUCTION
Deformable object manipulation is a growing field of research in robotics due to its relevance in a wide range of tasks [26].
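The two-stage idea above (detect the gripper, then classify whether an object sits between the fingers) can be sketched with plain-Python stand-ins. This is a hypothetical illustration of the pipeline structure only; the paper's actual stages are a trained YOLO detector and a ResNet classifier, and both stub functions below are assumptions.

```python
# Hypothetical sketch of a two-stage grasp-verification pipeline.
# detect_gripper() stands in for the YOLO stage; classify_crop()
# stands in for the ResNet stage. Images are 2-D lists of pixels.

def detect_gripper(image):
    # Stand-in detector: return a bounding box (x, y, w, h) around the
    # gripper, or None if no gripper is found. Here: the central region.
    h, w = len(image), len(image[0])
    return (w // 4, h // 4, w // 2, h // 2)

def classify_crop(crop):
    # Stand-in classifier: report whether an object is present between
    # the fingers. Here: any nonzero pixel inside the crop.
    return any(any(px for px in row) for row in crop)

def verify_grasp(image):
    box = detect_gripper(image)
    if box is None:
        return False  # no gripper detected -> cannot verify the grasp
    x, y, w, h = box
    crop = [row[x:x + w] for row in image[y:y + h]]
    return classify_crop(crop)

# Toy 8x8 "image" with one object pixel inside the gripper region.
img = [[0] * 8 for _ in range(8)]
img[4][4] = 1
print(verify_grasp(img))  # True: the second stage sees an object
```

The key design point the sketch preserves is that the classifier only ever sees the detector's crop, which keeps the second stage focused on the region that matters for verification.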


Hybrid Gripper with Passive Pneumatic Soft Joints for Grasping Deformable Thin Objects

Tran, Ngoc-Duy, Ly, Hoang-Hiep, Nguyen, Xuan-Thuan, Mac, Thi-Thoa, Nguyen, Anh, Ta, Tung D.

arXiv.org Artificial Intelligence

Grasping a variety of objects remains a key challenge in the development of versatile robotic systems. The human hand is remarkably dexterous, capable of grasping and manipulating objects with diverse shapes, mechanical properties, and textures. Inspired by how humans use two fingers to pick up thin and large objects such as fabric or sheets of paper, we aim to develop a gripper optimized for grasping such deformable objects. Observing how the soft and flexible fingertip joints of the hand approach and grasp thin materials, we propose a hybrid gripper design that incorporates both soft and rigid components. The gripper utilizes a soft pneumatic ring wrapped around a rigid revolute joint to create a flexible two-fingered gripper. Experiments were conducted to characterize and evaluate the gripper's performance in handling sheets of paper and other objects. Compared to rigid grippers, the proposed design improves grasping efficiency and reduces the gripping distance by up to a factor of eight.


An in-Contact Robotic System for the Process of Desoldering PCB Components

Santos, Silvia, Marques, Lino, Neto, Pedro

arXiv.org Artificial Intelligence

The disposal and recycling of electronic waste (e-waste) is a global challenge. The disassembly of components is a crucial step towards an efficient recycling process, avoiding destructive methods. Although most disassembly work is still done manually due to the diversity and complexity of components, there is growing interest in developing automated methods to improve efficiency and reduce labor costs. This study aims to robotize the desoldering process and the extraction of components from printed circuit boards (PCBs), with the goal of automating the process as much as possible. The proposed strategy consists of several phases, including the controlled contact of the robotic tool with the PCB components. A specific tool was developed to apply a controlled force against the PCB component, removing it from the board. The results demonstrate that it is feasible to remove the PCB components with a high success rate (approximately 100% for the larger PCB components).
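The controlled-contact phase described above can be sketched as a simple closed-loop force ramp: the tool increases its applied force toward a target until the component detaches. This is a minimal illustration of the control pattern, not the study's actual controller; the gain, target, and detachment threshold are all made-up numbers.

```python
# Hypothetical sketch of a controlled-contact removal step: ramp the
# applied force toward a target with a proportional law, and stop when
# the (simulated) detachment threshold is exceeded. Illustrative only.

def remove_component(target_force, detach_force, gain=0.5, max_steps=100):
    applied = 0.0
    for step in range(max_steps):
        if applied >= detach_force:
            return True, step          # component came off the board
        error = target_force - applied
        applied += gain * error        # proportional ramp toward target
    return False, max_steps            # target never reached threshold

ok, steps = remove_component(target_force=10.0, detach_force=6.0)
print(ok)  # True: the target force exceeds the detachment threshold
```

Note the failure mode the sketch exposes: if the commanded target force is below the component's detachment threshold, the loop converges without ever removing the part, which is why a removal check belongs inside the control loop rather than after it.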


Fit2Form: 3D Generative Model for Robot Gripper Form Design

Ha, Huy, Agrawal, Shubham, Song, Shuran

arXiv.org Artificial Intelligence

The 3D shape of a robot's end-effector plays a critical role in determining its functionality and overall performance. Many industrial applications rely on task-specific gripper designs to ensure the system's robustness and accuracy. However, the process of manual hardware design is both costly and time-consuming, and the quality of the resulting design depends on the engineer's experience and domain expertise, which can easily be outdated or inaccurate. The goal of this work is to use machine learning algorithms to automate the design of task-specific gripper fingers. We propose Fit2Form, a 3D generative design framework that generates pairs of finger shapes to maximize design objectives (i.e., grasp success, stability, and robustness) for target grasp objects. We model the design objectives by training a Fitness network to predict their values for pairs of gripper fingers and their corresponding grasp objects. This Fitness network then provides supervision to a 3D Generative network that produces a pair of 3D finger geometries for the target grasp object. Our experiments demonstrate that the proposed 3D generative design framework generates parallel jaw gripper finger shapes that achieve more stable and robust grasps compared to other general-purpose and task-specific gripper design algorithms. Video can be found at https://youtu.be/utKHP3qb1bg.
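The core pattern above, a learned fitness model steering a design generator, can be illustrated with a toy random-search stand-in. Fit2Form itself trains both components as neural networks end-to-end; everything below (the scalar finger parameters, the complementarity objective, the candidate count) is an assumption chosen only to make the fitness-guided loop concrete.

```python
import random

# Illustrative sketch of fitness-guided design generation. A random
# search stands in for the Generative network, and a hand-written
# scoring function stands in for the learned Fitness network.

def fitness(finger_pair):
    # Stand-in Fitness model: score a (left, right) finger parameter
    # pair. Toy objective: the two fingers should be complementary
    # (parameters summing to ~1.0) for a stable parallel-jaw grasp.
    left, right = finger_pair
    return -abs((left + right) - 1.0)

def generate_design(rng, n_candidates=200):
    # Stand-in Generative step: propose candidate pairs and keep the
    # one the fitness model scores highest.
    return max(
        ((rng.random(), rng.random()) for _ in range(n_candidates)),
        key=fitness,
    )

rng = random.Random(0)
left, right = generate_design(rng)
print(round(left + right, 2))  # close to 1.0 under this toy objective
```

The design choice the sketch mirrors is the separation of concerns: the fitness model encodes what a good design is, while the generator only has to propose candidates that score well against it.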


Lio -- A Personal Robot Assistant for Human-Robot Interaction and Care Applications

Miseikis, Justinas, Caroni, Pietro, Duchamp, Patricia, Gasser, Alina, Marko, Rastislav, Miseikiene, Nelija, Zwilling, Frederik, de Castelbajac, Charles, Eicher, Lucas, Fruh, Michael, Fruh, Hansruedi

arXiv.org Artificial Intelligence

Lio is a mobile robot platform with a multi-functional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis. Lio is intrinsically safe by having full coverage in soft artificial-leather material as well as having collision detection, limited speed and forces. Furthermore, the robot has a compliant motion controller. A combination of visual, audio, laser, ultrasound and mechanical sensors are used for safe navigation and environment understanding. The ROS-enabled setup allows researchers to access raw sensor data as well as have direct control of the robot. The friendly appearance of Lio has resulted in the robot being well accepted by health care staff and patients. Fully autonomous operation is made possible by a flexible decision engine, autonomous navigation and automatic recharging. Combined with time-scheduled task triggers, this allows Lio to operate throughout the day, with a battery life of up to 8 hours and recharging during idle times. A combination of powerful on-board computing units provides enough processing power to deploy artificial intelligence and deep learning-based solutions on-board the robot without the need to send any sensitive data to cloud services, guaranteeing compliance with privacy requirements. During the COVID-19 pandemic, Lio was rapidly adjusted to perform additional functionality like disinfection and remote elevated body temperature detection. It complies with ISO13482 - Safety requirements for personal care robots, meaning it can be directly tested and deployed in care facilities.


BionicSoftHand Festo Corporate

#artificialintelligence

Whether grasping, holding or turning, touching, typing or pressing – in everyday life, we use our hands as a matter of course for the most diverse tasks. The human hand is a true miracle tool of nature. What could be more logical than equipping robots in collaborative working spaces with a gripper that is modelled on this natural model and can learn through artificial intelligence to solve a wide variety of gripping and turning tasks? BionicSoftHand uses the method of reinforcement learning – learning by strengthening. This means that instead of having to imitate a concrete action, the hand is merely given a goal.


Video Friday: MIT's Mini Cheetah Robot, and More

IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. Impressive new video of MIT's Mini Cheetah doing backflips, and failing to do backflips, which is even cuter. MIT's new mini cheetah robot is the first four-legged robot to do a backflip.


Cocktail Bot 4.0

Robohub

The Cocktail Bot 4.0 consists of five robots with one high-level goal: mix one of more than 20 possible drink combinations for you! But it isn't as easy as it sounds. After the customer composes a drink by combining liquor, soft drink, and ice in a web interface, the robots start to mix the drink on their own. Five robot stations prepare the order and deliver it to the guests.