Vine-inspired robotic gripper gently lifts heavy and fragile objects

Robohub

In the horticultural world, some vines are especially grabby. As they grow, the woody tendrils can wrap around obstacles with enough force to pull down entire fences and trees. Inspired by vines' twisty tenacity, engineers at MIT and Stanford University have developed a robotic gripper that can snake around and lift a variety of objects, including a glass vase and a watermelon, offering a gentler approach compared to conventional gripper designs. A larger version of the robo-tendrils can also safely lift a human out of bed. The new bot consists of a pressurized box, positioned near the target object, from which long, vine-like tubes inflate and grow, like socks being turned inside out.


Artificial tendons give muscle-powered robots a boost

Robohub

Our muscles are nature's actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate "biohybrid robots" made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers. But for the most part, these designs are limited in the amount of motion and power they can produce.


Development of a Compliant Gripper for Safe Robot-Assisted Trouser Dressing-Undressing

Unde, Jayant, Inden, Takumi, Wakayama, Yuki, Colan, Jacinto, Zhu, Yaonan, Aoyama, Tadayoshi, Hasegawa, Yasuhisa

arXiv.org Artificial Intelligence

In recent years, many countries, including Japan, have experienced rapid population aging, making the preservation of seniors' quality of life a significant concern. For elderly people with impaired physical abilities, support for toileting is one of the most important issues. This paper details the design, development, experimental assessment, and potential applications of a compliant gripper system, with a focus on the unique requirements and obstacles involved in aiding elderly or hemiplegic individuals in dressing and undressing trousers. The proposed gripper seeks to strike the right balance between compliance and grasping force, ensuring precise manipulation while maintaining a safe, compliant interaction with the user. The gripper's integration into a custom-built robotic manipulator system provides a comprehensive solution for assisting hemiplegic individuals in their dressing and undressing tasks. Experimental evaluations and comparisons with existing studies demonstrate the gripper's ability to successfully assist in both dressing and undressing of trousers in confined spaces with a high success rate. This research contributes to the advancement of assistive robotics, empowering elderly and physically impaired individuals to maintain their independence and improve their quality of life.
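The compliance/grasp-force balance described above can be illustrated with a minimal admittance-style control law: close the jaw proportionally toward a target grasp force, and back off when contact force exceeds a safety limit. This is a hypothetical sketch, not the authors' controller; the function name, gains, and thresholds are all illustrative.

```python
# Hypothetical sketch of a compliance/grasp-force trade-off: a proportional
# admittance law closes toward a target force but softens the grip whenever
# measured force exceeds a safety threshold. Gains and limits are made up.

def admittance_grip_step(position, measured_force, target_force,
                         stiffness_gain=0.002, max_safe_force=15.0):
    """Return the next gripper closing position (metres).

    position        -- current closing position of the gripper jaw (m)
    measured_force  -- contact force from a force sensor (N)
    target_force    -- desired grasp force for a secure hold (N)
    """
    if measured_force > max_safe_force:
        # Comply: back off proportionally to the force overshoot.
        return position - stiffness_gain * (measured_force - max_safe_force)
    # Otherwise close proportionally toward the target grasp force.
    return position + stiffness_gain * (target_force - measured_force)

# Closing on fabric: measured force is below target, so the jaw advances.
p = admittance_grip_step(position=0.010, measured_force=4.0, target_force=8.0)
```

A real assistive gripper would add velocity limits and sensor filtering, but the same one-line rule captures why a single mechanism can both hold trousers firmly and yield safely on unexpected contact.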


TacFinRay: Soft Tactile Fin-Ray Finger with Indirect Tactile Sensing for Robust Grasping

Nam, Saekwang, Deng, Bowen, Lee, Loong Yi, Rossiter, Jonathan M., Lepora, Nathan F.

arXiv.org Artificial Intelligence

Abstract: We present a tactile-sensorized Fin-Ray finger that enables simultaneous detection of contact location and indentation depth through an indirect sensing approach. A hinge mechanism is integrated between the soft Fin-Ray structure and a rigid sensing module, allowing deformation and translation information to be transferred to a bottom crossbeam carrying an array of marker-tipped pins based on the biomimetic structure of the TacTip vision-based tactile sensor. Deformation patterns captured by an internal camera are processed using a convolutional neural network to infer contact conditions without directly sensing the finger surface. The finger design was optimized by varying pin configurations and hinge orientations, achieving 0.1 mm depth and 2 mm location-sensing accuracy. The perception demonstrated robust generalization to various indenter shapes and sizes, and was applied to a pick-and-place task under uncertain picking positions, where the tactile feedback significantly improved placement accuracy. Overall, this work provides a lightweight, flexible, and scalable tactile sensing solution suitable for soft robotic structures where sensing must be situated away from the contact interface.

I. INTRODUCTION

Tactile sensing is essential for achieving dexterous manipulation in robotic hands [1], [2]. For example, to perform delicate tasks like gently grasping and placing eggs or glass plates, humanoid robots such as Figure's F.02 and Tesla's Optimus will need fingertip-mounted tactile sensors to become truly capable [3]. To enhance robotic dexterity, researchers have developed vision-based tactile sensors (VBTSs) that take advantage of recent advancements in computer vision [4]-[7].
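The indirect-sensing idea above can be sketched without the paper's learned model: tracked marker-pin displacements on the crossbeam already encode both how deep and where the contact is. The toy function below stands in for the CNN inference stage with simple geometric estimates; it is an illustration of the principle, not the authors' method.

```python
# Toy stand-in for the paper's CNN inference stage: given per-pin marker
# displacements (as tracked by the internal camera), estimate indentation
# depth and contact location. The calibration here is hypothetical; the
# actual system trains a convolutional network on raw marker images.

def infer_contact(pin_displacements, pin_positions_mm):
    """Estimate (depth_mm, location_mm) from marker-pin displacements.

    pin_displacements -- displacement of each marker pin (mm)
    pin_positions_mm  -- position of each pin along the crossbeam (mm)
    """
    total = sum(pin_displacements)
    if total == 0:
        return 0.0, None  # no contact detected
    # Depth proxy: the largest pin deflection tracks indentation depth.
    depth = max(pin_displacements)
    # Location: displacement-weighted centroid along the crossbeam.
    location = sum(d * x for d, x in
                   zip(pin_displacements, pin_positions_mm)) / total
    return depth, location
```

For pins at 0, 10, 20, 30 mm with displacements [0.0, 0.2, 0.8, 0.2] mm, the centroid lands near 20 mm with a depth estimate of 0.8 mm, showing how one displacement field yields both quantities the paper reports accuracies for.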


Vision-Language-Action Models for Selective Robotic Disassembly: A Case Study on Critical Component Extraction from Desktops

Liu, Chang, Tian, Sibo, Behdad, Sara, Liang, Xiao, Zheng, Minghui

arXiv.org Artificial Intelligence

Automating the disassembly of critical components from end-of-life (EoL) desktops, such as high-value RAM modules and CPUs as well as sensitive parts like hard disk drives, remains challenging due to the inherent variability and uncertainty of these products. Moreover, their disassembly requires sequential, precise, and dexterous operations, further increasing the complexity of automation. Current robotic disassembly processes are typically divided into several stages: perception, sequence planning, task planning, motion planning, and manipulation. Each stage requires explicit modeling, which limits generalization to unfamiliar scenarios. Recent developments in vision-language-action (VLA) models offer an end-to-end approach to general robotic manipulation tasks. Although VLAs have demonstrated promising performance on simple tasks, the feasibility of applying such models to complex disassembly remains largely unexplored. In this paper, we collected a customized dataset for robotic RAM and CPU disassembly and used it to fine-tune two well-established VLA approaches, OpenVLA and OpenVLA-OFT, as a case study. We divided the whole disassembly task into several small steps, and our preliminary experimental results indicate that the fine-tuned VLA models can faithfully complete multiple early steps but struggle with certain critical subtasks, leading to task failure. However, we observed that a simple hybrid strategy combining VLA with a rule-based controller can successfully perform the entire disassembly operation. These findings highlight the current limitations of VLA models in handling the dexterity and precision required for robotic EoL product disassembly. By offering a detailed analysis of the observed results, this study provides insights that may inform future research to address current challenges and advance end-to-end robotic automated disassembly.
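The hybrid strategy the abstract describes amounts to a dispatch rule: run the fine-tuned VLA policy on the coarse early steps and hand over to a scripted routine for the precision-critical subtasks. Below is a hedged sketch of that dispatch loop; the step names and the two policy callables are hypothetical placeholders, not the paper's actual interfaces.

```python
# Sketch of a hybrid VLA + rule-based dispatcher: learned policy for coarse
# steps, scripted controller for precision-critical ones. All step names
# and controller callables here are hypothetical.

RULE_BASED_STEPS = {"release_cpu_lever", "extract_cpu"}  # precision-critical

def run_disassembly(steps, vla_policy, rule_based_controller):
    """Dispatch each subtask to the appropriate controller; return a log
    of (step, controller, success) tuples, stopping at the first failure."""
    log = []
    for step in steps:
        use_rule = step in RULE_BASED_STEPS
        ok = (rule_based_controller if use_rule else vla_policy)(step)
        log.append((step, "rule" if use_rule else "vla", ok))
        if not ok:
            break  # abort the remaining sequence on failure
    return log
```

The interesting design point is that the routing table, not the policies, encodes the paper's empirical finding: which subtasks exceed the VLA model's current precision.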


Hoi! -- A Multimodal Dataset for Force-Grounded, Cross-View Articulated Manipulation

Engelbracht, Tim, Zurbrügg, René, Wohlrapp, Matteo, Büchner, Martin, Valada, Abhinav, Pollefeys, Marc, Blum, Hermann, Bauer, Zuria

arXiv.org Artificial Intelligence

We present a dataset for force-grounded, cross-view articulated manipulation that couples what is seen with what is done and what is felt during real human interaction. The dataset contains 3048 sequences across 381 articulated objects in 38 environments. Each object is operated under four embodiments: (i) human hand, (ii) human hand with a wrist-mounted camera, (iii) handheld UMI gripper, and (iv) a custom Hoi! gripper, where the tool embodiments provide synchronized end-effector forces and tactile sensing. Our dataset offers a holistic view of interaction understanding from video, enabling researchers not only to evaluate how well methods transfer between human and robotic viewpoints, but also to investigate underexplored modalities such as force sensing and prediction.
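A natural way to model the structure described, four embodiments per object, with force streams only on the instrumented tools, is a per-sequence record type. The field names and validation below are an illustrative guess at a usable schema, not the dataset's actual layout.

```python
# Hypothetical record layout for one sequence of a multi-embodiment
# manipulation dataset: force sensing exists only for tool embodiments.
# Field names and the validation rule are illustrative, not official.

from dataclasses import dataclass, field
from typing import List, Optional

EMBODIMENTS = ("human_hand", "human_hand_wrist_cam",
               "umi_gripper", "hoi_gripper")

@dataclass
class Sequence:
    object_id: str                    # which articulated object
    environment: str                  # which of the capture environments
    embodiment: str                   # one of EMBODIMENTS
    video_frames: List[str] = field(default_factory=list)  # frame paths
    forces_n: Optional[List[float]] = None  # end-effector forces (tools only)

    def __post_init__(self):
        assert self.embodiment in EMBODIMENTS
        # Only the tool embodiments carry synchronized force sensing.
        if self.embodiment.startswith("human"):
            assert self.forces_n is None, "human sequences have no force data"
```

Encoding the "tools only" constraint in the record itself keeps cross-embodiment evaluation code from silently expecting forces where none were captured.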


A Novel Approach to Tomato Harvesting Using a Hybrid Gripper with Semantic Segmentation and Keypoint Detection

Ansari, Shahid, Gohil, Mahendra Kumar, Maeda, Yusuke, Bhattacharya, Bishakh

arXiv.org Artificial Intelligence

Precision agriculture and smart farming are increasingly adopted to improve productivity, reduce input waste, and maintain high product quality under growing demand. These approaches integrate sensing, automation, and data-driven decision-making to improve crop yield and post-harvest quality (Gupta, Abdelsalam, Khorsandroo, and Mittal (2020)). In this context, autonomous robotic harvesting is a key enabling technology for horticulture, where labor shortages and high labor costs directly affect production and consistency. Despite progress in mechanization, many conventional harvesting methods (e.g., combine harvesters, reapers, and trunk shakers) are unsuitable for soft and delicate crops such as tomatoes and strawberries because large contact forces and impacts can bruise or damage the fruit (Cho, Iida, Suguri, Masuda, and Kurita (2014); Shojaei (2021)). Selective harvesting, where fruits are picked individually at the appropriate ripeness stage, is therefore preferred for high-value crops. However, selective harvesting remains challenging because a robot must (i) detect the target fruit under occlusion, (ii) estimate its pose and identify the pedicel cutting location, and (iii) execute grasping and detachment without damaging the fruit or plant. In real cultivation environments, tomatoes are often densely packed and partially occluded by leaves and branches, making perception and reliable manipulation difficult (Chen et al. (2015)). Consequently, integrated harvesting systems that combine compliant end-effectors, robust perception, and closed-loop control remain an active research topic (Comba, Gay, Piccarolo, and Ricauda Aimonino (2010); Ling, Zhao, Gong, Liu, and Wang (2019)). A wide range of end-effectors has been explored for harvesting and handling soft produce.
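The three requirements listed above, detection under occlusion, pedicel localization, and damage-free detachment, can be sketched as a single selection pass over perception outputs. The detection fields and thresholds below are hypothetical, standing in for the outputs of the semantic segmentation and keypoint-detection stages the paper's title refers to.

```python
# Illustrative selective-harvesting decision chain: (i) keep only fruits
# visible enough to grasp, (ii) check ripeness, (iii) report the pedicel
# keypoint as the cutting target. Detection fields and the two thresholds
# are hypothetical placeholders for real perception outputs.

def select_harvestable(detections, min_visibility=0.6, min_ripeness=0.8):
    """Return (fruit_id, pedicel_xy) pairs for fruits safe to harvest.

    Each detection is a dict with 'id', 'visibility' (1 - occlusion ratio),
    'ripeness' (0..1), and 'pedicel_xy' (cut-point pixel coordinates).
    """
    targets = []
    for det in detections:
        if det["visibility"] < min_visibility:
            continue  # too occluded by leaves/branches to grasp reliably
        if det["ripeness"] < min_ripeness:
            continue  # leave unripe fruit on the vine
        targets.append((det["id"], det["pedicel_xy"]))
    return targets
```

In a full system this list would feed the motion planner and the hybrid gripper; the sketch only shows why perception must report occlusion and ripeness jointly before any grasp is attempted.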