robot hand
Appendix A: Implementation Details
We are also committed to releasing the code. Our implementation of Stage 2 strictly follows the previous work. In this section, we briefly introduce our tasks: one requires the robot hand to open the door on the table; one requires it to orient the pen to a target orientation; and one requires it to place an object from the table into the mug. We present the success rates of our six task categories in Table 1.
This robot hand can detach from its arm and crawl around
Engineers in Switzerland recently created a detachable, spider-like robot hand capable of grabbing multiple objects and using its fingers to crawl. The unsettling device, reminiscent of a threatening video game creature, can separate itself from a mounted robot arm, tip-toe its way toward small objects, pick them up, and carry them on its back. The symmetrical design and flexible fingers mean that the robot can transport objects on either side of its body. For humans, that would look like holding a ball in your palm while simultaneously grasping a piece of fruit on the back of your hand.
- North America > United States > New York (0.05)
- Europe > Switzerland > Vaud > Lausanne (0.05)
OSMO: Open-Source Tactile Glove for Human-to-Robot Skill Transfer
Yin, Jessica, Qi, Haozhi, Wi, Youngsun, Kundu, Sayantan, Lambeta, Mike, Yang, William, Wang, Changhao, Wu, Tingfan, Malik, Jitendra, Hellebrekers, Tess
Abstract-- Human video demonstrations provide abundant training data for learning robot policies, but video alone cannot capture the rich contact signals critical for mastering manipulation. We introduce OSMO, an open-source wearable tactile glove designed for human-to-robot skill transfer. The glove features 12 three-axis tactile sensors across the fingertips and palm and is designed to be compatible with state-of-the-art hand-tracking methods for in-the-wild data collection. We demonstrate that a robot policy trained exclusively on human demonstrations collected with OSMO, without any real robot data, is capable of executing a challenging contact-rich manipulation task. On a real-world wiping task requiring sustained contact pressure, our tactile-aware policy achieves a 72% success rate, outperforming vision-only baselines by eliminating contact-related failure modes. We release complete hardware designs, firmware, and assembly instructions to support community adoption.

Tactile sensing enables humans to excel at manipulation by providing real-time feedback about contact forces that vision alone cannot capture. Consider trying to dice a carrot from video alone; one cannot observe the nuanced force control that makes the task successful. Many different applied forces can result in nearly identical visual appearances, leaving critical information about force control invisible to vision.
- North America > United States > Pennsylvania (0.04)
- North America > United States > Michigan (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
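The sensor layout OSMO describes (12 three-axis taxels across the fingertips and palm) maps naturally to a small array-processing step. The sketch below shows one way a consumer of the glove's stream might reshape a frame and threshold it into a contact mask; the frame format, function names, and threshold are assumptions for illustration, not the released firmware's API.

```python
import numpy as np

# Hypothetical consumer of an OSMO-style stream: 12 three-axis taxels,
# so one frame is 36 scalar force readings.
NUM_TAXELS = 12

def parse_frame(raw):
    """Reshape a flat frame of 36 values into (taxel, axis) form."""
    return np.asarray(raw, dtype=float).reshape(NUM_TAXELS, 3)

def contact_summary(forces, threshold=0.2):
    """Per-taxel force magnitude plus a boolean contact mask."""
    magnitudes = np.linalg.norm(forces, axis=1)
    return magnitudes, magnitudes > threshold

# Simulated frame: only the last taxel (say, the palm) is pressed.
frame = [0.0] * 33 + [0.1, 0.0, 0.5]
mags, in_contact = contact_summary(parse_frame(frame))
print(int(in_contact.sum()))  # 1 taxel in contact
```

A tactile-aware policy would consume `mags` (or the raw three-axis forces) alongside hand pose, which is exactly the signal the wiping task's vision-only baselines lack.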
Teen designs and builds a robotic hand with only LEGOs
At only 16, Jared Lepora has also co-authored a paper. In October, the student presented a robotic hand made entirely from LEGOs at the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems in Hangzhou, China. The 16-year-old co-authored the research, recently published on arXiv, along with colleagues including his father Nathan Lepora, a professor of robotics and artificial intelligence at the University of Bristol. Jared used LEGO MINDSTORMS, a LEGO robotics kit, to build a LEGO version of SoftHand-A, a 3D-printed anthropomorphic robot hand introduced in an earlier study.
- Asia > China > Zhejiang Province > Hangzhou (0.25)
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.05)
CoRL2025 – RobustDexGrasp: dexterous robot hand grasping of nearly any object
As you read this, a human hand is holding your phone or clicking your mouse with seemingly effortless grace. With over 20 degrees of freedom, human hands possess extraordinary dexterity: they can grip a heavy hammer, rotate a screwdriver, or instantly adjust when something slips. Dexterous robot hands promise to execute complex tasks like key rotation, scissor use, and surgical procedures that are impossible with simple grippers, and their similarity to human hands makes them ideal for learning from vast human demonstration data. Despite this potential, most current robots still rely on simple grippers due to the difficulties of dexterous manipulation.
- Europe > Switzerland > Zürich > Zürich (0.05)
- North America > United States > Michigan (0.05)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.05)
Adversarial Game-Theoretic Algorithm for Dexterous Grasp Synthesis
Chen, Yu, He, Botao, Mao, Yuemin, Jakobsson, Arthur, Ke, Jeffrey, Aloimonos, Yiannis, Shi, Guanya, Choset, Howie, Mao, Jiayuan, Ichnowski, Jeffrey
For many complex tasks, multi-finger robot hands are poised to revolutionize how we interact with the world, but reliably grasping objects remains a significant challenge. We focus on the problem of grasp synthesis for multi-finger robot hands: given a target object's geometry and pose, compute a hand configuration that securely holds it. Existing approaches often struggle to produce reliable grasps that sufficiently constrain object motion, leading to instability under disturbances and failed grasps. A key reason is that during grasp generation, they typically focus on resisting a single wrench while ignoring the object's potential for adversarial movements, such as escaping. We propose a new grasp-synthesis approach that explicitly captures and leverages adversarial object motion in grasp generation by formulating the problem as a two-player game. One player controls the robot to generate feasible grasp configurations, while the other adversarially controls the object to seek motions that attempt to escape from the grasp. Simulation experiments on various robot platforms and target objects show that our approach achieves a success rate of 75.78%, up to 19.61% higher than the state-of-the-art baseline. The two-player game mechanism improves the grasping success rate by 27.40% over the method without the game formulation. Our approach requires only 0.28-1.04 seconds on average to generate a grasp configuration, depending on the robot platform, making it suitable for real-world deployment. In real-world experiments, our approach achieves an average success rate of 85.0% on ShadowHand and 87.5% on LeapHand, which confirms its feasibility and effectiveness in real robot setups.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Maryland (0.04)
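The two-player structure can be pictured with a toy: a robot player places contacts on the rim of a 2D disk, and an object player repeatedly answers with the escape direction those contacts resist worst. The score, dynamics, and names below are illustrative assumptions, not the paper's algorithm; they only show the alternating best-response loop.

```python
import numpy as np

def inward_normals(angles):
    """Contacts on the rim of a unit disk push toward the center."""
    return np.stack([-np.cos(angles), -np.sin(angles)], axis=1)

def worst_escape(angles):
    """Object player's best response: the unit direction the contacts
    resist worst. The score is <= 0; a balanced grasp scores 0."""
    total = inward_normals(angles).sum(axis=0)
    n = np.linalg.norm(total)
    if n < 1e-12:  # normals cancel: no single worst escape direction
        return np.array([1.0, 0.0]), 0.0
    d = -total / n
    return d, float(inward_normals(angles).dot(d).sum())

def improve_grasp(angles, steps=200, lr=0.05):
    """Robot player: gradient ascent on resistance to the current
    worst escape. Two contacts drift toward an antipodal grasp."""
    a = np.array(angles, dtype=float)
    for _ in range(steps):
        d, _ = worst_escape(a)
        a += lr * (np.sin(a) * d[0] - np.cos(a) * d[1])  # analytic gradient
    return a
```

Starting from two nearly coincident contacts, the loop drives them apart until the inward normals roughly cancel, a 2D caricature of the escape-resistant grasps the paper optimizes for.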
Scaling Cross-Embodiment World Models for Dexterous Manipulation
He, Zihao, Ai, Bo, Mu, Tongzhou, Liu, Yulin, Wan, Weikang, Fu, Jiawei, Du, Yilun, Christensen, Henrik I., Su, Hao
Cross-embodiment learning seeks to build generalist robots that operate across diverse morphologies, but differences in action spaces and kinematics hinder data sharing and policy transfer. This raises a central question: Is there any invariance that allows actions to transfer across embodiments? We conjecture that environment dynamics are embodiment-invariant, and that world models capturing these dynamics can provide a unified interface across embodiments. To learn such a unified world model, the crucial step is to design state and action representations that abstract away embodiment-specific details while preserving control relevance. To this end, we represent different embodiments (e.g., human hands and robot hands) as sets of 3D particles and define actions as particle displacements, creating a shared representation for heterogeneous data and control problems. A graph-based world model is then trained on exploration data from diverse simulated robot hands and real human hands, and integrated with model-based planning for deployment on novel hardware. Experiments on rigid and deformable manipulation tasks reveal three findings: (i) scaling to more training embodiments improves generalization to unseen ones, (ii) co-training on both simulated and real data outperforms training on either alone, and (iii) the learned models enable effective control on robots with varied degrees of freedom. These results establish world models as a promising interface for cross-embodiment dexterous manipulation.
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- (2 more...)
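The shared representation the abstract describes can be sketched in a few lines: every embodiment becomes a point set, and an action is a per-particle displacement, so heterogeneous hands map into one state and action space. The class and method names here are hypothetical, not the paper's code.

```python
import numpy as np

class ParticleScene:
    """Any embodiment (human hand, robot hand, object) as 3D particles."""

    def __init__(self, positions):
        self.positions = np.asarray(positions, dtype=float)  # shape (N, 3)

    def action_to(self, next_positions):
        """Embodiment-agnostic action: per-particle displacement."""
        return np.asarray(next_positions, dtype=float) - self.positions

    def step(self, action):
        """Apply displacements; a learned world model predicts this map."""
        return ParticleScene(self.positions + action)

# Two consecutive frames of a tracked hand; the 2 cm lift becomes the action.
hand_t0 = ParticleScene([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
hand_t1 = [[0.0, 0.0, 0.02], [0.1, 0.0, 0.02]]
act = hand_t0.action_to(hand_t1)
```

Because a simulated robot hand and a real human hand both reduce to particles and displacements, their trajectories can train the same graph-based world model, which is the data-sharing point the abstract makes.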
Dexterous Robotic Piano Playing at Scale
Chen, Le, Zhao, Yi, Schneider, Jan, Gao, Quankai, Guist, Simon, Qian, Cheng, Kannala, Juho, Schölkopf, Bernhard, Pajarinen, Joni, Büchler, Dieter
This work has been submitted to the IEEE for possible publication. Abstract--Endowing robot hands with human-level dexterity has been a long-standing goal in robotics. Bimanual robotic piano playing represents a particularly challenging task: it is high-dimensional, contact-rich, and requires fast, precise control. Our approach is built on three core components. First, we introduce an automatic fingering strategy based on Optimal Transport (OT), allowing the agent to autonomously discover efficient piano-playing strategies from scratch without demonstrations. Second, we conduct large-scale Reinforcement Learning (RL) by training more than 2,000 agents, each specialized in distinct music pieces, and aggregate their experience into a dataset named RP1M++, consisting of over one million trajectories for robotic piano playing. Extensive experiments and ablation studies highlight the effectiveness and scalability of our approach, advancing dexterous robotic piano playing at scale. Achieving human-level dexterity remains one of the central challenges in robotics. The difficulty stems from the breadth of challenges ranging from contact-rich manipulation to dynamic athletic tasks, each posing distinct demands. Manipulation tasks, such as grasping or reorienting objects [1], require sustained application of appropriate forces at moderate speeds across objects with diverse shapes, materials, and weight distributions. Dynamic tasks, such as juggling [2] or table tennis [3], involve frequent contact changes, demand high precision, and allow little tolerance for error due to the rarity of contact opportunities. The combination of requiring both precision and speed makes reproducing human-level dexterity particularly challenging.

Q. Gao is with the University of Southern California, CA 90007, United States (e-mail: quankaig@usc.edu). C. Qian is with Imperial College London, SW7 2AZ, London, United Kingdom (e-mail: c.qian24@imperial.ac.uk). J. Kannala is with the University of Oulu, 90570 Oulu, Finland. D. Büchler is also with the University of Alberta (Canada), the Alberta Machine Intelligence Institute (Amii), and holds a Canada CIFAR AI Chair.
- North America > Canada > Alberta (0.74)
- North America > United States > California (0.54)
- Europe > Finland > Northern Ostrobothnia > Oulu (0.44)
- (8 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Education > Educational Setting > Higher Education (0.54)
- Information Technology > Artificial Intelligence > Robots > Manipulation (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
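For a single chord, OT-based automatic fingering reduces to a one-to-one assignment of fingers to keys minimizing total travel cost, which is the discrete special case of optimal transport. The sketch below solves it with SciPy's assignment solver; the positions and cost are made-up numbers, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Current fingertip x-positions and the next chord's key positions (meters);
# all values are illustrative.
finger_x = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
key_x = np.array([0.05, 0.01, 0.09])

# cost[i, j]: squared travel for finger i to reach key j. Minimizing the
# total over a one-to-one matching is a tiny optimal-transport problem.
cost = (finger_x[:, None] - key_x[None, :]) ** 2
fingers, keys = linear_sum_assignment(cost)
fingering = sorted(zip(keys, fingers))  # (key index, assigned finger)
```

In an RL setup like the one described, such an assignment can supply the fingering reward or target without human-annotated fingerings, which is what lets the agents learn from scratch.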
Teenager builds advanced robot hand entirely from Lego pieces
A robot hand built from Lego pieces by a 16-year-old and his father can grab and move objects, displaying similar qualities to a leading robotic hand. Jared Lepora, a student at Bristol Grammar School, UK, began developing the hand when he was 14 with his father, Nathan Lepora, who works at the University of Bristol. The device borrows principles from cutting-edge robotic hands, including the Pisa/IIT SoftHand, but uses only off-the-shelf parts from Lego Mindstorms, a line of educational kits for building programmable robots. "My dad's a professor at Bristol University for robotics, and I really liked the designs [of robotic hands]," says Jared. "It just inspired me to do it in an educational format and out of Lego." The hand is driven by two motors using tendons, and each of its four fingers has three joints.
Educational SoftHand-A: Building an Anthropomorphic Hand with Soft Synergies using LEGO MINDSTORMS
Lepora, Jared K., Li, Haoran, Psomopoulou, Efi, Lepora, Nathan F.
Abstract-- This paper introduces an anthropomorphic robot hand built entirely using LEGO MINDSTORMS: the Educational SoftHand-A, a tendon-driven, highly-underactuated robot hand based on the Pisa/IIT SoftHand and related hands. To be suitable for an educational context, the design is constrained to use only standard LEGO pieces, with tests using common equipment available at home. The hand features dual motors driving an agonist/antagonist opposing pair of tendons on each finger, which are shown to result in reactive fine control. The finger motions are synchronized through soft synergies, implemented with a differential mechanism using clutch gears. Altogether, this design results in an anthropomorphic hand that can adaptively grasp a broad range of objects using a simple actuation and control mechanism. Since the hand can be constructed from LEGO pieces and uses state-of-the-art design concepts for robotic hands, it has the potential to educate and inspire children to learn about the frontiers of modern robotics.
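The adaptive grasping the abstract attributes to the differential can be modeled in a few lines: tendon travel from one motor is shared among the fingers, and fingers that have met the object stop taking up tendon, so the remainder flows to the still-free fingers. This is an illustrative toy model under assumed names and units, not the LEGO build's control code.

```python
def differential_split(motor_travel, blocked):
    """Share one motor's tendon travel across fingers via a differential:
    fingers already resting on the object ('blocked') stop taking up
    tendon, and their share reroutes to the still-free fingers, so the
    hand wraps around irregular shapes with a single control input."""
    free = [i for i, is_blocked in enumerate(blocked) if not is_blocked]
    if not free:
        return [0.0] * len(blocked)
    share = motor_travel / len(free)
    return [0.0 if is_blocked else share for is_blocked in blocked]

# Index finger meets the object first; the remaining fingers keep closing.
print(differential_split(3.0, [True, False, False, False]))
# [0.0, 1.0, 1.0, 1.0]
```

A second motor pulling the antagonist tendons would oppose this motion, which is the agonist/antagonist pairing the abstract credits for reactive fine control.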