gripper
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > United States > Iowa (0.04)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > British Columbia (0.04)
- Asia > Indonesia > Bali (0.04)
- Research Report > Experimental Study (0.93)
- Workflow (0.68)
Vine-inspired robotic gripper gently lifts heavy and fragile objects
In the horticultural world, some vines are especially grabby. As they grow, the woody tendrils can wrap around obstacles with enough force to pull down entire fences and trees. Inspired by vines' twisty tenacity, engineers at MIT and Stanford University have developed a robotic gripper that can snake around and lift a variety of objects, including a glass vase and a watermelon, offering a gentler approach compared to conventional gripper designs. A larger version of the robo-tendrils can also safely lift a human out of bed. The new bot consists of a pressurized box, positioned near the target object, from which long, vine-like tubes inflate and grow, like socks being turned inside out.
- North America > United States > Texas (0.05)
- North America > United States > Florida > Alachua County > Gainesville (0.05)
- Health & Medicine (0.49)
- Leisure & Entertainment > Sports > Soccer (0.30)
Artificial tendons give muscle-powered robots a boost
Our muscles are nature's actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate "biohybrid robots" made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers. But for the most part, these designs are limited in the amount of motion and power they can produce.
- North America > United States > Ohio (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Europe > Switzerland > Zürich > Zürich (0.05)
Development of a Compliant Gripper for Safe Robot-Assisted Trouser Dressing-Undressing
Unde, Jayant, Inden, Takumi, Wakayama, Yuki, Colan, Jacinto, Zhu, Yaonan, Aoyama, Tadayoshi, Hasegawa, Yasuhisa
In recent years, many countries, including Japan, have rapidly aging populations, making the preservation of seniors' quality of life a significant concern. For elderly people with impaired physical abilities, support for toileting is one of the most important issues. This paper details the design, development, experimental assessment, and potential application of a gripper system, with a focus on the unique requirements and obstacles involved in aiding elderly or hemiplegic individuals in dressing and undressing trousers. The gripper we propose seeks to strike the right balance between compliance and grasping force, ensuring precise manipulation while maintaining a safe, compliant interaction with the user. The gripper's integration into a custom-built robotic manipulator system provides a comprehensive solution for assisting hemiplegic individuals in their dressing and undressing tasks. Experimental evaluations and comparisons with existing studies demonstrate the gripper's ability to assist in both dressing and undressing of trousers in confined spaces with a high success rate. This research contributes to the advancement of assistive robotics, empowering elderly and physically impaired individuals to maintain their independence and improve their quality of life.
TacFinRay: Soft Tactile Fin-Ray Finger with Indirect Tactile Sensing for Robust Grasping
Nam, Saekwang, Deng, Bowen, Lee, Loong Yi, Rossiter, Jonathan M., Lepora, Nathan F.
We present a tactile-sensorized Fin-Ray finger that enables simultaneous detection of contact location and indentation depth through an indirect sensing approach. A hinge mechanism is integrated between the soft Fin-Ray structure and a rigid sensing module, allowing deformation and translation information to be transferred to a bottom crossbeam upon which sits an array of marker-tipped pins based on the biomimetic structure of the TacTip vision-based tactile sensor. Deformation patterns captured by an internal camera are processed using a convolutional neural network to infer contact conditions without directly sensing the finger surface. The finger design was optimized by varying pin configurations and hinge orientations, achieving 0.1 mm depth and 2 mm location-sensing accuracies. The perception system demonstrated robust generalization to various indenter shapes and sizes, and was applied to a pick-and-place task under uncertain picking positions, where the tactile feedback significantly improved placement accuracy. Overall, this work provides a lightweight, flexible, and scalable tactile sensing solution suitable for soft robotic structures where the sensing must be situated away from the contact interface.
I. INTRODUCTION
Tactile sensing is essential for achieving dexterous manipulation in robotic hands [1], [2]. For example, to perform delicate tasks like gently grasping and placing eggs or glass plates, humanoid robots such as Figure 02 and Tesla's Optimus will need fingertip-mounted tactile sensors to become truly capable [3]. To enhance robotic dexterity, researchers have developed vision-based tactile sensors (VBTSs) that take advantage of recent advancements in computer vision [4]-[7].
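The paper's pipeline regresses contact location and indentation depth from marker deformation patterns with a convolutional neural network. As a minimal illustrative stand-in (not the authors' model), the same regression step can be sketched with a linear least-squares fit over flattened marker-pin displacement vectors; all function and variable names below are hypothetical:

```python
import numpy as np

def fit_contact_model(displacements, targets):
    """Fit a linear map from flattened marker-pin displacements to
    (contact_location_mm, indentation_depth_mm).

    displacements: (n_samples, n_markers * 2) pin-tip offsets in pixels
    targets:       (n_samples, 2) ground-truth location and depth
    """
    # Append a bias column, then solve the least-squares problem X @ W ~= Y.
    X = np.hstack([displacements, np.ones((displacements.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def predict_contact(W, displacement):
    """Predict (location_mm, depth_mm) for one displacement vector."""
    x = np.append(displacement, 1.0)  # same bias term as in fitting
    return x @ W
```

A linear model cannot capture the contact ambiguities a CNN resolves from raw camera images, but it makes the input/output contract of the learned perception step concrete.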
Hoi! -- A Multimodal Dataset for Force-Grounded, Cross-View Articulated Manipulation
Engelbracht, Tim, Zurbrügg, René, Wohlrapp, Matteo, Büchner, Martin, Valada, Abhinav, Pollefeys, Marc, Blum, Hermann, Bauer, Zuria
We present a dataset for force-grounded, cross-view articulated manipulation that couples what is seen with what is done and what is felt during real human interaction. The dataset contains 3048 sequences across 381 articulated objects in 38 environments. Each object is operated under four embodiments: (i) human hand, (ii) human hand with a wrist-mounted camera, (iii) handheld UMI gripper, and (iv) a custom Hoi! gripper, where the tool embodiments provide synchronized end-effector forces and tactile sensing. Our dataset offers a holistic view of interaction understanding from video, enabling researchers not only to evaluate how well methods transfer between human and robotic viewpoints, but also to investigate underexplored modalities such as force sensing and prediction.
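The dataset pairs video with synchronized end-effector force and tactile streams. A common first step when consuming such multimodal recordings is aligning the higher-rate force stream to video frame timestamps; the sketch below shows a generic nearest-timestamp alignment (the actual Hoi! file format and field names are not specified here, so these are assumptions):

```python
import numpy as np

def align_force_to_frames(frame_ts, force_ts, force_vals):
    """For each video frame timestamp, pick the force sample whose
    timestamp is closest in time.

    frame_ts:   (n_frames,) sorted frame timestamps in seconds
    force_ts:   (n_force,) sorted force-sample timestamps in seconds
    force_vals: (n_force, ...) force readings, one row per timestamp
    """
    # Index of the first force sample at or after each frame time.
    idx = np.searchsorted(force_ts, frame_ts)
    idx = np.clip(idx, 1, len(force_ts) - 1)
    prev = idx - 1
    # Keep whichever neighbor is closer to the frame timestamp.
    choose_prev = (frame_ts - force_ts[prev]) < (force_ts[idx] - frame_ts)
    idx = np.where(choose_prev, prev, idx)
    return force_vals[idx]
```

Nearest-neighbor alignment is adequate when the force rate comfortably exceeds the frame rate; for lower-rate streams, linear interpolation between the two neighbors would be the usual refinement.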