A multi-armed robot for assisting with agricultural tasks

Robohub

In their paper Force Aware Branch Manipulation To Assist Agricultural Tasks, presented at IROS 2025, the authors proposed a methodology for safely manipulating branches to aid various agricultural tasks. We interviewed Madhav to find out more. Could you give us an overview of the problem you were addressing in the paper? Our work is motivated by StickBug [1], a multi-armed robotic system for precision pollination in greenhouse environments. One of the main challenges StickBug faces is that many flowers are partially or fully hidden within the plant canopy, making them difficult to detect and reach directly for pollination.


Learning Hierarchical Semantic Image Manipulation through Structured Representations

Neural Information Processing Systems

Understanding, reasoning about, and manipulating semantic concepts of images has been a fundamental research problem for decades. Previous work mainly focused on direct manipulation of the natural image manifold through color strokes, key-points, textures, and holes-to-fill. In this work, we present a novel hierarchical framework for semantic image manipulation. Key to our hierarchical framework is that we employ a structured semantic layout as our intermediate representation for manipulation. Initialized with coarse-level bounding boxes, our layout generator first creates a pixel-wise semantic layout capturing the object shape, object-object interactions, and object-scene relations. Then our image generator fills in the pixel-level textures guided by the semantic layout. Such a framework allows a user to manipulate images at the object level by adding, removing, and moving one bounding box at a time. Experimental evaluations demonstrate the advantages of the hierarchical manipulation framework over existing image generation and context hole-filling models, both qualitatively and quantitatively. Benefits of the hierarchical framework are further demonstrated in applications such as semantic object manipulation, interactive image editing, and data-driven image manipulation.
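The two-stage hierarchy the abstract describes (coarse bounding boxes → pixel-wise semantic layout → textured image) can be illustrated with a minimal toy sketch. Everything here is an illustrative stand-in: `rasterize_layout` and `fill_textures` are hypothetical names, and the "generators" are replaced by simple box rasterization and a flat per-class color lookup rather than the paper's learned models.

```python
import numpy as np

def rasterize_layout(boxes, labels, shape):
    """Stage-1 stand-in: turn coarse bounding boxes into a pixel-wise
    semantic label map (the paper's layout generator additionally learns
    object shapes and object-object/object-scene relations)."""
    layout = np.zeros(shape, dtype=np.int64)  # 0 = background
    for (y0, x0, y1, x1), lab in zip(boxes, labels):
        layout[y0:y1, x0:x1] = lab  # later boxes overwrite earlier ones
    return layout

def fill_textures(layout, palette):
    """Stage-2 stand-in: fill pixel-level appearance guided by the layout
    (here a flat color per class instead of a learned image generator)."""
    return palette[layout]  # integer-array indexing: (H, W) -> (H, W, 3)

palette = np.array([[0, 0, 0],       # background
                    [200, 50, 50],   # class 1
                    [50, 200, 50]],  # class 2
                   dtype=np.uint8)

layout = rasterize_layout([(2, 2, 6, 6), (4, 4, 8, 8)], [1, 2], (10, 10))
image = fill_textures(layout, palette)

# Object-level editing = moving one box and re-running both stages:
edited_layout = rasterize_layout([(0, 0, 4, 4), (4, 4, 8, 8)], [1, 2], (10, 10))
edited_image = fill_textures(edited_layout, palette)
```

The point of the toy version is the interface, not the models: edits are expressed on boxes, the layout mediates between boxes and pixels, and the image is regenerated from the layout.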


Reversible, detachable robotic hand redefines dexterity

Robohub

With their opposable thumbs, multiple joints, and gripping skin, human hands are often considered the pinnacle of dexterity, and many robotic hands are designed in their image. But having been shaped by the slow process of evolution, human hands are far from optimized, with the biggest drawbacks including our single, asymmetrical thumbs and attachment to arms with limited mobility. "We can easily see the limitations of the human hand when attempting to reach objects underneath furniture or behind shelves, or performing simultaneous tasks like holding a bottle while picking up a chip can," says Aude Billard, head of the Learning Algorithms and Systems Laboratory (LASA) in EPFL's School of Engineering. "Likewise, accessing objects positioned behind the hand while keeping the grip stable can be extremely challenging, requiring awkward wrist contortions or body repositioning." A team composed of Billard, LASA researcher Xiao Gao, and Kai Junge and Josie Hughes from the Computational Robot Design and Fabrication Lab designed a robotic hand that overcomes these challenges.