
Collaborating Authors

 Gupta, Satyandra K.


Robotic Compliant Object Prying Using Diffusion Policy Guided by Vision and Force Observations

arXiv.org Artificial Intelligence

The growing adoption of batteries in the electric vehicle industry and various consumer products has created an urgent need for effective recycling solutions. These products often contain a mix of compliant and rigid components, making robotic disassembly a critical step toward achieving scalable recycling processes. Diffusion policy has emerged as a promising approach for learning low-level skills in robotics. To effectively apply diffusion policy to contact-rich tasks, incorporating force feedback is essential. In this paper, we apply diffusion policy with vision and force observations to a compliant object prying task. However, when low-dimensional contact force is combined with high-dimensional image data, the force information can be diluted. To address this issue, we propose a method that effectively integrates force with image data in the diffusion policy observations. We validate our approach on a battery prying task that demands high precision and multi-step execution. Our model achieves a 96% success rate in diverse scenarios, a 57% improvement over the vision-only baseline. Our method also demonstrates zero-shot transfer to unseen objects and battery types. Supplementary videos and implementation code are available on our project website: https://rros-lab.github.io/diffusion-with-force.github.io/
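The key idea of keeping the low-dimensional wrench from being drowned out by image features can be pictured as a separate force encoder whose output is concatenated with the visual features before conditioning the policy. The sketch below is a minimal illustration under assumed layer sizes and dimensions (the ForceImageEncoder name, the 6-D wrench input, and the 512-D image feature are placeholders), not the authors' implementation.

```python
# Minimal sketch: lift the 6-D force/torque reading with a small MLP so it
# carries comparable weight to the image features in the concatenated
# observation fed to a diffusion policy. Dimensions are assumptions.
import torch
import torch.nn as nn

class ForceImageEncoder(nn.Module):
    def __init__(self, img_feat_dim=512, force_dim=6, force_feat_dim=64):
        super().__init__()
        # Small MLP lifts the low-dimensional wrench into a richer feature.
        self.force_mlp = nn.Sequential(
            nn.Linear(force_dim, 128), nn.ReLU(),
            nn.Linear(128, force_feat_dim), nn.ReLU(),
        )
        self.img_proj = nn.Linear(img_feat_dim, 256)

    def forward(self, img_feat, wrench):
        # img_feat: (B, img_feat_dim) from any vision backbone
        # wrench:   (B, 6) force/torque from a wrist sensor
        f = self.force_mlp(wrench)
        v = self.img_proj(img_feat)
        return torch.cat([v, f], dim=-1)  # conditioning vector for the policy

enc = ForceImageEncoder()
obs = enc(torch.randn(2, 512), torch.randn(2, 6))
print(obs.shape)  # torch.Size([2, 320])
```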


Hierarchical Optimization-based Control for Whole-body Loco-manipulation of Heavy Objects

arXiv.org Artificial Intelligence

In recent years, the field of legged robotics has seen growing interest in enhancing the capabilities of these robots through the integration of articulated robotic arms. However, achieving successful loco-manipulation, especially when interacting with heavy objects, is far from straightforward, as object manipulation can introduce substantial disturbances that impact the robot's locomotion. This paper presents a novel framework for legged loco-manipulation that considers whole-body coordination through a hierarchical optimization-based control framework. First, an online manipulation planner computes the manipulation forces and the task-based reference trajectory of the manipulated object. Then, pose optimization aligns the robot's trajectory with kinematic constraints. The resulting robot reference trajectory is executed by a linear MPC controller that incorporates the desired manipulation forces into its prediction model. Our approach has been validated in simulation and hardware experiments, highlighting the necessity of whole-body optimization compared to a baseline locomotion MPC when interacting with heavy objects. Experimental results with a Unitree Aliengo equipped with a custom-made robotic arm showcase its ability to lift and carry an 8 kg payload and to manipulate doors.
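How a known manipulation force can be folded into a linear MPC prediction model is illustrated by the toy sketch below, which uses a single-mass model and cvxpy. The dynamics, horizon, and weights are illustrative assumptions, not the paper's whole-body formulation.

```python
# Toy sketch: a linear MPC whose prediction model includes a known external
# manipulation force as an affine disturbance term. All numbers are assumed.
import numpy as np
import cvxpy as cp

dt, m, N = 0.02, 25.0, 20               # time step, mass (kg), horizon
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt / m]])         # control: applied force
f_ext = -40.0                           # known manipulation force on the base (N)
d = np.array([0.0, dt * f_ext / m])     # affine term in the prediction model

x0 = np.array([0.0, 0.0])
x_ref = np.array([0.5, 0.0])            # desired position and velocity

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1] - x_ref) + 1e-4 * cp.sum_squares(u[:, k])
    # The prediction model accounts for the manipulation force at every step.
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + d]
cp.Problem(cp.Minimize(cost), cons).solve()
print("first control force:", float(u.value[0, 0]))
```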


Human-Supervised Semi-Autonomous Mobile Manipulators for Safely and Efficiently Executing Machine Tending Tasks

arXiv.org Artificial Intelligence

Mobile manipulators can be used for machine tending and material handling tasks in small volume manufacturing applications. These applications usually have semi-structured work environments. The use of a fully autonomous mobile manipulator for such applications can be risky, as an inaccurate model of the workspace may result in damage to expensive equipment. On the other hand, the use of a fully teleoperated mobile manipulator may require a significant amount of operator time. In this paper, a semi-autonomous mobile manipulator is developed for safely and efficiently carrying out machine tending tasks under human supervision. The robot is capable of generating motion plans from the high-level task description and presenting simulation results to the human for approval. The human operator can authorize the robot to execute the automatically generated plan or provide additional input to the planner to refine the plan. If the level of uncertainty in some parts of the workspace model is high, then the human can decide to perform teleoperation to safely execute the task. Our preliminary user trials show that non-expert operators can quickly learn to use the system and perform machine tending tasks.
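The approve / refine / teleoperate loop described above can be sketched schematically as below. All functions, plans, and operator inputs are toy stand-ins (assumed, not the authors' software) intended only to make the control flow concrete.

```python
# Schematic, runnable sketch of a human-supervised machine-tending loop:
# the robot proposes a plan, the operator approves it, refines it, or
# falls back to teleoperation. Everything here is a placeholder.
def generate_motion_plan(task, hints=None):
    return {"task": task, "hints": hints, "waypoints": ["approach", "grasp", "place"]}

def simulate(plan):
    print("simulated plan:", plan["waypoints"], "hints:", plan["hints"])

def supervised_machine_tending(task, operator_choices):
    plan = generate_motion_plan(task)
    for choice in operator_choices:      # operator input, scripted here for the demo
        simulate(plan)                   # present predicted execution for approval
        if choice == "approve":
            return f"executing {task} autonomously"
        if choice == "refine":
            plan = generate_motion_plan(task, hints="keep 10 cm clearance from chuck")
        if choice == "teleop":
            return f"operator teleoperating {task}"
    return "aborted"

print(supervised_machine_tending("load CNC part", ["refine", "approve"]))
```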


An Alert-Generation Framework for Improving Resiliency in Human-Supervised, Multi-Agent Teams

arXiv.org Artificial Intelligence

Human supervision in multi-agent teams is a critical requirement to ensure that the decision-maker's risk preferences are utilized to assign tasks to robots. In stressful complex missions that pose risk to human health and life, such as humanitarian-assistance and disaster-relief missions, human mistakes or delays in tasking robots can adversely affect the mission. To assist human decision making in such missions, we present an alert-generation framework capable of detecting various modes of potential failure or performance degradation. We demonstrate that our framework, based on state machine simulation and formal methods, offers probabilistic modeling to estimate the likelihood of unfavorable events. We introduce smart simulation, which offers a computationally efficient way of detecting low-probability situations compared to standard Monte Carlo simulations. Moreover, for a certain class of problems, our inference-based method can provide guarantees on correctly detecting task failures. With the advancement of robotic systems and artificial intelligence, there is growing interest in more intelligent multi-agent teams working collaboratively to accomplish missions. These teams show especially great promise in dull, dirty, and dangerous applications, such as military operations and humanitarian-assistance and disaster-relief (HA/DR) efforts (Gregory et al. 2016). Despite the widespread use and ever-increasing capabilities of robotic systems, researchers anticipate that human team members will continue to be necessary - and not be replaced by technology - because of various advantages, including diverse expertise, adaptive decision-making, and the potential for synergy (DeCostanza et al. 2018). More importantly, HA/DR missions involve tasks with literal life-or-death consequences, so human-in-the-loop operations are mandatory to ensure proper management of resources and critical decision-making authority.
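As a point of reference for why rare failures are hard to detect by plain sampling, the sketch below estimates the probability of an unfavorable event in a toy agent state machine with standard Monte Carlo simulation. The model and failure rates are assumptions; the paper's smart simulation targets exactly the regime where such events become too rare for this brute-force approach to remain efficient.

```python
# Toy illustration (assumed model, not the paper's framework): estimate the
# probability of mission degradation in a simple agent state machine by
# standard Monte Carlo sampling.
import random

def run_mission(p_fail_per_task=0.01, n_tasks=5, seed=None):
    rng = random.Random(seed)
    # State machine: each task either completes or fails; a single failure
    # without timely human re-tasking degrades the mission.
    for _ in range(n_tasks):
        if rng.random() < p_fail_per_task:
            return False   # mission degraded
    return True

def monte_carlo_failure_rate(trials=100_000):
    failures = sum(not run_mission(seed=i) for i in range(trials))
    return failures / trials

print(f"estimated mission-degradation probability: {monte_carlo_failure_rate():.4f}")
```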


Toward Estimating Task Execution Confidence for Robotic Bin-Picking Applications

AAAI Conferences

We present an approach geared toward estimating task execution confidence for robotic bin-picking applications. This requires estimating execution confidence for all constituent subtasks, including part recognition and pose estimation, singulation, transport, and fine positioning. This paper is focused on computing associated confidence parameters for the part recognition and pose estimation subtask. In particular, our approach allows a robot to evaluate how good the part recognition and pose estimation is, based on a confidence measure, and thereby determine whether to proceed with the task execution (part singulation) or to request help from a human in order to resolve the associated failure. The value of a mean-square distance metric at the local minimum where the part matching solution is found is used as a surrogate for the confidence parameter. Experiments with a Baxter robot are used to illustrate our approach.
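The threshold decision on the mean-square matching residual can be sketched as follows. The point sets, pose, and threshold value below are illustrative assumptions, not the paper's calibrated confidence measure.

```python
# Minimal sketch: use the residual mean-square distance at the pose-matching
# solution as a confidence surrogate, then proceed or request human help.
import numpy as np

def matching_mse(model_pts, scene_pts, R, t):
    # mean-square distance between scene points and the posed model
    aligned = model_pts @ R.T + t
    return float(np.mean(np.sum((aligned - scene_pts) ** 2, axis=1)))

def decide(model_pts, scene_pts, R, t, mse_threshold=1e-5):
    mse = matching_mse(model_pts, scene_pts, R, t)
    if mse < mse_threshold:
        return f"confident (mse={mse:.2e}): proceed with part singulation"
    return f"low confidence (mse={mse:.2e}): request help from the operator"

model = np.random.rand(50, 3)
R, t = np.eye(3), np.zeros(3)                   # estimated pose (identity for the demo)
scene = model + 0.001 * np.random.randn(50, 3)  # slightly noisy observation
print(decide(model, scene, R, t))
```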


Towards Integrating Hierarchical Goal Networks and Motion Planners to Support Planning for Human Robot Collaboration in Assembly Cells

AAAI Conferences

Low-level motion planning techniques must be combined with high-level task planning formalisms in order to generate realistic plans that can be carried out by humans and robots. Previous attempts to integrate these two planning formalisms mostly used either Classical Planning or HTN Planning. Recently, we developed Hierarchical Goal Networks (HGNs), a new hierarchical planning formalism that combines the advantages of HTN and Classical planning, while mitigating some of the disadvantages of each individual formalism. In this paper, we describe our ongoing research on designing a planning formalism and algorithm that exploits the unique features of HGNs to better integrate task and motion planning. We also describe how the proposed planning framework can be instantiated to solve assembly planning problems involving human-robot teams.
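One way to picture the interleaving of hierarchical goal refinement with motion planning is the toy sketch below, in which a goal decomposition is accepted only if a stand-in motion-planner query deems every primitive step reachable. The methods, goals, and feasibility test are placeholders, not the paper's HGN algorithm.

```python
# Schematic sketch: prune goal decompositions whose primitive steps are not
# motion-feasible, so task and motion planning inform each other early.
def motion_feasible(step, robot_reach=1.0):
    # stand-in for a motion planner query (here, a crude reachability check)
    return step["distance"] <= robot_reach

# Each method decomposes a goal into primitive steps (toy data).
METHODS = {
    "assemble(panel)": [
        [{"name": "robot_place(panel)", "distance": 1.4}],                 # robot alone: out of reach
        [{"name": "human_hold(panel)", "distance": 0.0},
         {"name": "robot_fasten(panel)", "distance": 0.6}],                # human-robot team
    ],
}

def refine(goal):
    for decomposition in METHODS.get(goal, []):
        if all(motion_feasible(step) for step in decomposition):
            return [step["name"] for step in decomposition]
    return None

print(refine("assemble(panel)"))   # -> ['human_hold(panel)', 'robot_fasten(panel)']
```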