Collaborating Authors

 Stolkin, Rustam


Geometrically-Aware One-Shot Skill Transfer of Category-Level Objects

arXiv.org Artificial Intelligence

Robotic manipulation of unfamiliar objects in new environments is challenging and requires extensive training or laborious pre-programming. We propose a new skill transfer framework, which enables a robot to transfer complex object manipulation skills and constraints from a single human demonstration. Our approach addresses the challenge of skill acquisition and task execution by deriving geometric representations from demonstrations focusing on object-centric interactions. By leveraging the Functional Maps (FM) framework, we efficiently map interaction functions between objects and their environments, allowing the robot to replicate task operations across objects of similar topologies or categories, even when they have significantly different shapes. Additionally, our method incorporates a Task-Space Imitation Algorithm (TSIA) which generates smooth, geometrically-aware robot paths to ensure the transferred skills adhere to the demonstrated task constraints. We validate the effectiveness and adaptability of our approach through extensive experiments, demonstrating successful skill transfer and task execution in diverse real-world environments without requiring additional training.
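
At the core of this pipeline, a functional map is a small matrix that carries the spectral coefficients of functions from one shape to another. Below is a minimal sketch of that step, assuming truncated Laplace-Beltrami eigenbases and matched descriptor functions are already available for both objects; the function names and the plain least-squares fit are illustrative, not the paper's exact formulation.

```python
# Minimal functional-map sketch, assuming precomputed spectral bases
# (phi_src, phi_tgt) and matched descriptor functions on each shape.
import numpy as np

def fit_functional_map(phi_src, phi_tgt, desc_src, desc_tgt):
    """Solve C @ A ~= B in a least-squares sense, where A and B hold the
    spectral coefficients of matching descriptors on source and target."""
    A = np.linalg.pinv(phi_src) @ desc_src   # (k_src, d) source coefficients
    B = np.linalg.pinv(phi_tgt) @ desc_tgt   # (k_tgt, d) target coefficients
    # C @ A = B  <=>  A.T @ C.T = B.T, solved column-wise by lstsq.
    X, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return X.T                               # C, shape (k_tgt, k_src)

def transfer_function(C, phi_src, phi_tgt, f_src):
    """Transfer a scalar function (e.g. an interaction/contact map) from the
    source object to the target object through the functional map."""
    a = np.linalg.pinv(phi_src) @ f_src      # spectral coefficients on source
    return phi_tgt @ (C @ a)                 # reconstruct on target vertices
```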


A Framework for Semantics-based Situational Awareness during Mobile Robot Deployments

arXiv.org Artificial Intelligence

Deployment of robots into hazardous environments typically involves a "Human-Robot Teaming" (HRT) paradigm, in which a human supervisor interacts with a remotely operating robot inside the hazardous zone. Situational Awareness (SA) is vital for enabling HRT, to support navigation, planning, and decision-making. This paper explores issues of higher-level "semantic" information and understanding in SA. In semi-autonomous or variable-autonomy paradigms, different types of semantic information may be important, in different ways, for both the human operator and an autonomous agent controlling the robot. We propose a generalizable framework for acquiring and combining multiple modalities of semantic-level SA during remote deployments of mobile robots, and demonstrate it with an example application of search and rescue (SAR) in disaster response robotics. We propose a set of "environment semantic indicators" that can reflect a variety of different types of semantic information. Based on these indicators, we propose a metric, "Situational Semantic Richness (SSR)", which combines multiple semantic indicators to summarise the overall situation of the environment. A high SSR indicates that an information-rich and complex situation has been encountered, which may require advanced reasoning by robots and humans and hence the attention of the expert human operator. The framework is tested on a Jackal robot in a mock-up disaster response environment. Experimental results demonstrate that the proposed semantic indicators are sensitive to changes in different modalities of semantic information across different scenes, and that the SSR metric reflects overall semantic changes in the situations encountered. Situational Awareness is vital for robots deployed in the field to function with sufficient autonomy, resiliency, and robustness.
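
To make the indicator-combination step concrete, here is a minimal sketch of an SSR-style score, assuming each indicator has been normalised to [0, 1]; the indicator names, weights, and the weighted-average fusion rule are illustrative assumptions, not the paper's definition.

```python
# Illustrative SSR-style fusion of per-modality semantic indicators.
from typing import Dict

def situational_semantic_richness(indicators: Dict[str, float],
                                  weights: Dict[str, float]) -> float:
    """Combine normalised semantic indicators (each in [0, 1]) into one score."""
    total = sum(weights.get(name, 1.0) * value
                for name, value in indicators.items())
    return total / sum(weights.get(name, 1.0) for name in indicators)

# Example with hypothetical indicators observed in one scene.
ssr = situational_semantic_richness(
    indicators={"object_diversity": 0.7, "text_signage": 0.2, "human_presence": 0.9},
    weights={"object_diversity": 1.0, "text_signage": 0.5, "human_presence": 2.0},
)
print(f"SSR = {ssr:.2f}")  # higher => richer, more complex situation
```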


The ATTUNE model for Artificial Trust Towards Human Operators

arXiv.org Artificial Intelligence

This paper presents a novel method to quantify trust in Human-Robot Interaction (HRI). It proposes an HRI framework for estimating a robot's trust towards a human in the context of a narrow, specified task. The framework produces a real-time estimate of an AI agent's Artificial Trust towards a human partner interacting with a mobile teleoperation robot. The approach is based on principles drawn from Theory of Mind, including information about the human's state, actions, and intent. The framework instantiates these principles as the ATTUNE model of Artificial Trust Towards Human Operators. The model uses metrics on the operator's state of attention, navigational intent, actions, and performance to quantify trust towards them. The model is tested on a pre-existing dataset that includes recordings (ROSbags) of a human trial in a simulated disaster response scenario. ATTUNE's performance is evaluated through qualitative and quantitative analyses, whose results provide insight into the next stages of the research and help refine the proposed approach.
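
As a rough illustration of how the four operator metrics named above might be fused into a single trust estimate, here is a minimal sketch; the equal weighting and the exponential-smoothing update are assumptions for illustration, not the ATTUNE model's actual equations.

```python
# Illustrative fusion of operator metrics into a scalar trust estimate.
def artificial_trust(attention: float, intent_alignment: float,
                     action_quality: float, performance: float,
                     prev_trust: float = 0.5, alpha: float = 0.2) -> float:
    """All inputs are normalised to [0, 1]; returns updated trust in [0, 1]."""
    evidence = 0.25 * (attention + intent_alignment + action_quality + performance)
    # Exponential smoothing so trust evolves gradually over the interaction.
    return (1.0 - alpha) * prev_trust + alpha * evidence
```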


Self-supervised cross-modality learning for uncertainty-aware object detection and recognition in applications which lack pre-labelled training data

arXiv.org Artificial Intelligence

This paper shows how an uncertainty-aware deep neural network can be trained to detect, recognise, and localise objects in 2D RGB images, in applications lacking annotated training datasets. We propose a self-supervised teacher-student pipeline, in which a relatively simple teacher classifier, trained with only a few labelled 2D thumbnails, automatically processes a larger body of unlabelled RGB-D data to teach a student network based on a modified YOLOv3 architecture. Firstly, 3D object detection with back projection is used to automatically extract and teach 2D detection and localisation information to the student network. Secondly, a weakly supervised 2D thumbnail classifier, with minimal training on a small number of hand-labelled images, is used to teach object category recognition. Thirdly, we use a Gaussian Process (GP) to encode and teach a robust uncertainty estimation functionality, so that the student can output a confidence score with each categorisation. The resulting student significantly outperforms the same YOLO architecture trained directly on the same amount of labelled data. Our GP-based approach yields robust and meaningful uncertainty estimates for complex industrial object classifications. The end-to-end network is also capable of the real-time processing needed for robotics applications. Our method can be applied to many important industrial tasks where labelled datasets are typically unavailable. In this paper, we demonstrate an example of detection, localisation, and object category recognition of nuclear mixed-waste materials in highly cluttered and unstructured scenes. This is critical for robotic sorting and handling of legacy nuclear waste, which poses complex environmental remediation challenges in many nuclearised nations.
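
The GP-based uncertainty head can be pictured as a lightweight probabilistic classifier fitted on teacher-labelled embeddings. A minimal sketch, assuming a feature embedding is available for each detected thumbnail (scikit-learn's GP classifier stands in for the paper's GP formulation):

```python
# Illustrative GP confidence head over detector embeddings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def train_uncertainty_head(features: np.ndarray, labels: np.ndarray):
    """Fit a GP classifier on teacher-labelled embeddings (n_samples, n_dims)."""
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gp.fit(features, labels)
    return gp

def classify_with_confidence(gp, feature: np.ndarray):
    """Return (predicted_class, confidence) for one embedding vector."""
    proba = gp.predict_proba(feature.reshape(1, -1))[0]
    return int(np.argmax(proba)), float(np.max(proba))
```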


Semi-autonomous Robotic Disassembly Enhanced by Mixed Reality

arXiv.org Artificial Intelligence

In this study, we introduce "SARDiM," a modular semi-autonomous platform enhanced with mixed reality for industrial disassembly tasks. Through a case study focused on EV battery disassembly, SARDiM integrates mixed reality, object segmentation, teleoperation, force feedback, and variable autonomy. Utilising ROS, Unity, and MATLAB, alongside a joint impedance controller, SARDiM facilitates teleoperated disassembly. The approach uses FastSAM for real-time object segmentation, generating data that is subsequently processed by a cluster analysis algorithm to determine the centroid and orientation of the components, categorising them by size and disassembly priority. This data guides the MoveIt platform in trajectory planning for the Franka robot arm. SARDiM can switch between two teleoperation modes: manual, and semi-autonomous with variable autonomy. Each was evaluated using four different Interface Methods (IMs): direct view, monitor feed, mixed reality with monitor feed, and point-cloud mixed reality. Evaluations across the eight mode-IM combinations demonstrated a 40.61% decrease in joint-limit violations using Mode 2. Moreover, Mode 2-IM4 outperformed Mode 1-IM1, achieving a 2.33% time reduction while considerably increasing safety, making it optimal for operating in hazardous environments at a safe distance, with the same ease of use as teleoperation with a direct view of the environment.
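
The cluster-analysis step (centroid, orientation, and size of each segmented component) can be approximated with a simple PCA over the component's pixel coordinates. A minimal sketch, offered as an illustrative stand-in since the abstract does not specify the exact algorithm:

```python
# Illustrative centroid/orientation estimation for one segmented component.
import numpy as np

def centroid_and_orientation(points: np.ndarray):
    """points: (N, 2) pixel (or planar point) coordinates of one component."""
    centroid = points.mean(axis=0)
    # The principal axis of the point spread gives the component's orientation.
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major_axis = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(major_axis[1], major_axis[0])  # radians
    size = float(eigvals.max())                       # proxy for component size
    return centroid, angle, size
```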


Learning effects in variable autonomy human-robot systems: how much training is enough?

arXiv.org Artificial Intelligence

This paper investigates learning effects and human operator training practices in variable autonomy robotic systems. These factors are known to affect the performance of a human-robot system, yet they are frequently overlooked. We present results from an experiment, inspired by a search and rescue scenario, in which operators remotely controlled a mobile robot with either Human-Initiative (HI) or Mixed-Initiative (MI) control. The evidence suggests learning effects in both the primary navigation task and the secondary (distractor) task performance. Further evidence is provided that MI and HI performance in a pure navigation task is equal. Lastly, guidelines are proposed for experimental design and operator training practices.


Imitation learning for sim-to-real transfer of robotic cutting policies based on residual Gaussian process disturbance force model

arXiv.org Artificial Intelligence

Robotic cutting, or milling, plays a significant role in applications such as disassembly, decommissioning, and demolition. Planning and control of cutting in real-world, uncertain environments is a complex task, with the potential to benefit from simulated training environments. This letter focuses on sim-to-real transfer for robotic cutting policies, addressing the need to transfer policies effectively from simulation to practical implementation. We extend our previous domain generalisation approach to learning cutting tasks, based on a mechanistic model-based simulation framework, by proposing a hybrid approach for sim-to-real transfer that combines a milling process force model with a residual Gaussian process (GP) disturbance force model, learned from either single or multiple real-world cutting force examples. We demonstrate successful sim-to-real transfer of a robotic cutting policy without the need for fine-tuning on the real robot setup. The proposed approach autonomously adapts to materials with differing structural and mechanical properties. Furthermore, we demonstrate that the proposed method outperforms fine-tuning or re-training alone.
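
The hybrid idea is that the mechanistic process model explains most of the cutting force, and a GP absorbs whatever residual the real setup adds. A minimal sketch, with a deliberately simplified placeholder for the mechanistic model (the paper's actual process model is more detailed):

```python
# Illustrative residual-GP force model: mechanistic prediction + learned residual.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def mechanistic_force(x: np.ndarray, k_c: float = 50.0) -> np.ndarray:
    """Placeholder process model: force ~ k_c * feed_rate * depth_of_cut."""
    return k_c * x[:, 0] * x[:, 1]

def fit_residual_gp(x_real: np.ndarray, f_real: np.ndarray):
    """Fit a GP to the disturbance left unexplained by the mechanistic model."""
    residuals = f_real - mechanistic_force(x_real)
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(x_real, residuals)
    return gp

def predict_force(gp, x: np.ndarray):
    """Hybrid prediction: mechanistic model + learned residual (with std)."""
    residual_mean, residual_std = gp.predict(x, return_std=True)
    return mechanistic_force(x) + residual_mean, residual_std
```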


Learning robotic milling strategies based on passive variable operational space interaction control

arXiv.org Artificial Intelligence

This paper addresses the problem of robotic cutting during disassembly of products for materials separation and recycling. Waste handling applications differ from milling in manufacturing processes, as they engender considerable variety in the materials encountered. To address this challenge, we propose a learning-based approach incorporating elements of interaction control, in which the robot can adapt key parameters, such as feed rate, depth of cut, and mechanical compliance, during task execution. We show how a mathematical model of cutting mechanics, embedded in a simulation environment, can be used to learn a milling task online without user assistance. We develop a framework for controlling a robot using this strategy that allows the stiffness of the robot arm to be modulated over time to best satisfy metrics of productivity and safety (e.g. by avoiding force limits), similarly to how a human operator can vary muscular tension to accomplish different tasks. We posit that the proposed method can substitute for a trial-and-error strategy of selecting process parameters for disassembly of novel products, or be integrated with existing planning approaches to adjust the parameters of milling tasks online. The simulation approach was validated on a real robot setup based on four case study materials with varying structural and mechanical properties.
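
The interaction-control element can be pictured as a Cartesian impedance law whose stiffness is modulated over time by the learned strategy. A minimal sketch, in which the critically-damped damping rule and the stiffness-modulation heuristic are illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative variable-stiffness Cartesian impedance control.
import numpy as np

def impedance_force(x, x_dot, x_des, x_des_dot, stiffness):
    """F = K (x_d - x) + D (x_d_dot - x_dot), with diagonal K and D."""
    K = np.diag(stiffness)
    D = np.diag(2.0 * np.sqrt(stiffness))  # critically damped, unit mass
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

def modulate_stiffness(measured_force, base_stiffness, force_limit=40.0):
    """Heuristic: soften the arm as measured forces approach the limit."""
    scale = np.clip(1.0 - np.abs(measured_force) / force_limit, 0.1, 1.0)
    return base_stiffness * scale
```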


Haptic-guided assisted telemanipulation approach for grasping desired objects from heaps

arXiv.org Artificial Intelligence

This paper presents an assisted telemanipulation framework for reaching and grasping desired objects from clutter. Specifically, the developed system allows an operator to select an object from a cluttered heap and effortlessly grasp it, with the system assisting in selecting the best grasp and guiding the operator to reach it. To this end, we propose an object pose estimation scheme, a dynamic grasp re-ranking strategy, and a reach-to-grasp hybrid force/position trajectory guidance controller. We integrate them, along with our previous SpectGRASP grasp planner, into a classical bilateral teleoperation system that allows the operator to control the robot using a haptic device while receiving force feedback. For a user-selected object, our system first identifies the object in the heap and estimates its full six degrees of freedom (DoF) pose. Then, SpectGRASP generates a set of ordered, collision-free grasps for this object. Based on the current location of the robot gripper, the proposed grasp re-ranking strategy dynamically updates the best grasp. In assisted mode, the hybrid controller generates a zero force-torque path along the reach-to-grasp trajectory while automatically controlling the orientation of the robot. We conducted real-world experiments using a haptic device and a 7-DoF cobot with a 2-finger gripper to validate the individual components of our telemanipulation system and its overall functionality. The results demonstrate the effectiveness of our system in assisting humans to clear cluttered scenes.
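
The re-ranking strategy can be illustrated as a score that trades grasp quality against the distance the gripper would have to travel from its current pose. A minimal sketch, in which the scoring rule and the lambda weight are assumptions rather than SpectGRASP's internals:

```python
# Illustrative distance-penalised grasp re-ranking.
import numpy as np

def rerank_grasps(grasps, gripper_position, lam=0.5):
    """grasps: list of (quality, position) pairs, position a (3,) array.
    Returns grasps sorted best-first under the distance-penalised score."""
    def score(grasp):
        quality, position = grasp
        return quality - lam * np.linalg.norm(position - gripper_position)
    return sorted(grasps, key=score, reverse=True)
```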


Local Region-to-Region Mapping-based Approach to Classify Articulated Objects

arXiv.org Artificial Intelligence

Autonomous robots operating in real-world environments encounter a variety of objects that can be either rigid or articulated. Knowledge of these object properties not only helps in designing appropriate manipulation strategies but also aids in developing reliable tracking and pose estimation techniques for many robotic and vision applications. In this context, this paper presents a registration-based, local region-to-region mapping approach to classify an object as either articulated or rigid. Using point clouds of the object, the proposed method performs classification by estimating unique local transformations between point clouds over the observed sequence of the object's movements. A significant advantage of the proposed method is that it is a constraint-free approach that can classify any articulated object and is not limited to a specific type of articulation. Additionally, it is a model-free approach with no learning components, which means it can classify whether an object is articulated without requiring any object models or labelled data. We analyze the performance of the proposed method on two publicly available benchmark datasets containing a combination of articulated and rigid objects, and observe that it classifies articulated and rigid objects with good accuracy.
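
The region-to-region idea can be illustrated by fitting a rigid transform (via the Kabsch algorithm) to each local region across two observations and checking whether the per-region transforms agree. A minimal sketch, in which the region splitting and the disagreement threshold are illustrative assumptions:

```python
# Illustrative articulated-vs-rigid test via per-region rigid registration.
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, q0 - R @ p0

def is_articulated(regions_t0, regions_t1, rot_tol_deg=5.0):
    """regions_t*: lists of (N_i, 3) arrays, same region order in both frames."""
    rotations = [kabsch(P, Q)[0] for P, Q in zip(regions_t0, regions_t1)]
    # Compare every region's rotation to the first region's rotation.
    for R in rotations[1:]:
        rel = rotations[0].T @ R
        angle = np.degrees(np.arccos(np.clip((np.trace(rel) - 1) / 2, -1, 1)))
        if angle > rot_tol_deg:
            return True   # parts moved differently => articulated
    return False          # one consistent rigid motion => rigid
```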