Abolghasemi, Pooya


Pay attention! - Robustifying a Deep Visuomotor Policy through Task-Focused Attention

arXiv.org Artificial Intelligence

Several recent projects demonstrated the promise of end-to-end learned deep visuomotor policies for robot manipulator control. Despite impressive progress, these systems are known to be vulnerable to physical disturbances, such as accidental or adversarial bumps that make them drop the manipulated object. They also tend to be distracted by visual disturbances such as objects moving in the robot's field of view, even if the disturbance does not physically prevent the execution of the task. In this paper we propose a technique for augmenting a deep visuomotor policy trained through demonstrations with task-focused attention. The manipulation task is specified with a natural language text such as "move the red bowl to the left". This allows the attention component to concentrate on the current object that the robot needs to manipulate. We show that even in benign environments, the task-focused attention allows the policy to consistently outperform a variant with no attention mechanism. More importantly, the new policy is significantly more robust: it regularly recovers from severe physical disturbances (such as bumps causing it to drop the object) from which the unmodified policy almost never recovers. In addition, we show that the proposed policy performs correctly in the presence of a wide class of visual disturbances, exhibiting a behavior reminiscent of human selective attention experiments.
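
The abstract does not spell out the architecture, but the core idea admits a short sketch: embed the instruction text, use it as a query over convolutional image features, and pool with the resulting spatial attention map. Everything below (module name, feature shapes, dot-product scoring) is an illustrative assumption, not the paper's actual implementation.

```python
# Minimal sketch of language-conditioned spatial attention, assuming a
# convolutional feature map and a fixed-size sentence embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAttention(nn.Module):
    def __init__(self, feat_dim=64, text_dim=32):
        super().__init__()
        # Project the text embedding into the visual feature space.
        self.text_proj = nn.Linear(text_dim, feat_dim)

    def forward(self, feats, text_emb):
        # feats: (B, C, H, W) conv features; text_emb: (B, text_dim).
        B, C, H, W = feats.shape
        query = self.text_proj(text_emb)                     # (B, C)
        scores = torch.einsum('bchw,bc->bhw', feats, query)  # (B, H, W)
        attn = F.softmax(scores.view(B, -1), dim=1).view(B, 1, H, W)
        # Attention-weighted pooling keeps only task-relevant regions.
        pooled = (feats * attn).sum(dim=(2, 3))              # (B, C)
        return pooled, attn

feats = torch.randn(1, 64, 14, 14)
text_emb = torch.randn(1, 32)   # e.g. an embedding of "move the red bowl"
pooled, attn_map = TaskAttention()(feats, text_emb)
```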


Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration

arXiv.org Artificial Intelligence

We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network using raw images as input and generating robot arm trajectories, with the parameters shared across the tasks. The controller also combines VAE-GAN-based reconstruction with autoregressive multimodal action prediction. Our results demonstrate that it is possible to learn complex manipulation tasks, such as picking up a towel, wiping an object, and depositing the towel at its previous position, entirely from raw images with direct behavior cloning. We show that weight sharing and reconstruction-based regularization substantially improve generalization and robustness, and that training on multiple tasks simultaneously increases the success rate on all tasks.
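
As a rough sketch of the combined objective described above (behavior cloning plus VAE-style reconstruction regularization on a shared encoder), the following shows one plausible loss. The action term here is plain MSE for brevity, whereas the paper uses autoregressive multimodal action prediction; the function name, loss weights, and argument layout are placeholders.

```python
# Hedged sketch of a behavior-cloning loss regularized by image
# reconstruction, under the assumptions stated in the text above.
import torch
import torch.nn.functional as F

def total_loss(pred_actions, true_actions, recon_img, true_img,
               mu, logvar, recon_weight=1.0, kl_weight=1e-3):
    # Behavior-cloning term: match the demonstrated arm trajectories.
    bc = F.mse_loss(pred_actions, true_actions)
    # Reconstruction term regularizes the shared visual encoder.
    recon = F.mse_loss(recon_img, true_img)
    # KL term keeps the image latent close to a unit Gaussian (VAE-style).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return bc + recon_weight * recon + kl_weight * kl
```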


From Virtual Demonstration to Real-World Manipulation Using LSTM and MDN

AAAI Conferences

Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising choice that would allow the non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time consuming, disruptive to the comfort of the user, and presents safety challenges. It would be desirable to perform the demonstrations in a virtual environment. In this paper we describe a solution to the challenging problem of behavior transfer from virtual demonstration to a physical robot. The virtual demonstrations are used to train a deep neural network based controller, which uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to calculate an error signal suitable for the multimodal nature of the demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can be used to successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architectural choice outperforms alternatives such as feedforward networks and mean-squared-error-based training signals, and (3) including imperfect demonstrations in the training set allows the controller to learn how to correct its manipulation mistakes.
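
The MDN error signal mentioned above is, in the standard formulation, the negative log-likelihood of the demonstrated action under a mixture of Gaussians output by the network. A minimal PyTorch sketch with assumed tensor shapes and diagonal covariances:

```python
# Standard MDN negative log-likelihood; shapes are assumptions.
import math
import torch

def mdn_nll(pi_logits, mu, log_sigma, target):
    # pi_logits: (B, K) mixture logits; mu, log_sigma: (B, K, D); target: (B, D)
    log_pi = torch.log_softmax(pi_logits, dim=1)
    t = target.unsqueeze(1)                      # (B, 1, D), broadcast over K
    # Per-component log-density of a diagonal Gaussian.
    sq = ((t - mu) / log_sigma.exp()) ** 2
    comp_log_prob = -0.5 * (sq + 2 * log_sigma
                            + math.log(2 * math.pi)).sum(dim=2)  # (B, K)
    # Log-sum-exp over components gives the mixture log-likelihood.
    return -torch.logsumexp(log_pi + comp_log_prob, dim=1).mean()
```

Unlike a mean-squared-error loss, which averages over the modes of multimodal demonstrations, this objective lets the network keep several distinct trajectory hypotheses, matching contribution (2) above.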


Trajectory Adaptation of Robot Arms for Head-Pose Dependent Assistive Tasks

AAAI Conferences

Assistive robots promise to increase the autonomy of disabled or elderly people by facilitating the performance of Activities of Daily Living (ADLs). Learning from Demonstration (LfD) has emerged as one of the most promising approaches for teaching robots tasks that are difficult to formalize. In LfD, the operator demonstrates the execution of the task one or several times on the given hardware. Unfortunately, many ADLs such as personal grooming, feeding or reading depend on the head pose of the assisted human. Trajectories learned using LfD would become useless or dangerous if applied naively in a situation with a different head pose. In this paper we propose and experimentally validate a method to adapt the trajectories learned using LfD to the current head pose (position and orientation) and movement of the head of the assisted user.
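
The abstract leaves the adaptation mechanism unspecified; one minimal way to realize the idea is to re-express the learned waypoints in the frame of the current head pose, as sketched below. The rigid-transform formulation and function names are assumptions for illustration only.

```python
# Illustrative sketch: move a trajectory demonstrated under a reference
# head pose so that it follows the user's current head pose.
import numpy as np

def adapt_trajectory(waypoints, T_ref_head, T_cur_head):
    # waypoints: (N, 3) points, demonstrated under the reference head pose.
    # T_ref_head, T_cur_head: 4x4 homogeneous transforms of the head
    # expressed in the robot base frame.
    T = T_cur_head @ np.linalg.inv(T_ref_head)   # reference -> current head
    pts = np.hstack([waypoints, np.ones((len(waypoints), 1))])
    return (pts @ T.T)[:, :3]                    # transformed (N, 3) points
```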


A Real-Time Technique for Positioning a Wheelchair-Mounted Robotic Arm for Household Manipulation Tasks

AAAI Conferences

Wheelchair-mounted robotic arms can help people with disabilities perform their activities of daily living (ADL). The autonomy of such a system can range from full manual control (both wheelchair and robotic arm controlled by the human) to fully autonomous (with both the wheelchair and the robotic arm under autonomous control). Many ADLs require the robot to pick up an object from a cluttered environment, such as a glass of water from a table where several other objects are present. In this paper, we concentrate on the task of finding the optimal position of the base of the robotic arm (normally a rigid point on the wheelchair) such that the end effector can easily reach the target (regardless of whether this is done through human or robot control). We introduce the ease-of-reach score (ERS), a metric quantifying the preferences for the positioning of the base. As the brute-force computation of the ERS is computationally expensive, we propose an approach that estimates the ERS through a mixture of Gaussians. The parameters of the component Gaussians are learned offline and depend on the nature of the environment, such as the properties of the obstacles. Simulation results show that the estimated ERS closely matches the actual value and that estimation is fast enough for real-time operation.
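
Concretely, estimating the ERS with a mixture of Gaussians amounts to evaluating a weighted sum of Gaussian densities at each candidate base position and picking the best-scoring placement. The sketch below uses random placeholder parameters where the paper learns them offline; the 2-D planar setup is also an assumption.

```python
# Sketch of scoring candidate base placements with a Gaussian mixture.
import numpy as np
from scipy.stats import multivariate_normal

def ers(positions, weights, means, covs):
    # positions: (N, 2) candidate base placements in the plane.
    score = np.zeros(len(positions))
    for w, m, c in zip(weights, means, covs):
        score += w * multivariate_normal.pdf(positions, mean=m, cov=c)
    return score

# Placeholder mixture parameters; the paper learns these offline from
# properties of the environment such as the obstacles.
weights = [0.6, 0.4]
means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2)] * 2

# Pick the candidate base position with the highest estimated ERS.
cands = np.random.rand(100, 2)
best = cands[np.argmax(ers(cands, weights, means, covs))]
```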