Robot-Building Lab and Contest at the 1993 National AI Conference

AI Magazine

A robot-building lab and contest was held at the Eleventh National Conference on Artificial Intelligence. Teams of three worked day and night for 72 hours to build tabletop autonomous robots from LEGO bricks, a small microcontroller board, and sensors. The robots then competed head to head in two events. I was one of the developers of JACK, the second-place finisher in the Coffeepot event. This article contains my personal recollections of the lab and contest.

Reciprocal Collision Avoidance for Quadrotor Helicopters Using LQR-Obstacles

AAAI Conferences

In this paper we present a formal approach to reciprocal collision avoidance for multiple mobile robots sharing a common 2-D or 3-D workspace, whose dynamics are subject to linear differential constraints. Our approach defines a protocol by which robots select their control inputs independently (i.e., without coordination with other robots) while guaranteeing collision-free motion for all robots, assuming the robots can perfectly observe each other's state. To this end, we use the concept of LQR-Obstacles, which define sets of forbidden control inputs that lead a robot to collision with obstacles, and extend it to reciprocal collision avoidance among multiple robots. We implemented and tested our approach in 3-D simulation environments for reciprocal collision avoidance of quadrotor helicopters, which have complex dynamics in 16-D state spaces. Our results suggest that our approach avoids collisions among over a hundred quadrotors in tight workspaces at real-time computation rates.
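The forbidden-control-set idea can be sketched in a much simpler setting than the paper's: the toy below uses 2-D double-integrator dynamics, a sampled grid of candidate accelerations, and a mirrored-control stand-in for the reciprocity rule. All names, constants, and the collision test are illustrative assumptions, not the paper's construction.

```python
import numpy as np

DT, HORIZON, RADIUS = 0.1, 20, 0.5  # time step, horizon steps, robot radius

def simulate(pos, vel, accel, steps):
    """Forward-simulate double-integrator dynamics under a constant accel."""
    p, v, traj = np.array(pos, float), np.array(vel, float), []
    for _ in range(steps):
        v = v + accel * DT
        p = p + v * DT
        traj.append(p)
    return traj

def is_forbidden(accel, own, other):
    """A control is forbidden if it leads to collision over the horizon.
    Reciprocity stand-in: the other robot mirrors the maneuver."""
    own_traj = simulate(own["p"], own["v"], accel, HORIZON)
    oth_traj = simulate(other["p"], other["v"], -accel, HORIZON)
    return any(np.linalg.norm(a - b) < 2 * RADIUS
               for a, b in zip(own_traj, oth_traj))

def select_control(preferred, own, other):
    """Pick the sampled control closest to the preferred input that lies
    outside the forbidden set (assumes at least one is admissible)."""
    candidates = [np.array([ax, ay]) for ax in np.linspace(-2, 2, 9)
                                     for ay in np.linspace(-2, 2, 9)]
    admissible = [u for u in candidates if not is_forbidden(u, own, other)]
    return min(admissible, key=lambda u: np.linalg.norm(u - preferred))

# Two robots on a head-on course along the x-axis.
a = {"p": [0.0, 0.0], "v": [1.0, 0.0]}
b = {"p": [3.0, 0.0], "v": [-1.0, 0.0]}
u = select_control(np.zeros(2), a, b)
print(u)  # closest admissible acceleration to the preferred zero input
```

Keeping the preferred input (zero acceleration) is forbidden here, since both robots would hold their collision course; the selector instead returns the nearest admissible control.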

A Deep Learning Approach to Grasping the Invisible

Artificial Intelligence

Yang Yang, Hengyue Liang, and Changhyun Choi

Abstract: We introduce a new problem named "grasping the invisible", where a robot is tasked to grasp an initially invisible target object via a sequence of nonprehensile (e.g., pushing) and prehensile (e.g., grasping) actions. In this problem, nonprehensile actions are needed to search for the target and rearrange cluttered objects around it. We propose to solve the problem by formulating a deep reinforcement learning approach in an actor-critic format. A critic that maps both the visual observations and the target information to expected rewards of actions is learned via deep Q-learning for instance pushing and grasping. Two actors are proposed to take in the critic predictions and the domain knowledge for two subtasks: a Bayesian-based actor accounting for past experience performs exploratory pushing to search for the target; once the target is found, a classifier-based actor coordinates the target-oriented pushing and grasping to grasp the target in clutter. The model is entirely self-supervised through the robot-environment interactions. Our system achieves 93% and 87% task success rates on the two subtasks in simulation and an 85% task success rate in real robot experiments, which outperforms several baselines by large margins. Supplementary material is available at:

Index Terms: Dexterous Manipulation, Deep Learning in Robotics and Automation, Computer Vision for Automation

I. INTRODUCTION

Imagine what happens when a young kid is looking for a specific toy block buried in clutter, as shown in Figure 1a. He or she may first push down the pile of blocks and luckily spot the target block in clutter, then push around it to make space for the fingers (we refer to this type of motion as "singulation" [1]), and finally grasp it. We have wondered whether an intelligent agent can perform such a task.
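As a caricature of the critic learned via deep Q-learning, the toy below replaces images with two hand-coded states (target buried vs. exposed) and learns, with tabular Q-learning, that pushing pays off while the target is hidden and grasping pays off once it is visible. The environment, states, and reward values are all invented for illustration; the paper's critic is a deep network over visual observations.

```python
import random
import numpy as np

HIDDEN, VISIBLE, DONE = 0, 1, 2   # target buried, target exposed, grasped
PUSH, GRASP = 0, 1

def step(state, action):
    """Invented dynamics: pushing usually exposes a buried target; grasping
    succeeds only once the target is visible, and wastes effort otherwise."""
    if state == HIDDEN:
        if action == PUSH:
            return (VISIBLE, 0.0) if random.random() < 0.8 else (HIDDEN, 0.0)
        return HIDDEN, -0.1          # failed grasp attempt on a buried target
    if action == GRASP:
        return DONE, 1.0             # successful grasp
    return VISIBLE, 0.0              # an extra push once visible gains nothing

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    q = np.zeros((2, 2))             # Q[state, action]
    for _ in range(episodes):
        s = HIDDEN
        for _ in range(20):
            a = random.randrange(2) if random.random() < eps else int(np.argmax(q[s]))
            s2, r = step(s, a)
            target = r if s2 == DONE else r + gamma * np.max(q[s2])
            q[s, a] += alpha * (target - q[s, a])
            if s2 == DONE:
                break
            s = s2
    return q

random.seed(0)
q = train()
print(int(np.argmax(q[HIDDEN])), int(np.argmax(q[VISIBLE])))
```

After training, the greedy policy with respect to the learned values plays the push-to-search, then grasp structure that the two actors coordinate in the full system.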

Skill Transfer in Deep Reinforcement Learning under Morphological Heterogeneity

Machine Learning

Transfer learning methods for reinforcement learning (RL) domains facilitate the acquisition of new skills using previously acquired knowledge. The vast majority of existing approaches assume that the agents have the same design, e.g., the same shape and action spaces. In this paper we address the problem of transferring previously acquired skills amongst morphologically different agents (MDAs). For instance, assuming that a bipedal agent has been trained to move forward, could this skill be transferred to a one-leg hopper so as to make its training process for the same task more sample-efficient? We frame this problem as one of subspace learning, whereby we aim to infer latent factors representing the control mechanism that is common between MDAs. We propose a novel paired variational encoder-decoder model, PVED, that disentangles the control of MDAs into shared and agent-specific factors. The shared factors are then leveraged for skill transfer using RL. Theoretically, we derive a theorem indicating how the performance of PVED depends on the shared factors and agent morphologies. Experimentally, PVED has been extensively validated on four MuJoCo environments. We demonstrate its performance compared to a state-of-the-art approach and several ablation cases, visualize and interpret the hidden factors, and identify avenues for future improvements.
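The shared/agent-specific latent split at the heart of PVED can be illustrated structurally: random linear maps stand in for the trained variational encoder-decoder, and the state dimensions, factor sizes, and function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_BIPED, STATE_HOPPER = 17, 11   # example state dims for two morphologies
SHARED, PRIVATE = 4, 3               # latent factor sizes (illustrative)

# Per-agent encoders map a state to shared + agent-specific factors;
# per-agent decoders map latents back into that agent's state space.
enc_biped = rng.normal(size=(SHARED + PRIVATE, STATE_BIPED))
enc_hopper = rng.normal(size=(SHARED + PRIVATE, STATE_HOPPER))
dec_hopper = rng.normal(size=(STATE_HOPPER, SHARED + PRIVATE))

def encode(enc, state):
    z = enc @ state
    return z[:SHARED], z[SHARED:]    # (shared factors, private factors)

def transfer(biped_state, hopper_state):
    """Skill transfer in this scheme: reuse the shared factors inferred
    from the trained biped, keep the hopper's own private factors."""
    z_shared, _ = encode(enc_biped, biped_state)
    _, z_private = encode(enc_hopper, hopper_state)
    return dec_hopper @ np.concatenate([z_shared, z_private])

out = transfer(rng.normal(size=STATE_BIPED), rng.normal(size=STATE_HOPPER))
print(out.shape)  # (11,) -- decoded into the hopper's state space
```

The point of the sketch is the wiring: only the shared block of the latent vector crosses the morphology boundary, which is what makes the transferred representation agnostic to agent-specific kinematics.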

MinDART: A Multi-Robot Search & Retrieval System

AAAI Conferences

We are interested in studying how environmental and control factors affect the performance of a homogeneous multi-robot team performing a search-and-retrieval task. We have constructed a group of inexpensive robots, called the Minnesota Distributed Autonomous Robot Team (MinDART), which use simple sensors and actuators to complete their tasks. We have upgraded these robots with the CMUcam, an inexpensive camera system that runs a color segmentation algorithm. The camera allows the robots to localize themselves as well as visually recognize other robots. We analyze how the team's performance is affected by target distribution (uniform or clumped), team size, and whether search with explicit localization is more beneficial than random search.
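The localized-versus-random search question can be illustrated with a toy single-robot simulation: expected time to reach a target on an N x N grid under a random walk versus a systematic sweep. This is a hypothetical simplification for intuition only, not the MinDART experiment or its results.

```python
import random

N = 10  # illustrative grid size

def random_walk_steps(target, rng, limit=100000):
    """Steps for a random walk from the corner to reach the target cell."""
    x = y = 0
    for t in range(limit):
        if (x, y) == target:
            return t
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), N - 1)
        y = min(max(y + dy, 0), N - 1)
    return limit

def sweep_steps(target):
    """Steps for a boustrophedon sweep, which visits each cell exactly once."""
    tx, ty = target
    return ty * N + (tx if ty % 2 == 0 else N - 1 - tx)

rng = random.Random(1)
targets = [(rng.randrange(N), rng.randrange(N)) for _ in range(200)]
avg_walk = sum(random_walk_steps(t, rng) for t in targets) / len(targets)
avg_sweep = sum(sweep_steps(t) for t in targets) / len(targets)
print(round(avg_walk), round(avg_sweep))
```

Because the sweep never revisits a cell, its average discovery time is bounded by the grid size, while the random walk revisits cells constantly; localization is what makes the systematic strategy possible in the first place.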