Color segmentation is a challenging subtask in computer vision. Most popular approaches are computationally expensive, involve an extensive off-line training phase, and/or rely on a stationary camera. This paper presents an approach for color learning on-board a legged robot with limited computational and memory resources. A key defining feature of the approach is that it works without any labeled training data. Rather, it trains autonomously from a color-coded model of its environment. The process is fully implemented, completely autonomous, and provides a high degree of segmentation accuracy.
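To make the color-learning idea concrete, the sketch below fits a simple per-channel Gaussian color model and classifies pixels against it. This is an illustration only, not the paper's method: the paper's robot obtains its training pixels autonomously from a color-coded environment model, whereas here the sample lists, the `learn_color_model`/`classify` names, and the 3-sigma acceptance threshold are all assumptions introduced for the example.

```python
import math

def learn_color_model(samples):
    """Fit an independent Gaussian per channel to a list of (r, g, b)
    pixel samples; returns (means, variances)."""
    n = len(samples)
    means = [sum(px[i] for px in samples) / n for i in range(3)]
    # Floor the variance so a channel with identical samples stays usable.
    variances = [max(sum((px[i] - means[i]) ** 2 for px in samples) / n, 1e-6)
                 for i in range(3)]
    return means, variances

def classify(pixel, models, threshold=3.0):
    """Return the label whose model best fits the pixel, or None if the
    pixel is more than `threshold` standard deviations off on any channel."""
    best_label, best_dist = None, float("inf")
    for label, (means, variances) in models.items():
        # Worst-channel Mahalanobis-style distance under the diagonal model.
        d = max(abs(pixel[i] - means[i]) / math.sqrt(variances[i])
                for i in range(3))
        if d < threshold and d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

A pixel near a learned cluster is assigned that color's label; a pixel far from every cluster is left unlabeled, which is the usual fallback for background in color segmentation.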
Tekkotsu is a free, open source software framework for high-level robot programming. We describe enhancements to Tekkotsu's navigation component, the Pilot, to incorporate a particle filter for localization and an RRT-based path planner for obstacle avoidance. This allows us to largely automate the robot's navigation behavior using a combination of odometry and landmark-based localization. Beginning robot programmers need only indicate a destination in Tekkotsu's world map and the Pilot will take the robot there. The software has been tested both in simulation and on Calliope, a new educational robot developed in the Tekkotsu lab in collaboration with RoPro Design, Inc.
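The particle filter combines odometry with landmark measurements in exactly the predict-weight-resample cycle sketched below. This is a generic one-dimensional illustration, not the Pilot's implementation: the function name, the 1-D state, and the noise values are assumptions made for the example.

```python
import math
import random

def particle_filter_step(particles, odom, landmark, meas, noise=0.3):
    """One predict-weight-resample cycle for 1-D localization.
    particles: list of candidate x positions; odom: commanded displacement;
    landmark: known landmark position; meas: measured range to landmark."""
    # Predict: apply the odometry update with additive motion noise.
    moved = [p + odom + random.gauss(0, 0.1) for p in particles]
    # Weight: Gaussian likelihood of the observed range under each particle.
    weights = [math.exp(-((abs(landmark - p) - meas) ** 2) / (2 * noise ** 2))
               for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample in proportion to weight (low-variance samplers also work here).
    return random.choices(moved, weights=weights, k=len(moved))
```

Starting from particles spread uniformly over the map, repeated cycles concentrate the particle cloud around the true pose; the cloud's mean then serves as the localization estimate.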
The Tekkotsu "crew" is a collection of interacting software components designed to relieve a programmer of much of the burden of specifying low-level robot behaviors. Using this abstract approach to robot programming, we can teach beginning roboticists to develop interesting robot applications with relatively little effort.
Search algorithms such as Rapidly-exploring Random Trees (RRTs) are common in robot programming. Including graphical representations of the output of these algorithms in a robotics framework can make the algorithms more accessible to students, and can also help programmers analyze and account for unexpected results. For this project, we used the Tekkotsu open source robot programming framework, available at Tekkotsu.org. We extended Tekkotsu's graphical user interface for displaying vision data and maps to also display the output of an RRT search. We created several demos using two types of searches: one from a navigation path planner, and one from an arm path planner. In some cases the search had no solution, and the graphical output helped to illustrate why. This confirms the utility of the RRT visualization for explaining unexpected search results. We expect that this tool will also contribute to improved student understanding of the search algorithm.
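For readers unfamiliar with the algorithm being visualized, a minimal 2-D RRT can be sketched as follows. This is a generic textbook version, not Tekkotsu's planner; the square sampling domain, step size, and goal bias are assumptions chosen for the example.

```python
import math
import random

def rrt(start, goal, is_free, bounds=(0.0, 10.0), step=0.5,
        goal_tol=0.5, goal_bias=0.1, max_iters=5000):
    """Grow a tree from `start` by repeatedly extending toward random
    samples; return a list of waypoints reaching `goal`, or None."""
    nodes = [start]
    parent = {0: None}          # child index -> parent index
    lo, hi = bounds
    for _ in range(max_iters):
        # Occasionally sample the goal itself to bias growth toward it.
        if random.random() < goal_bias:
            sample = goal
        else:
            sample = (random.uniform(lo, hi), random.uniform(lo, hi))
        # Extend the nearest tree node one fixed step toward the sample.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        new = (nx + (sample[0] - nx) / d * step,
               ny + (sample[1] - ny) / d * step)
        if not is_free(new):
            continue            # extension collides; discard it
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Success: walk parent links back to the root.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

Drawing `nodes` and `parent` edges is precisely what the visualization described above displays; when `rrt` returns None, the drawn tree shows where growth was blocked, which is how the graphical output explains a failed search.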
Calliope is an open source mobile robot designed in the Tekkotsu Lab at Carnegie Mellon University in collaboration with RoPro Design, Inc. The Calliope5SP model features an iRobot Create base, an ASUS netbook, a 5-degree-of-freedom arm with a gripper with two independently controllable fingers, and a Sony PlayStation Eye camera and Robotis AX-S1 IR rangefinder on a pan/tilt mount. We use chess as a test of Calliope's abilities. Since Calliope is a mobile platform, we consider how problems in vision and localization directly impact the performance of manipulation. Calliope's arm is too short to reach across the entire chessboard. The robot must therefore navigate to a location that provides the best position to access the pieces it wants to move. The robot proved capable of performing small-scale manipulation tasks that require careful positioning.