Color segmentation is a challenging subtask in computer vision. Most popular approaches are computationally expensive, involve an extensive off-line training phase, and/or rely on a stationary camera. This paper presents an approach for color learning on board a legged robot with limited computational and memory resources. A key defining feature of the approach is that it works without any labeled training data. Rather, it trains autonomously from a color-coded model of its environment. The process is fully implemented, completely autonomous, and provides a high degree of segmentation accuracy.
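As an illustration of the general idea (a hypothetical simplification, not the paper's actual algorithm), a robot with a color-coded model of its environment can sample pixels at locations where the model predicts a known color, learn a mean value per color class, and then segment new pixels by nearest learned mean:

```python
# Hedged sketch of color learning without labeled data: the map tells the
# robot which color each sampled region should be, so per-color statistics
# can be gathered autonomously. All names here are illustrative.

def learn_color_means(samples):
    """samples: {label: [(r, g, b), ...]} gathered at map-predicted regions."""
    means = {}
    for label, pixels in samples.items():
        n = len(pixels)
        means[label] = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    return means

def segment(pixel, means):
    """Assign a pixel to the color label with the nearest learned mean."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda label: dist2(pixel, means[label]))

# Usage: learn from self-gathered samples, then classify a new pixel.
samples = {
    "orange": [(250, 120, 20), (240, 130, 30)],
    "green":  [(30, 200, 40), (40, 190, 60)],
}
means = learn_color_means(samples)
label = segment((245, 125, 25), means)  # -> "orange"
```

A real implementation would model full color distributions (e.g. per-channel variances) rather than bare means, but the autonomy comes from the same source: the environment model supplies the labels.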
Calliope is an open-source mobile robot designed in the Tekkotsu Lab at Carnegie Mellon University in collaboration with RoPro Design, Inc. The Calliope5SP model features an iRobot Create base, an ASUS netbook, a 5-degree-of-freedom arm with a gripper with two independently controllable fingers, and a Sony PlayStation Eye camera and Robotis AX-S1 IR rangefinder on a pan/tilt mount. We use chess as a test of Calliope's abilities. Since Calliope is a mobile platform, we consider how problems in vision and localization directly impact the performance of manipulation. Calliope's arm is too short to reach across the entire chessboard. The robot must therefore navigate to the location that best positions it to reach the pieces it wants to move. The robot proved capable of performing small-scale manipulation tasks that require careful positioning.
Ferguson, Michael (University at Albany, State University of New York) | Gero, Kim (University at Albany, State University of New York) | Salles, Joao (University at Albany, State University of New York) | Weis, James (University at Albany, State University of New York)
This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article generates saliency maps that capture hierarchical and spatial features of the chessboard in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6,600 saliency maps associated with the corresponding chess piece configurations. This corpus is completed with synthetically generated data from actual games gathered from an online chess platform. Experiments conducted using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features pretrained on natural images were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture generates meaningful saliency maps on unseen chess configurations with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
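The multiscale idea behind skip-layer decoding can be sketched without any deep-learning machinery (this is an analogy, not the paper's network): a fine-grained saliency estimate is pooled to a coarse scale, upsampled back, and blended with the fine map, just as a decoder merges features arriving from encoder layers at different resolutions:

```python
# Illustrative multiscale blending on a toy "board" saliency grid.
# avg_pool / upsample / blend are hand-rolled stand-ins for the pooling,
# upsampling, and skip-connection merging found in autoencoder decoders.

def avg_pool(grid, k):
    """Average-pool an n x n grid into (n/k) x (n/k) blocks."""
    n = len(grid)
    return [[sum(grid[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k)) / (k * k)
             for j in range(n // k)]
            for i in range(n // k)]

def upsample(grid, k):
    """Nearest-neighbor upsample each cell into a k x k block."""
    m = len(grid)
    return [[grid[i // k][j // k] for j in range(m * k)] for i in range(m * k)]

def blend(fine, coarse, w=0.5):
    """Skip-style merge: weighted sum of fine and coarse estimates."""
    return [[w * f + (1 - w) * c for f, c in zip(rf, rc)]
            for rf, rc in zip(fine, coarse)]

fine = [[1, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 1]]
coarse = upsample(avg_pool(fine, 2), 2)
sal = blend(fine, coarse)  # sal[0][0] == 0.625, sal[0][1] == 0.125
```

The blended map keeps the sharp fine-scale peaks while the coarse component spreads saliency over the surrounding block, mirroring how relations between pieces at different spatial extents can be represented at different decoder scales.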