Introducing Uninformed Search with Tangible Board Games

AAAI Conferences

Researchers have established the value of hands-on learning with tangible artifacts in mathematics and related fields. Inspired by this work, an assignment was developed for an undergraduate/graduate Artificial Intelligence course to introduce students to the formal representation of search. Students analyzed a familiar board game — e.g., Rush Hour or peg solitaire — using the standard approach to modeling an uninformed search process. The assignment was well-received by students, and analysis of their work yielded unexpected insights into the challenges students face in understanding how the formal problem model interacts with search algorithms. This paper introduces the theoretical motivations for the work, analyzes student work products, and makes recommendations for future extensions.
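The formal model the students were asked to produce (initial state, successor function generating legal moves, and goal test) can be sketched in Python. The one-dimensional peg-solitaire variant below is a hypothetical simplification for illustration, not the assignment's actual game:

```python
from collections import deque

def successors(state):
    """Legal moves in 1-D peg solitaire: a peg jumps over an adjacent
    peg into an empty hole two positions away; the jumped peg is removed."""
    n = len(state)
    for i in range(n):
        if not state[i]:
            continue
        for d in (-2, 2):
            j, over = i + d, i + d // 2
            if 0 <= j < n and state[over] and not state[j]:
                nxt = list(state)
                nxt[i], nxt[over], nxt[j] = 0, 0, 1
                yield (i, j), tuple(nxt)

def is_goal(state):
    return sum(state) == 1          # exactly one peg left

def bfs(start):
    """Uninformed breadth-first search over the induced state graph."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path             # list of (from, to) jumps
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return None                     # goal unreachable
```

The point of the exercise is that the search algorithm never changes; only the problem model (state encoding, successor function, goal test) does, which is exactly the interaction the assignment probes.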


Representation Search through Generate and Test

AAAI Conferences

Learning representations from data is one of the fundamental problems of artificial intelligence and machine learning. Many different approaches exist for learning representations, but what constitutes a good representation is not yet well understood. In this work, we view the problem of representation learning as one of learning features (e.g., hidden units of neural networks) such that performance of the underlying base system continually improves. We study an important case where learning is done fully online (i.e., on an example-by-example basis) from an unending stream of data. In the presence of an unending stream of data, the computational cost of the learning element should not grow with time and cannot be much more than that of the performance element. Few methods can be used effectively in this case. We show that a search approach to representation learning can naturally fit with this setting. In this approach, the space of representations is searched by generating different features and then testing them for utility. We develop new representation-search methods and show that the generate-and-test approach can be utilized in a simple and effective way for learning representations. Our methods are fully online and add only a small fraction to the overall computation. They constitute an important step toward effective and inexpensive solutions to representation learning problems.
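A minimal sketch of the generate-and-test idea, assuming a linear base learner (LMS) over random nonlinear features; the utility measure (output-weight magnitude) and the replacement schedule here are illustrative choices, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_FEAT = 4, 20                    # input size, number of features
V = rng.normal(0, 1, (N_FEAT, N_IN))    # feature (input) weights
w = np.zeros(N_FEAT)                    # output weights of the base learner
ALPHA, REPLACE_EVERY, N_REPLACE = 0.01, 100, 2

def features(x):
    # simple fixed-form nonlinear features
    return np.tanh(V @ x)

def step(x, y, t):
    """One fully online step: update the base learner, and periodically
    replace the lowest-utility features (the tester) with freshly
    generated random ones (the generator)."""
    global V, w
    phi = features(x)
    err = y - w @ phi
    w += ALPHA * err * phi                  # base system: LMS update
    if (t + 1) % REPLACE_EVERY == 0:
        # utility proxy: magnitude of a feature's output weight
        worst = np.argsort(np.abs(w))[:N_REPLACE]
        V[worst] = rng.normal(0, 1, (N_REPLACE, N_IN))  # generate
        w[worst] = 0.0                                  # reset replaced features
    return err
```

Note that the per-step cost is constant in time, matching the abstract's requirement that the learning element not grow more expensive than the performance element.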


Position Paper: Representation Search through Generate and Test

AAAI Conferences

Learning representations from data is one of the fundamental problems of artificial intelligence and machine learning. Many different approaches exist for learning representations, but what constitutes a good representation is not yet well understood. In this work, we view the problem of representation learning as one of learning features (e.g., hidden units of neural networks) such that performance of the underlying base system continually improves. We study an important case where learning is done fully online (i.e., on an example-by-example basis) from an unending stream of data; in this setting, the computational cost of the learning element should not grow with time and cannot be much more than that of the performance element. Few methods can be used effectively in this case. We show that a search approach to representation learning can naturally fit with this setting. In this approach, the space of representations is searched by generating different features and then testing them for utility. We develop new representation-search methods and show that the generate-and-test approach can be utilized in a simple and effective way for continually improving representations. Our methods are fully online and add only a small fraction to the overall computation. We believe online representation search constitutes an important step toward effective and inexpensive solutions to representation learning problems.


Anne v.d.L. Gardner

AI Magazine

The object is to bring the situation, or problem state, forward from its initial configuration to one satisfying a goal condition. For example, an initial situation might be the placement of chessmen on the board at the beginning of the game; the desired goal, any board configuration that is a checkmate; and the operators, rules for the legal moves in chess. The difference between the current problem state and the goal is then used to index the (forward) operator most relevant to reducing that difference. If this especially relevant operator cannot be immediately applied to the present problem state, subgoals are set up to change the problem state so that the relevant operator can be applied. After these subgoals are solved, the relevant operator is applied, and the resulting, modified situation becomes a new starting point from which to solve for the original goal.
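This means-ends analysis loop (compute the difference, index a relevant operator, subgoal on its unmet preconditions, apply, recurse) can be sketched on a toy STRIPS-style domain; the operators and conditions below are invented for illustration:

```python
# Each operator maps to (preconditions, add list, delete list).
OPS = {
    "work":          ({"at-home"}, {"have-money"}, set()),
    "drive-to-shop": ({"have-money"}, {"at-shop"}, {"at-home"}),
    "buy-drill":     ({"at-shop", "have-money"}, {"have-drill"}, {"have-money"}),
}

def achieve(state, goal, depth=5):
    """Return a plan (list of operator names) making every goal condition
    true, or None if none is found within the depth bound."""
    diff = goal - state                 # the "difference" to be reduced
    if not diff:
        return []
    if depth == 0:
        return None
    for cond in sorted(diff):
        # index an operator relevant to reducing this difference
        for name, (pre, add, dele) in OPS.items():
            if cond not in add:
                continue
            # subgoal: satisfy the operator's preconditions first
            subplan = achieve(state, pre, depth - 1)
            if subplan is None:
                continue
            s = set(state)
            for op in subplan:          # simulate the subplan
                p, a, d = OPS[op]
                s = (s - d) | a
            if not pre <= s:
                continue
            s = (s - dele) | add        # apply the relevant operator
            rest = achieve(s, goal, depth - 1)  # resolve for the original goal
            if rest is not None:
                return subplan + [name] + rest
    return None
```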


Using Prior Knowledge with Adaptive Probing

AAAI Conferences

Wheeler Ruml, Division of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge, MA 02138, ruml@eecs.harvard.edu

When searching a tree to find the best leaf, complete search methods such as depth-first search and depth-bounded discrepancy search use a fixed deterministic order that may or may not be appropriate for the tree at hand. Adaptive probing is a recently proposed stochastic method that attempts to adjust its sampling online to focus on areas of the tree that seem to contain good solutions. While effective on a variety of trees, adaptive probing wastes time learning basic features of the problem that are built into other algorithms, such as the fact that the heuristic is often helpful. In this paper, we investigate two simple methods for adding such prior knowledge to adaptive probing. The second uses a heuristically biased policy at the start of the search, gradually deferring to learned information in later iterations.
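One way to read the second method (a heuristic bias that decays as learned estimates take over) can be sketched on a toy tree where each discrepancy adds cost, so the heuristic "prefer the left child" is usually but not always right; the decay schedule and per-depth cost estimates are assumptions for illustration, not the paper's algorithm:

```python
import random

random.seed(0)
DEPTH, PROBES = 10, 200

def leaf_cost(path):
    """Toy tree: each right move (discrepancy) adds cost, plus noise."""
    return sum(path) + random.gauss(0, 0.5)

# learned per-depth cost estimates for taking child 0 vs child 1
est = [[0.0, 0.0] for _ in range(DEPTH)]
counts = [[0, 0] for _ in range(DEPTH)]

best = float("inf")
for t in range(PROBES):
    bias = max(0.0, 1.0 - t / 50)       # heuristic bias decays over iterations
    path = []
    for d in range(DEPTH):
        if random.random() < bias:
            c = 0                        # prior knowledge: heuristic child
        else:
            c = 0 if est[d][0] <= est[d][1] else 1   # learned preference
        path.append(c)
    cost = leaf_cost(path)
    best = min(best, cost)
    for d, c in enumerate(path):         # update running averages per depth
        counts[d][c] += 1
        est[d][c] += (cost - est[d][c]) / counts[d][c]
```

Early probes follow the heuristic almost exclusively, so the search does not have to relearn that the heuristic is helpful; later probes rely on the per-depth estimates accumulated online.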