Machine Learning
Case-Based Reasoning
The 1994 Workshop on Case-Based Reasoning (CBR) focused on the evaluation of CBR theories, models, systems, and system components. The CBR community addressed the evaluation of theories and implemented systems, with the consensus that a balance between novel innovations and evaluations could maximize progress.
CHINOOK The World Man-Machine Checkers Champion
Schaeffer, Jonathan, Lake, Robert, Lu, Paul, Bryant, Martin
In 1992, the seemingly unbeatable World Checkers Champion Marion Tinsley defended his title against the computer program CHINOOK. After an intense, tightly contested match, Tinsley fought back from behind to win by four wins to CHINOOK's two, with 33 draws. This match was the first time in history that a human world champion defended his title against a computer. This article reports on the progress of the checkers (8 × 8 draughts) program CHINOOK since 1992. Two years of research and development on the program culminated in a rematch with Tinsley in August 1994. In this match, after six games (all draws), Tinsley withdrew, citing health concerns, and relinquished the world championship title to CHINOOK. CHINOOK has since defended its title in two subsequent matches. It is the first time in history that a computer has won a human world championship.
The 1995 Fall Symposia Series
Cohn, David, Lewis, David, Aha, David W., Burke, Robin, Srihari, Rohini K., Horswill, Ian, Buvac, Sasa, Siegel, Eric V., Fehling, Michael
The Association for the Advancement of Artificial Intelligence (AAAI) held its 1995 Fall Symposia Series on 10 to 12 November in Cambridge, Massachusetts. This article contains summaries of the eight symposia that were conducted: (1) Active Learning; (2) Adaptation of Knowledge for Reuse; (3) AI Applications in Knowledge Navigation and Retrieval; (4) Computational Models for Integrating Language and Vision; (5) Embodied Language and Action Symposium; (6) Formalizing Context; (7) Genetic Programming; and (8) Rational Agency: Concepts, Theories, Models, and Applications.
Woody Bledsoe: His Life and Legacy
Ballantyne, Michael, Boyer, Robert S., Hines, Larry
Woody Bledsoe died on 4 October 1995 of ALS. Woody was one of the founders of AI, making early contributions in pattern recognition and automated reasoning, and he continued to make significant contributions to AI throughout his long career. His legacy consists not only of his scientific work but also of several generations of scientists who learned from Woody the joy of scientific research and the way to go about it. Woody's enthusiasm and his perpetual sense of optimism offered those who knew him the hope and comfort that truly good and great men do exist.

Woody was the fourth child of a family whose father had moved to Oklahoma to try his luck at farming, and he grew up on a little farm near Maysville, Oklahoma. Of his early schooling he recalled, "We spent a lot of time reading by ourselves, because most of the time the other grades were having their classes. But we DID learn, and had some pretty good teachers" (Bledsoe 1976). He recalls spending "hours just roaming around, sometimes working mathematics." When Woody was 12, his father died. It was a devastating blow, both emotionally and financially. As Woody recalled, "We were poor before, but after papa died in January 1934, things got worse" (Bledsoe 1976). He and the rest of his brothers and sisters worked dreary 10-hour days to make ends meet. He found work in north Texas driving a tractor all night; after a month, he hopped a freight and took a job as a dishwasher, working 12-hour days, 7 days a week. In April, the restaurant owner forced him back into working 12-hour days, which was too much; he left the university without saying goodbye and joined the United States Army.

Woody went to Officer's Candidate School (OCS) at Fort Belvoir in 1942 and was promoted to second lieutenant. While at OCS, he had experiences that had a profound effect on him; one, he said, "left a lasting impression on me," teaching him to get on with the work and to finish the job. He was later decorated for his heroic activities in arranging the transportation of troops across the Rhine in March 1945. All the Rhine bridges except the one at Remagen had been destroyed by the retreating German army, and Patton's Third Army decided to cross the Rhine by boats near Frankfurt rather than suffer the delay of waiting for bridge construction. The Army Corps of Engineers therefore hauled naval landing craft (designed for beach landings) by truck across Europe to ferry the troops across the Rhine. Bledsoe, by then an Army captain, recalls that there was only light enemy fire during the crossing; his first "research" was experimenting with the simple idea of backing the trucks into the water. A son, Greg, was born in March 1947, and two more children, Pam and Lance, followed.
Improved Use of Continuous Attributes in C4.5
A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests. Empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies. Results also confirm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits.
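The flavor of such an MDL-inspired penalty can be sketched as follows. The exact form used in the revised C4.5 is not reproduced here; this sketch assumes one common formulation, in which the cost of encoding the chosen threshold, log2(number of candidate thresholds)/|D| bits, is charged against the information gain of a continuous test (function names are ours):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def penalized_gain(values, labels, threshold):
    """Information gain of the binary split `value <= threshold`,
    minus an MDL-style penalty of log2(#candidate thresholds)/|D|
    for having chosen the threshold from the data itself.
    A hedged sketch, not the exact formula in C4.5."""
    n = len(values)
    left = [y for v, y in zip(values, labels) if v <= threshold]
    right = [y for v, y in zip(values, labels) if v > threshold]
    if not left or not right:
        return float("-inf")  # degenerate split: no information
    gain = (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))
    candidates = len(set(values)) - 1  # distinct candidate thresholds
    return gain - math.log2(max(candidates, 1)) / n
```

Because the penalty shrinks as |D| grows, tests on continuous attributes are suppressed mainly in small subsets of the data, which is where spurious thresholds tend to be chosen.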
Mean Field Theory for Sigmoid Belief Networks
Saul, L. K., Jaakkola, T., Jordan, M. I.
We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition---the classification of handwritten digits.
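The lower bound mentioned above has the standard variational form; in generic notation (symbols ours, not necessarily the paper's), for evidence V, hidden units H, and any tractable approximating distribution Q:

```latex
\ln P(V) = \ln \sum_{H} P(H, V)
\;\ge\; \sum_{H} Q(H) \ln \frac{P(H, V)}{Q(H)}
= \ln P(V) - \mathrm{KL}\bigl(Q(H) \,\|\, P(H \mid V)\bigr)
```

The inequality follows from Jensen's inequality; the bound is tight when Q equals the true posterior, and choosing a fully factorized (mean field) Q over the sigmoid units makes the right-hand side tractable to evaluate and optimize.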
Logarithmic-Time Updates and Queries in Probabilistic Networks
Delcher, A. L., Grove, A. J., Kasif, S., Pearl, J.
Traditional databases commonly support efficient query and update procedures that operate in time which is sublinear in the size of the database. Our goal in this paper is to take a first step toward dynamic reasoning in probabilistic databases with comparable efficiency. We propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks. In the conventional algorithm, new evidence is absorbed in O(1) time and queries are processed in time O(N), where N is the size of the network. We propose an algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N) at the expense of O(log N) time per evidence absorption. The usefulness of sublinear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases. We briefly discuss a potential application of dynamic probabilistic reasoning in computational biology.
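The paper's data structure is specific to singly connected Bayesian networks, but the general shape of the trade-off (paying O(log N) per update to bring queries down from O(N) to O(log N)) can be illustrated with a much simpler hierarchical structure over prefix sums. This sketch is an analogy only, not the authors' algorithm:

```python
class FenwickTree:
    """Binary indexed tree: O(log N) point updates and prefix-sum queries.
    Used here purely as an analogy for the update/query trade-off discussed
    above -- it maintains sums, not Bayesian-network marginals."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def add(self, i, delta):
        """Absorb a new observation at position i (1-indexed) in O(log N)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # climb to the next covering node

    def prefix_sum(self, i):
        """Answer a query over positions 1..i in O(log N)."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # drop to the previous disjoint range
        return s
```

A flat array would instead give O(1) updates but O(N) queries; the preprocessing here is the balanced hierarchy of partial sums, which balances both operations at O(log N), mirroring the evidence-absorption versus query trade-off described in the abstract.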
Effects of Noise on Convergence and Generalization in Recurrent Networks
Jim, Kam, Horne, Bill G., Giles, C. Lee
We introduce and study methods of inserting synaptic noise into dynamically-driven recurrent neural networks and show that applying a controlled amount of noise during training may improve convergence and generalization. In addition, we analyze the effects of each noise parameter (additive vs. multiplicative, cumulative vs. non-cumulative, per time step vs. per string) and predict that best overall performance can be achieved by injecting additive noise at each time step. Extensive simulations on learning the dual parity grammar from temporal strings substantiate these predictions.
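The per-time-step additive setting singled out above can be sketched as follows: a vanilla tanh recurrent net whose hidden pre-activations receive Gaussian noise at every step during training, with noise disabled at test time. This is a minimal sketch under our own assumptions, not the authors' exact architecture or noise schedule:

```python
import numpy as np

def rnn_forward(x_seq, W_in, W_rec, sigma=0.0, rng=None):
    """Vanilla tanh RNN forward pass over a sequence of input vectors.
    When sigma > 0, additive Gaussian noise is injected into the hidden
    pre-activations at every time step (the additive, per-time-step
    setting described above); set sigma = 0 for noiseless evaluation."""
    rng = rng or np.random.default_rng(0)
    h = np.zeros(W_rec.shape[0])
    for x in x_seq:
        pre = W_in @ x + W_rec @ h
        if sigma > 0:  # training-time noise only
            pre = pre + rng.normal(0.0, sigma, size=pre.shape)
        h = np.tanh(pre)
    return h
```

Injecting the noise before the nonlinearity perturbs the network's state trajectory at every step, which is what discourages convergence to brittle, knife-edge dynamics.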
Learning Many Related Tasks at the Same Time with Backpropagation
Hinton [6] proposed that generalization in artificial neural nets should improve if nets learn to represent the domain's underlying regularities. Abu-Mostafa's hints work [1] shows that the outputs of a backprop net can be used as inputs through which domain-specific information can be given to the net. We extend these ideas by showing that a backprop net learning many related tasks at the same time can use these tasks as inductive bias for one another and thus learn better. We identify five mechanisms by which multitask backprop improves generalization and give empirical evidence that multitask backprop generalizes better in real domains.
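The core mechanism, several tasks backpropagating through one shared hidden layer, can be sketched in a few lines. This assumes a squared loss, sigmoid hidden units, and one linear output head per task; it is an illustrative sketch, not the paper's setup:

```python
import numpy as np

def multitask_step(X, Ys, W_shared, heads, lr=0.1):
    """One gradient step for a net with a shared hidden layer and one
    linear output head per task. Gradients from every task accumulate
    into W_shared, so the tasks act as inductive bias for one another.
    A minimal sketch (squared loss, sigmoid hidden units)."""
    H = 1.0 / (1.0 + np.exp(-X @ W_shared))  # shared hidden activations
    grad_shared = np.zeros_like(W_shared)
    for t, (Y, W_out) in enumerate(zip(Ys, heads)):
        pred = H @ W_out
        err = pred - Y                       # dLoss/dpred for squared loss
        heads[t] = W_out - lr * H.T @ err    # per-task head update
        # backprop this task's error through its head into the shared layer
        grad_shared += X.T @ ((err @ W_out.T) * H * (1 - H))
    return W_shared - lr * grad_shared, heads
```

Because every task contributes to `grad_shared`, the shared representation is pulled toward features useful across tasks rather than features that merely fit any one of them.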
A Rapid Graph-based Method for Arbitrary Transformation-Invariant Pattern Classification
Sperduti, Alessandro, Stork, David G.
We present a graph-based method for rapid, accurate search through prototypes for transformation-invariant pattern classification. Our method has in theory the same recognition accuracy as other recent methods based on "tangent distance" [Simard et al., 1994], since it uses the same categorization rule. Nevertheless, ours is significantly faster during classification because far fewer tangent distances need be computed. Crucial to the success of our system are 1) a novel graph architecture in which transformation constraints and geometric relationships among prototypes are encoded during learning, and 2) an improved graph search criterion, used during classification. These architectural insights are applicable to a wide range of problem domains.
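The tangent distance underlying the shared categorization rule can be sketched generically: given matrices whose columns span the transformation (tangent) subspaces at two patterns, the two-sided distance is the residual of a small least-squares problem. This is a hedged sketch of the standard construction from Simard et al., not of the graph search itself:

```python
import numpy as np

def tangent_distance(x, y, Tx, Ty):
    """Two-sided tangent distance between patterns x and y. Tx and Ty are
    matrices whose columns span the tangent (transformation) subspaces at
    each pattern. Minimizes ||(x + Tx a) - (y + Ty b)|| over coefficient
    vectors a, b via least squares. A generic sketch of the distance used
    by the categorization rule mentioned above."""
    M = np.hstack([Tx, -Ty])                       # search both tangent planes
    c, *_ = np.linalg.lstsq(M, y - x, rcond=None)  # best joint coefficients
    residual = (x + M @ c) - y
    return np.linalg.norm(residual)
```

Each evaluation costs a least-squares solve, which is precisely why a search strategy that prunes most prototype comparisons, as the graph architecture above does, pays off at classification time.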