
 Cohn, David A.


Robot Learning: Exploration and Continuous Domains

Neural Information Processing Systems

David A. Cohn, MIT Dept. of Brain and Cognitive Sciences, Cambridge, MA 02139

The goal of this workshop was to discuss two major issues: efficient exploration of a learner's state space, and learning in continuous domains. The common themes that emerged in presentations and in discussion were the importance of choosing one's domain assumptions carefully, mixing controllers/strategies, avoiding catastrophic failure, new approaches to the difficulties of reinforcement learning, and the importance of task transfer. Moore suggested that neither "fewer assumptions are better" nor "more assumptions are better" is a tenable position, and that we should strive to find and use standard sets of assumptions; with no such commonality, comparison of techniques and results is meaningless. Under Moore's guidance, the group discussed the possibility of designing an algorithm that uses a number of well-chosen assumption sets and switches between them according to their empirical validity.
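The idea of switching between assumption sets according to their empirical validity can be illustrated with a minimal sketch. This is not an algorithm from the workshop; the models, window size, and all names below are hypothetical. Two predictors embody different assumption sets (a constant signal vs. a linear one), and a switcher always acts on whichever predictor has had the lowest recent empirical error.

```python
class ConstantModel:
    """Assumption set A (hypothetical): the signal is constant; predict the running mean."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self, x):
        return self.mean
    def update(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class LinearModel:
    """Assumption set B (hypothetical): y is linear in x; fit by online least squares."""
    def __init__(self):
        self.sx = self.sy = self.sxx = self.sxy = 0.0
        self.n = 0
    def predict(self, x):
        if self.n < 2:
            return 0.0
        denom = self.n * self.sxx - self.sx ** 2
        if denom == 0:
            return self.sy / self.n
        a = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - a * self.sx) / self.n
        return a * x + b
    def update(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

def switching_predict(stream, models, window=20):
    """Predict each point with the currently most-valid model, then update all models."""
    errors = {id(m): [] for m in models}
    preds = []
    for x, y in stream:
        # Empirical validity: mean squared error over a recent window.
        def recent_error(m):
            tail = errors[id(m)][-window:]
            return sum(tail) / max(len(tail), 1)
        best = min(models, key=recent_error)
        preds.append(best.predict(x))
        for m in models:
            errors[id(m)].append((m.predict(x) - y) ** 2)
            m.update(x, y)
    return preds

# On an exactly linear stream, the linear assumption set should win
# once it has seen enough data, and the switcher should follow it.
data = [(x, 2.0 * x + 1.0) for x in range(50)]
preds = switching_predict(data, [ConstantModel(), LinearModel()])
```

The design choice worth noting is that validity is assessed on held-out-in-time predictions (each model is scored before it sees the new label), so the switcher compares assumption sets on genuine forecasting error rather than on fit to already-seen data.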







Training Connectionist Networks with Queries and Selective Sampling

Neural Information Processing Systems

"Selective sampling" is a form of directed search that can greatly increase the ability of a connectionist network to generalize accurately. Based on information from previous batches of samples, a network may be trained on data selectively sampled from regions in the domain that are unknown. This is realizable in cases when the distribution is known, or when the cost of drawing points from the target distribution is negligible compared to the cost of labeling them with the proper classification. The approach is justified by its applicability to the problem of training a network for power system security analysis. The benefits of selective sampling are studied analytically, and the results are confirmed experimentally. 1 Introduction: Random Sampling vs. Directed Search A great deal of attention has been applied to the problem of generalization based on random samples drawn from a distribution, frequently referred to as "learning from examples." Many natural learning learning systems however, do not simply rely on this passive learning technique, but instead make use of at least some form of directed search to actively examine the problem domain. In many problems, directed search is provably more powerful than passively learning from randomly given examples.