Collaborating Authors

Using a Domain-Independent Introspection Mechanism to Improve Memory Search (Angela C. Kennedy)

AAAI Conferences

Memory search is a basic-level cognitive task that plays an instrumental role in producing many human behaviors. As such, we have investigated the mental states and mechanisms of human subjects' analogical memory search in order to model them effectively in a computational problem solver. Three sets of these mental states and mechanisms appear to be important regardless of the task domain. First, subjects use knowledge goals as a filter when looking for relevant experiences in memory. Second, as an additional retrieval filter, they use a similarity metric that finds a solution in memory whose most important weakest preconditions are satisfied in the current state. This metric requires an explicit representation of the reasoner's beliefs about the relative importance of the preconditions; introspecting on those beliefs and adjusting them by comparing actual and expected performance can be used to improve the memory search. Third, by explicitly representing how much search the reasoner has undertaken, along with its required threshold for the exactness of a retrieved match, the reasoner can dynamically adjust its memory search based on the contents of its knowledge base.
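The retrieval scheme described above can be sketched in code. This is a minimal illustration, not the paper's actual implementation: the case representation, weight values, and function names are all assumptions introduced here for clarity. It shows the two retrieval filters (a knowledge-goal filter, then an importance-weighted precondition match) and the exactness threshold against which matches are accepted.

```python
# Hypothetical sketch of goal-filtered, importance-weighted memory retrieval.
# All names and data shapes are illustrative assumptions, not the paper's code.

def similarity(preconditions, current_state, weights):
    """Score a stored case by the importance-weighted fraction of its
    preconditions that hold in the current state."""
    total = sum(weights[p] for p in preconditions)
    satisfied = sum(weights[p] for p in preconditions if p in current_state)
    return satisfied / total if total else 0.0

def retrieve(memory, goal, current_state, weights, threshold):
    """First filter cases by knowledge goal, then return the best-matching
    case whose score meets the reasoner's exactness threshold (or None)."""
    candidates = [c for c in memory if c["goal"] == goal]
    scored = [(similarity(c["preconditions"], current_state, weights), c)
              for c in candidates]
    scored = [(s, c) for s, c in scored if s >= threshold]
    return max(scored, key=lambda sc: sc[0])[1] if scored else None
```

Introspective adjustment would then correspond to updating the `weights` (and lowering or raising `threshold`) when actual retrieval performance diverges from expectations.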

The FERMI System: Inducing Iterative Macro-operators from Experience

AAAI Conferences

Automated methods of exploiting past experience to reduce search vary from analogical transfer to chunking control knowledge. In the latter category, various forms of composing problem-solving operators into larger units have been explored. However, the automated formulation of effective macro-operators requires more than the storage and parametrization of individual linear operator sequences.
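The "storage and parametrization of individual linear operator sequences" that the abstract says is insufficient can still be usefully sketched, since it is the baseline FERMI improves on. Below is a standard STRIPS-style composition of two operators into a macro-operator; the operator representation and names are assumptions for illustration, not FERMI's representation, and FERMI's iterative macro-operators go beyond this linear composition.

```python
# Illustrative STRIPS-style composition of a linear operator sequence into a
# macro-operator. This is the baseline technique the abstract refers to, not
# FERMI's iterative macro-operator formulation.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    pre: frozenset     # preconditions
    add: frozenset     # add effects
    delete: frozenset  # delete effects

def compose(o1, o2):
    """Compose two operators applied in sequence into one macro-operator."""
    return Operator(
        name=f"{o1.name};{o2.name}",
        # o2's preconditions must hold beforehand unless o1 establishes them.
        pre=o1.pre | (o2.pre - o1.add),
        # o1's adds survive unless o2 deletes them.
        add=o2.add | (o1.add - o2.delete),
        # o1's deletes persist unless o2 re-adds them.
        delete=o2.delete | (o1.delete - o2.add),
    )
```

Folding `compose` over a recorded solution sequence yields one macro-operator per linear path, which is exactly the limited form of chunking the paper argues must be extended.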

alogic ay of

AAAI Conferences

The machine learning approaches to acquiring strategic knowledge typically start with a general problem solving engine and accumulate experience by analyzing its search episodes.

The Case for Case-Based Transfer Learning

AI Magazine

Case-based reasoning (CBR) is a problem-solving process in which a new problem is solved by retrieving a similar situation and reusing its solution. Transfer learning occurs when, after gaining experience from learning how to solve source problems, the same learner exploits this experience to improve performance and/or learning on target problems. In transfer learning, the differences between the source and target problems characterize the transfer distance. CBR can support transfer learning methods in multiple ways. We illustrate how CBR and transfer learning interact and characterize three approaches for using CBR in transfer learning: (1) as a transfer learning method, (2) for problem learning, and (3) to transfer knowledge between sets of problems. We describe examples of these approaches from our own and related work and discuss applicable transfer distances for each. We close with conclusions and directions for future research applying CBR to transfer learning.
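The retrieve-and-reuse cycle defined in the first sentence can be sketched as follows. This is a generic illustration of CBR, not any specific system from the article; the distance and adaptation functions are domain-dependent placeholders supplied by the caller, and in a transfer-learning setting the transfer distance would inform how aggressively `adapt` must modify the retrieved solution.

```python
# Minimal sketch of the CBR retrieve-and-reuse cycle. The case base format,
# distance function, and adaptation rule are illustrative assumptions.

def retrieve(case_base, problem, distance):
    """Retrieve the stored case whose problem is most similar to the new one."""
    return min(case_base, key=lambda case: distance(case["problem"], problem))

def solve(case_base, problem, distance, adapt):
    """Solve a new problem by retrieving a similar case and reusing
    (adapting) its solution to fit the new problem."""
    case = retrieve(case_base, problem, distance)
    return adapt(case["solution"], case["problem"], problem)
```

Retaining the newly solved problem back into the case base would complete the classic retrieve-reuse-revise-retain cycle, and corresponds to the "problem learning" role discussed above.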