The Case for Case-Based Transfer Learning

AI Magazine

Transfer learning occurs when, after gaining experience from learning how to solve source problems, the same learner exploits this experience to improve performance and learning on target problems. In transfer learning, the differences between the source and target problems characterize the transfer distance. Case-based reasoning (CBR) can support transfer learning methods in multiple ways. We illustrate how CBR and transfer learning interact and characterize three approaches for using CBR in transfer learning: (1) as a transfer learning method, (2) for problem learning, and (3) to transfer knowledge between sets of problems. We describe examples of these approaches from our own and related work and discuss applicable transfer distances for each.
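A minimal sketch of approach (1), using CBR itself as the transfer learning method: solved source problems are stored as cases, and the most similar case is reused when a target problem arrives. The feature-dictionary problem representation, the overlap similarity, and the verbatim-reuse step below are assumptions made for illustration, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Case:
    problem: dict    # feature -> value description of a solved source problem
    solution: str    # stored solution (a plan label in this toy example)

@dataclass
class CaseBasedTransferLearner:
    case_base: list = field(default_factory=list)

    def learn_source(self, problem: dict, solution: str) -> None:
        # Gaining experience on source problems: store each solved problem as a case.
        self.case_base.append(Case(problem, solution))

    def similarity(self, a: dict, b: dict) -> float:
        # Fraction of shared feature-value pairs; transfer distance grows as this shrinks.
        shared = sum(1 for k, v in a.items() if b.get(k) == v)
        return shared / max(len(a), len(b), 1)

    def solve_target(self, problem: dict) -> str:
        # Exploit source experience: reuse the solution of the most similar stored case.
        best = max(self.case_base, key=lambda c: self.similarity(c.problem, problem))
        return best.solution   # adaptation omitted; verbatim reuse suits near transfer

learner = CaseBasedTransferLearner()
learner.learn_source({"domain": "blocks", "goal": "stack", "size": 3}, "plan-A")
learner.learn_source({"domain": "blocks", "goal": "clear", "size": 2}, "plan-B")
print(learner.solve_target({"domain": "blocks", "goal": "stack", "size": 4}))  # -> plan-A

A fuller treatment would adapt the retrieved solution rather than copy it, and the choice of similarity measure determines which transfer distances the method can cover.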


Measuring the Level of Transfer Learning by an AP Physics Problem-solver

AAAI Conferences

Transfer learning is the ability of an agent to apply knowledge learned in previous tasks to new problems or domains. We approach this problem by focusing on model formulation, i.e., how to move from the unruly, broad set of concepts used in everyday life to a concise, formal vocabulary of abstractions that can be used effectively for problem solving. This paper describes how the Companions cognitive architecture uses analogical model formulation to learn to solve AP Physics problems. Our system starts with some basic mathematical skills, a broad common-sense ontology, and some qualitative mechanics, but no equations; it uses worked solutions to learn how to apply equations and modeling assumptions to AP Physics problems. We show that this process of analogical model formulation can facilitate learning across a range of transfer types, in an experiment administered by the Educational Testing Service.
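A sketch of the analogical model formulation step under simplifying assumptions: each worked solution is reduced to a set of relational facts plus the equation and modeling assumptions it applied, and the worked solution with the greatest factual overlap with a new problem is retrieved. Companions performs this retrieval and mapping with the Structure-Mapping Engine; the fact-overlap score below is a crude stand-in, and the facts, assumptions, and equations are invented for illustration.

from dataclasses import dataclass

@dataclass
class WorkedSolution:
    facts: frozenset     # relational description of the worked problem
    assumptions: tuple   # modeling assumptions the solution made
    equation: str        # equation the solution applied

def formulate_model(problem_facts: set, library: list):
    # Retrieve the most analogous worked solution and transfer its
    # equation and modeling assumptions to the new problem.
    best = max(library, key=lambda ws: len(ws.facts & problem_facts))
    return best.equation, best.assumptions

library = [
    WorkedSolution(frozenset({("isa", "ball", "RigidObject"),
                              ("event", "drop"), ("quantity", "height")}),
                   ("ignore-air-resistance",),
                   "h = 1/2 * g * t**2"),
    WorkedSolution(frozenset({("isa", "cart", "RigidObject"),
                              ("event", "push"), ("quantity", "force")}),
                   ("frictionless-surface",),
                   "F = m * a"),
]

new_problem = {("isa", "stone", "RigidObject"), ("event", "drop"),
               ("quantity", "height")}
print(formulate_model(new_problem, library))  # transfers the free-fall model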


Transfer Learning through Analogy in Games

AI Magazine

We report on a series of transfer learning experiments in game domains, in which we use structural analogy from one learned game to speed learning of another related game. We find that a major benefit of analogy is that it reduces the extent to which the source domain must be generalized before transfer. We describe two techniques in particular, minimal ascension and metamapping, that enable analogies to be drawn even when comparing descriptions using different relational vocabularies. Evidence for the effectiveness of these techniques is provided by a large-scale external evaluation, involving a substantial number of novel distant analogs.
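A sketch of the idea behind minimal ascension, under the assumption that the differing relational vocabularies are organized in a predicate taxonomy: two differently named predicates may be aligned if they reach a shared superordinate within a small number of steps. The toy taxonomy, the step limit, and the game predicates are illustrative only, not the paper's representation.

TAXONOMY = {               # child predicate -> parent predicate
    "checkmate": "capture-goal",
    "capture-flag": "capture-goal",
    "capture-goal": "goal",
    "move-piece": "action",
    "move-unit": "action",
}

def ancestors(pred: str) -> list:
    # Chain of superordinates for a predicate, nearest first.
    chain = []
    while pred in TAXONOMY:
        pred = TAXONOMY[pred]
        chain.append(pred)
    return chain

def can_align(source_pred: str, target_pred: str, max_steps: int = 2) -> bool:
    # Permit a cross-vocabulary match if the two predicates reach a common
    # superordinate within max_steps of ascension on each side.
    if source_pred == target_pred:
        return True
    src = [source_pred] + ancestors(source_pred)
    tgt = [target_pred] + ancestors(target_pred)
    return bool(set(src[:max_steps + 1]) & set(tgt[:max_steps + 1]))

print(can_align("checkmate", "capture-flag"))  # True: both ascend to capture-goal
print(can_align("checkmate", "move-unit"))     # False within two steps

In a full analogical matcher this test would be applied during structural alignment rather than pairwise, with mappings that require less ascension preferred over those that require more.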