

Neural-Symbolic Learning and Reasoning: Contributions and Challenges

AAAI Conferences

The goal of neural-symbolic computation is to integrate robust connectionist learning and sound symbolic reasoning. With recent advances in connectionist learning, in particular deep neural networks, powerful forms of representation learning have emerged. However, such representations have not yet proven useful for reasoning. Neural-symbolic computation has been shown to offer powerful alternatives for knowledge representation, learning, and reasoning in neural computation. This paper recalls the main contributions and discusses key challenges for neural-symbolic integration that were identified at a recent Dagstuhl seminar.
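As an illustration of how symbolic knowledge can be represented in neural computation, the sketch below translates a small propositional rule base into network weights, in the spirit of classic rule-to-network translation algorithms such as KBANN and C-IL2P. The rule set, the weight magnitude W, and the fixpoint iteration are illustrative assumptions, not the paper's own construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode_rules(atoms, rules, W=5.0):
    """Translate propositional rules into a one-hidden-layer network:
    one hidden unit per rule, firing only when every antecedent literal
    holds; one output unit per atom, firing when any of its rules fires."""
    idx = {a: i for i, a in enumerate(atoms)}
    n, m = len(atoms), len(rules)
    W1, b1 = np.zeros((n, m)), np.zeros(m)
    W2, b2 = np.zeros((m, n)), np.full(n, -0.5 * W)
    for j, (head, body) in enumerate(rules):
        p = 0                                  # count of positive literals
        for literal, positive in body:
            W1[idx[literal], j] = W if positive else -W
            p += positive
        b1[j] = -(p - 0.5) * W                 # threshold midway between a
        W2[j, idx[head]] = W                   # satisfied and violated body
    return lambda x: sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

# Illustrative background knowledge:
#   bird <- sparrow ;  fly <- bird, not penguin
atoms = ["sparrow", "penguin", "bird", "fly"]
rules = [("bird", [("sparrow", True)]),
         ("fly",  [("bird", True), ("penguin", False)])]
net = encode_rules(atoms, rules)

x = np.zeros(4); x[0] = 1.0                    # observe: sparrow
for _ in range(2):                             # iterate toward the fixpoint
    x = np.maximum(x, net(x))                  # clamp observed facts
print(dict(zip(atoms, x.round(2))))            # bird and fly become active
```

Because rules are stored as ordinary weights, the encoded knowledge can subsequently be refined by gradient-based learning, which is the sense in which such translations bridge symbolic representation and connectionist learning.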


Lifelong Machine Learning Systems: Beyond Learning Algorithms

AAAI Conferences

Lifelong Machine Learning, or LML, considers systems that can learn many tasks from one or more domains over their lifetimes. The goal is to sequentially retain learned knowledge and to selectively transfer that knowledge when learning a new task so as to develop more accurate hypotheses or policies. Following a review of prior work on LML, we propose that it is now appropriate for the AI community to move beyond learning algorithms and to more seriously consider the nature of systems that are capable of learning over a lifetime. Reasons for our position are presented and potential counter-arguments are discussed. The remainder of the paper contributes by defining LML, presenting a reference framework that considers all forms of machine learning, and listing several key challenges for, and benefits of, LML research. We conclude with ideas for next steps to advance the field.
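To make "retain and selectively transfer" concrete, here is a minimal sketch of one possible LML loop: each learned task model is retained, and the retained model that best fits a new task's data is used as a prior when learning it. The linear models, the similarity test, and the LifelongLearner interface are illustrative assumptions, not the reference framework proposed in the paper.

```python
import numpy as np

class LifelongLearner:
    """Retain one model per task; selectively transfer the most relevant
    retained model as a prior when a new task arrives."""

    def __init__(self, lam=1.0):
        self.models = {}        # retained knowledge: task name -> weights
        self.lam = lam          # strength of the transfer prior

    def _select_source(self, X, y):
        # Selective transfer: reuse the retained model that already
        # explains the new task's data best (lowest squared error).
        best, best_err = np.zeros(X.shape[1]), np.inf
        for w in self.models.values():
            err = np.mean((X @ w - y) ** 2)
            if err < best_err:
                best, best_err = w, err
        return best

    def learn_task(self, name, X, y):
        w_prior = self._select_source(X, y)
        # Ridge regression biased toward the transferred weights:
        #   min_w ||Xw - y||^2 + lam * ||w - w_prior||^2
        d = X.shape[1]
        w = np.linalg.solve(X.T @ X + self.lam * np.eye(d),
                            X.T @ y + self.lam * w_prior)
        self.models[name] = w   # retention for future transfer
        return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
learner = LifelongLearner(lam=5.0)
X_a = rng.normal(size=(100, 3))
learner.learn_task("task_A", X_a, X_a @ w_true)
X_b = rng.normal(size=(5, 3))   # few examples: transfer from task_A helps
w_b = learner.learn_task("task_B", X_b, X_b @ (w_true + 0.1))
```

The point of the sketch is the system-level shape rather than the learning algorithm: retention (the model store) and selective transfer (the source-selection step) are components any learner could be plugged into.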


The Consolidation of Task Knowledge for Lifelong Machine Learning

AAAI Conferences

Lifelong Machine Learning (LML) considers situations in which a learner faces a series of tasks over a lifetime. An LML system requires a method of using prior knowledge to learn models for new tasks as efficiently and effectively as possible, and a method of retaining task knowledge after it has been learned. Knowledge retention is necessary for a lifelong learning system; however, it is not sufficient. We propose that domain knowledge must be integrated for efficient and effective retention and for more efficient and effective transfer during future learning. We define this process of integration as consolidation. The challenge for an LML system is to consolidate the knowledge of a new task while maintaining, and possibly improving, knowledge of prior tasks; this requires a solution to the stability-plasticity problem. This paper summarizes the author's prior work on the consolidation problem within various LML systems.
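One way to attack the stability-plasticity problem named above is consolidation through rehearsal: while a new task is integrated into a shared network, pseudo-examples generated from the network itself keep prior task knowledge stable. The sketch below assumes a small multi-head network; the class and function names, the random pseudo-input scheme, and all hyperparameters are illustrative, not the author's exact systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConsolidatedNet:
    """A shared hidden layer with one output head (column) per task."""
    def __init__(self, n_in, n_hidden, n_tasks):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_tasks))
        self.b2 = np.zeros(n_tasks)

    def forward(self, X):
        H = sigmoid(X @ self.W1 + self.b1)
        return H, H @ self.W2 + self.b2

    def train(self, X, Y, mask, lr=0.1, epochs=2000):
        # mask[i, t] = 1 where example i carries a target for task t
        for _ in range(epochs):
            H, Yhat = self.forward(X)
            E = (Yhat - Y) * mask / len(X)      # masked squared-error grad
            dH = (E @ self.W2.T) * H * (1.0 - H)
            self.W2 -= lr * (H.T @ E); self.b2 -= lr * E.sum(axis=0)
            self.W1 -= lr * (X.T @ dH); self.b1 -= lr * dH.sum(axis=0)

def consolidate(net, X_new, y_new, new_task, prior_tasks, n_pseudo=200):
    # 1) Pseudo-rehearsal: sample random inputs and record the network's
    #    current responses on the previously consolidated tasks.
    X_ps = rng.uniform(-1, 1, (n_pseudo, X_new.shape[1]))
    _, Y_ps = net.forward(X_ps)
    mask_ps = np.zeros_like(Y_ps); mask_ps[:, prior_tasks] = 1.0
    # 2) Real examples of the new task supervise only its own head.
    Y_new = np.zeros((len(X_new), Y_ps.shape[1])); Y_new[:, new_task] = y_new
    mask_new = np.zeros_like(Y_new); mask_new[:, new_task] = 1.0
    # 3) Retrain on the union: plasticity for the new task,
    #    stability for the old ones.
    net.train(np.vstack([X_new, X_ps]), np.vstack([Y_new, Y_ps]),
              np.vstack([mask_new, mask_ps]))

# Learn task 0, then consolidate task 1 without overwriting task 0.
net = ConsolidatedNet(n_in=2, n_hidden=10, n_tasks=2)
X0 = rng.uniform(-1, 1, (100, 2))
consolidate(net, X0, X0[:, 0] * X0[:, 1], new_task=0, prior_tasks=[])
X1 = rng.uniform(-1, 1, (100, 2))
consolidate(net, X1, X1[:, 0] + X1[:, 1], new_task=1, prior_tasks=[0])
```

Rehearsing functionally generated pseudo-examples, rather than storing raw training data, is one answer to the retention question the abstract raises: the shared representation stays plastic for the new task while its behavior on prior tasks is anchored.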