Goto

Collaborating Authors

Zhang, Mingyi


Benchmarking Continual Learning from Cognitive Perspectives

arXiv.org Artificial Intelligence

Continual learning addresses the problem of continuously acquiring and transferring knowledge without catastrophically forgetting old concepts. While humans achieve continual learning via diverse neurocognitive mechanisms, there is a mismatch between the cognitive properties of human learning and the evaluation methods applied to continual learning models. First, measurement of continual learning models mostly relies on micro-level evaluation metrics, which cannot characterize the cognitive capacities of a model. Second, the measurement is method-specific, emphasizing a model's strengths in one aspect while obscuring potential weaknesses in others. To address these issues, we propose to integrate model cognitive capacities and evaluation metrics into a unified evaluation paradigm. We first characterize model capacities via desiderata derived from the cognitive properties supporting human continual learning. The desiderata concern (1) adaptability across varying lengths of task sequence; (2) sensitivity to dynamic task variations; and (3) efficiency in memory usage and training time. We then design evaluation protocols for each desideratum to assess the cognitive capacities of recent continual learning models. Experimental results show that no method we consider satisfies all the desiderata, and all remain far from realizing truly continual learning. Although some methods exhibit some degree of adaptability and efficiency, no method is able to identify task relationships when encountering dynamic task variations, or to achieve a trade-off in learning similarities and differences between tasks. Inspired by these results, we discuss the factors that influence model performance on these desiderata and provide guidance for improving continual learning models.
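The micro-level metrics the abstract contrasts with the cognitive desiderata are typically computed from a task-accuracy matrix. A minimal sketch of two standard ones, average accuracy and backward transfer (this is generic continual-learning bookkeeping, not the paper's own protocol; all names and numbers are illustrative):

```python
import numpy as np

def average_accuracy(R):
    """Mean test accuracy over all tasks after training on the last one.
    R[i, j] = accuracy on task j after training on tasks 0..i."""
    return float(R[-1].mean())

def backward_transfer(R):
    """How much earlier tasks degrade after later training.
    Negative values indicate catastrophic forgetting."""
    T = R.shape[0]
    return float(np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)]))

# Accuracy matrix for a 3-task sequence (illustrative numbers).
R = np.array([[0.9, 0.0, 0.0],
              [0.7, 0.8, 0.0],
              [0.6, 0.7, 0.9]])
print(average_accuracy(R))   # ~0.733
print(backward_transfer(R))  # ~ -0.2: earlier tasks degraded after later training
```

Desiderata (1) and (3) above would then be probed by recomputing such metrics while varying the sequence length and logging memory and time, rather than reporting a single final number.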


Neuro-Symbolic Recommendation Model based on Logic Query

arXiv.org Artificial Intelligence

A recommendation system assists users in finding items that are relevant to them. Existing recommendation models are primarily based on predicting relationships between users and items, using complex matching models or extensive external information to capture association patterns in the data. However, recommendation is not only a problem of inductive statistics over data; it is also a cognitive task of reasoning about decisions based on knowledge extracted from information. Hence, a logic system could naturally be incorporated for reasoning in a recommendation task. However, although hard-rule approaches based on logic systems provide powerful reasoning ability, they struggle to cope with the inconsistent and incomplete knowledge found in real-world tasks, especially complex tasks such as recommendation. Therefore, in this paper we propose a neuro-symbolic recommendation model, which transforms a user's historical interactions into a logic expression and then transforms the recommendation prediction into a query task over this logic expression. The logic expressions are computed via modular logic operations implemented as neural networks. We also construct an implicit logic encoder to reasonably reduce the complexity of the logic computation. Finally, a user's items of interest can be queried in the vector space based on the computation results. Experiments on three well-known datasets verify that our method outperforms state-of-the-art shallow, deep, session-based, and reasoning models.
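The query formulation can be illustrated with a toy sketch: a user's history becomes a logic expression over item embeddings, the logic operators are (in the paper) learned neural modules, and recommendation reduces to nearest-neighbour scoring against the query vector. Everything below is a stub with random parameters, purely to show the data flow; none of these names come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension

# Item embeddings (hypothetical; learned jointly in the actual model).
items = {name: rng.normal(size=D) for name in ["a", "b", "c", "d"]}

# Modular "logic operator" networks, stubbed here as single random
# linear maps with a tanh; the paper learns these as neural modules.
W_and = rng.normal(size=(D, 2 * D)) * 0.1
W_not = rng.normal(size=(D, D)) * 0.1

def AND(x, y):
    return np.tanh(W_and @ np.concatenate([x, y]))

def NOT(x):
    return np.tanh(W_not @ x)

# Encode a history "liked a and b, disliked c" as the logic
# expression (a AND b) AND (NOT c), evaluated in vector space.
query = AND(AND(items["a"], items["b"]), NOT(items["c"]))

def score(item_vec, q):
    """Cosine similarity between an item and the query vector."""
    return float(q @ item_vec /
                 (np.linalg.norm(q) * np.linalg.norm(item_vec) + 1e-9))

ranked = sorted(items, key=lambda n: score(items[n], query), reverse=True)
print(ranked)  # items ordered by similarity to the evaluated expression
```

The implicit logic encoder mentioned in the abstract would sit between the raw expression and these operator calls, compressing the expression before evaluation.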


A Well-Founded Semantics for Basic Logic Programs with Arbitrary Abstract Constraint Atoms

AAAI Conferences

Logic programs with abstract constraint atoms, proposed by Marek and Truszczynski, are very general logic programs. They are general enough to capture aggregate logic programs as well as recently proposed description logic programs. In this paper, we propose a well-founded semantics for basic logic programs with arbitrary abstract constraint atoms, which are sets of rules whose heads have exactly one atom. We show that, similar to the well-founded semantics of normal logic programs, it has many desirable properties, such as that it can be computed in polynomial time and is always correct with respect to the answer set semantics. This paves the way for using our well-founded semantics to simplify these logic programs. We also show how our semantics can be applied to aggregate logic programs and description logic programs, and compare it to the well-founded semantics already proposed for these logic programs.
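For the classical special case the abstract builds on, normal logic programs, the polynomial-time computability of the well-founded semantics can be seen from the alternating-fixpoint construction, sketched below (ordinary negation-as-failure rules only; abstract constraint atoms are not modelled here):

```python
# A rule is (head_atom, positive_body_atoms, negative_body_atoms).

def gamma(rules, interp):
    """Least model of the Gelfond-Lifschitz reduct w.r.t. `interp`:
    drop rules whose negative body meets `interp`, then forward-chain."""
    reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & interp)]
    derived, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if pos <= derived and h not in derived:
                derived.add(h)
                changed = True
    return derived

def well_founded(rules, atoms):
    """Alternating fixpoint: gamma is antimonotone, so gamma o gamma is
    monotone; its least fixpoint gives the well-founded true atoms."""
    true = set()
    while True:
        over = gamma(rules, true)      # overestimate of the true atoms
        new_true = gamma(rules, over)  # refined underestimate
        if new_true == true:
            false = atoms - over
            return true, false, atoms - true - false  # true / false / undefined
        true = new_true

# p :- not q.   q :- not p.   r :- p.   s.
rules = [("p", set(), {"q"}), ("q", set(), {"p"}),
         ("r", {"p"}, set()), ("s", set(), set())]
atoms = {"p", "q", "r", "s"}
t, f, u = well_founded(rules, atoms)
print(sorted(t), sorted(f), sorted(u))  # ['s'] [] ['p', 'q', 'r']
```

Each call to `gamma` is a polynomial forward-chaining pass, and the set of true atoms grows monotonically across outer iterations, giving the overall polynomial bound; `p` and `q` remain undefined, matching the three-valued reading.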


Language Splitting and Relevance-Based Belief Change in Horn Logic

AAAI Conferences

This paper presents a framework for relevance-based belief change in propositional Horn logic. We first establish a parallel interpolation theorem for Horn logic and show that Parikh's Finest Splitting Theorem holds for Horn formulae. By reformulating Parikh's relevance criterion in the setting of Horn belief change, we construct a relevance-based partial meet Horn contraction operator and provide a representation theorem for it. Interestingly, we find that this contraction operator can be fully characterised by Delgrande and Wassermann's postulates for partial meet Horn contraction together with Parikh's relevance postulate, without requiring any change to the postulates, which is qualitatively different from the case in classical propositional logic.
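Partial meet contraction itself is mechanical enough to sketch over a finite Horn base: compute the remainders (maximal subsets not entailing the sentence), pick some of them with a selection function, and intersect. The sketch below contracts by a single atom and uses full meet selection; it operates on finite bases rather than logically closed belief sets, and it does not model the paper's relevance criterion:

```python
from itertools import combinations

# A Horn clause is (frozenset_of_body_atoms, head_atom); an empty
# body makes the clause a fact.

def entails(clauses, atom):
    """Forward chaining: does the Horn base derive `atom`?"""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return atom in derived

def remainders(base, atom):
    """Maximal subsets of `base` that do not entail `atom`."""
    base = list(base)
    out = []
    for r in range(len(base), -1, -1):  # largest subsets first
        for subset in combinations(base, r):
            s = set(subset)
            if not entails(s, atom) and not any(s < m for m in out):
                out.append(s)
    return out

def partial_meet_contraction(base, atom, select):
    rems = remainders(base, atom)
    chosen = select(rems) if rems else [set(base)]
    return set.intersection(*chosen)

# Base: facts a, b and rule a & b -> c; contract by c.
base = {(frozenset(), "a"), (frozenset(), "b"), (frozenset({"a", "b"}), "c")}
result = partial_meet_contraction(base, "c", select=lambda rems: rems)
# result == set(): full meet selection is drastic here.
```

In this example the three remainders share no clause, so full meet removes everything; relevance criteria in Parikh's style, of the kind the paper studies, aim to constrain the selection function so that material irrelevant to the contracted sentence survives.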