Neural Relational Learning Through Semi-Propositionalization of Bottom Clauses

AAAI Conferences

Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis in social networks. The CILP++ system is a neural-symbolic system that performs efficient relational learning by encoding first-order logic knowledge into a neural network. CILP++ relies on BCP, a recently introduced propositionalization algorithm, to perform relational learning. However, knowledge extraction from such networks remains an open issue: the features generated by BCP have no independent relational description, which prevents sound knowledge extraction. We present a methodology for generating independent propositional features for BCP by using semi-propositionalization of bottom clauses. Empirical results, compared with the original version of BCP, show comparable accuracy and runtimes, while allowing a proper relational representation of features for knowledge extraction from CILP++ networks.
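
As a rough, hypothetical illustration of the propositionalization idea that BCP builds on (not the CILP++/BCP implementation itself), the Python sketch below maps each relational example, represented here by the body literals of its bottom clause, to a binary feature vector that a standard neural network could consume. The predicate names and the propositionalize helper are assumptions made for illustration only.

    # Minimal sketch, assuming each example is already given as the set of
    # body literals of its bottom clause (strings such as "parent(A,B)").
    from typing import FrozenSet, List

    def propositionalize(examples: List[FrozenSet[str]]) -> List[List[int]]:
        """Map each example to a 0/1 vector over all literals seen in the data."""
        feature_space = sorted(set().union(*examples))  # fixed literal ordering
        return [[1 if lit in ex else 0 for lit in feature_space] for ex in examples]

    # Illustrative bottom-clause bodies for two examples (predicates are made up).
    e1 = frozenset({"parent(A,B)", "parent(B,C)"})
    e2 = frozenset({"parent(A,B)", "sibling(B,C)"})
    print(propositionalize([e1, e2]))  # [[1, 1, 0], [1, 0, 1]] under the sorted ordering

In this simplified view, each distinct literal becomes one propositional feature; the contribution described in the abstract is to generate features that retain an independent relational description, so that rules can later be extracted from the trained network.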


Neural-Symbolic Learning and Reasoning: Contributions and Challenges

AAAI Conferences

The goal of neural-symbolic computation is to integrate robust connectionist learning and sound symbolic reasoning. With the recent advances in connectionist learning, in particular deep neural networks, forms of representation learning have emerged. However, such representations have not become useful for reasoning. Results from neural-symbolic computation have been shown to offer powerful alternatives for knowledge representation, learning, and reasoning in neural computation. This paper recalls the main contributions and discusses key challenges for neural-symbolic integration which have been identified at a recent Dagstuhl seminar.


Reports of the AAAI 2012 Conference Workshops

AI Magazine

The AAAI-12 Workshop program was held Sunday and Monday, July 22–23, 2012 at the Sheraton Centre Toronto Hotel in Toronto, Ontario, Canada. The AAAI-12 workshop program included 9 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were Activity Context Representation: Techniques and Languages, AI for Data Center Management and Cloud Computing, Cognitive Robotics, Grounding Language for Physical Systems, Human Computation, Intelligent Techniques for Web Personalization and Recommendation, Multiagent Pathfinding, Neural-Symbolic Learning and Reasoning, Problem Solving Using Classical Planners, and Semantic Cities. This article presents short summaries of those events.


A Neural-Symbolic Cognitive Agent with a Mind’s Eye

AAAI Conferences

The DARPA Mind’s Eye program seeks to develop in machines a capability that currently exists only in animals: visual intelligence. This paper describes a Neural-Symbolic Cognitive Agent that integrates neural learning, symbolic knowledge representation, and temporal reasoning in a visual intelligence system that can reason about the actions of entities observed in video. Results have shown that the system is able to learn and represent the underlying semantics of the actions from observation and to use this for several visual intelligence tasks, such as recognition, description, anomaly detection, and gap-filling.


Reports of the AAAI 2010 Conference Workshops

AI Magazine

The AAAI-10 Workshop program was held Sunday and Monday, July 11–12, 2010 at the Westin Peachtree Plaza in Atlanta, Georgia. The AAAI-10 workshop program included 13 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were AI and Fun, Bridging the Gap between Task and Motion Planning, Collaboratively-Built Knowledge Sources and Artificial Intelligence, Goal-Directed Autonomy, Intelligent Security, Interactive Decision Theory and Game Theory, Metacognition for Robust Social Systems, Model Checking and Artificial Intelligence, Neural-Symbolic Learning and Reasoning, Plan, Activity, and Intent Recognition, Statistical Relational AI, Visual Representations and Reasoning, and Abstraction, Reformulation, and Approximation. This article presents short summaries of those events.