Preface

AAAI Conferences

Artificial intelligence (AI) researchers continue to face major challenges in their quest to develop truly intelligent systems. Topics of interest at the workshop include the representation of symbolic knowledge by connectionist systems; integrated neural-symbolic learning approaches; extraction of symbolic knowledge from trained neural networks; integrated neural-symbolic reasoning; biologically-inspired neural-symbolic integration; integration of logic and probabilities in neural networks; structured learning and relational learning in neural networks; and applications in robotics, simulation, fraud prevention, the semantic web, software engineering, fault diagnosis, bioinformatics, visual intelligence, and so on.


The Present and the Future of Hybrid Neural Symbolic Systems: Some Reflections from the NIPS Workshop

AI Magazine

In this article, we describe some recent results and trends concerning hybrid neural symbolic systems based on a recent workshop on hybrid neural symbolic integration. The Neural Information Processing Systems (NIPS) workshop on hybrid neural symbolic integration, organized by Stefan Wermter and Ron Sun, was held on 4–5 December 1998 in Breckenridge, Colorado.


Symbolic Graph Reasoning Meets Convolutions

Neural Information Processing Systems

Beyond local convolution networks, we explore how to harness various forms of external human knowledge to endow the networks with the capability of semantic global reasoning. Rather than relying on separate graphical models (e.g., CRF) or constraints for modeling broader dependencies, we propose a new Symbolic Graph Reasoning (SGR) layer, which performs reasoning over a group of symbolic nodes whose outputs explicitly represent different properties of each semantic in a prior knowledge graph. To cooperate with local convolutions, each SGR layer is constituted by three modules: a) a primal local-to-semantic voting module, in which the features of all symbolic nodes are generated by voting from local representations; b) a graph reasoning module, which propagates information over the knowledge graph to achieve global semantic coherency; and c) a dual semantic-to-local mapping module, which learns new associations of the evolved symbolic nodes with local representations and accordingly enhances local features. The SGR layer can be injected between any convolution layers and instantiated with distinct prior graphs. Extensive experiments show that incorporating SGR significantly improves plain ConvNets on three semantic segmentation tasks and one image classification task.
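To make the three-module structure concrete, here is a minimal PyTorch sketch of an SGR-style layer. The class name, tensor shapes, row-normalized adjacency, and residual update are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a Symbolic Graph Reasoning (SGR) style layer (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SGRLayer(nn.Module):
    def __init__(self, local_dim, node_dim, adjacency):
        super().__init__()
        # adjacency: (N, N) edges of the prior knowledge graph over N symbolic nodes
        self.register_buffer(
            "adj", adjacency / adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        )
        num_nodes = adjacency.size(0)
        # a) local-to-semantic voting: each location votes for symbolic nodes
        self.vote = nn.Linear(local_dim, num_nodes)
        self.local_to_node = nn.Linear(local_dim, node_dim)
        # b) graph reasoning over the prior knowledge graph
        self.reason = nn.Linear(node_dim, node_dim)
        # c) semantic-to-local mapping back onto local features
        self.map_back = nn.Linear(local_dim, num_nodes)
        self.node_to_local = nn.Linear(node_dim, local_dim)

    def forward(self, x):
        # x: (B, C, H, W) local convolutional features
        b, c, h, w = x.shape
        locs = x.flatten(2).transpose(1, 2)                 # (B, HW, C)

        # a) voting: soft assignment of locations to symbolic nodes
        assign = F.softmax(self.vote(locs), dim=1)          # (B, HW, N)
        nodes = assign.transpose(1, 2) @ self.local_to_node(locs)   # (B, N, D)

        # b) propagate information along knowledge-graph edges
        nodes = F.relu(self.reason(self.adj @ nodes))       # (B, N, D)

        # c) map evolved node features back to each location (residual update)
        attn = F.softmax(self.map_back(locs), dim=2)        # (B, HW, N)
        enhanced = locs + self.node_to_local(attn @ nodes)  # (B, HW, C)
        return enhanced.transpose(1, 2).reshape(b, c, h, w)


# Toy usage: inject between two conv layers with a 5-node prior graph.
adj = torch.eye(5) + torch.rand(5, 5).round()
layer = SGRLayer(local_dim=64, node_dim=32, adjacency=adj)
out = layer(torch.randn(2, 64, 16, 16))
print(out.shape)  # torch.Size([2, 64, 16, 16])
```

Because the layer maps back to the same local feature shape, it can in principle be dropped between any two convolution stages, which is the property the abstract highlights.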


Combining Sub-Symbolic and Symbolic Methods for Explainability

arXiv.org Artificial Intelligence

A number of sub-symbolic approaches have been developed to provide insights into the decision-making process of graph neural networks (GNNs). These are important first steps on the way to explainability, but the generated explanations are often hard to understand for users who are not AI experts. To overcome this problem, we introduce a conceptual approach that combines sub-symbolic and symbolic methods for human-centric explanations incorporating domain knowledge and causality. We furthermore introduce the notion of fidelity as a metric for evaluating how close an explanation is to the GNN's internal decision-making process. An evaluation with a chemical dataset and ontology shows the explanatory value and reliability of our method.
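As a rough illustration of a fidelity-style score, the sketch below follows the common XAI reading of fidelity: how often predictions based only on the explanation agree with the model's original predictions. The masking scheme, the scikit-learn classifier, and the toy data are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of a generic fidelity score (assumed definition, not the paper's).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # only features 0 and 1 carry signal

model = RandomForestClassifier(random_state=0).fit(X, y)


def fidelity(model, X, explanation_mask, baseline=0.0):
    """Fraction of samples whose prediction is unchanged when every feature
    outside the explanation is replaced by a baseline value."""
    X_masked = np.where(explanation_mask, X, baseline)
    return float(np.mean(model.predict(X_masked) == model.predict(X)))


good_expl = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)  # keeps the true signal
poor_expl = np.array([0, 0, 1, 1, 0, 0, 0, 0], dtype=bool)  # keeps irrelevant features
print(fidelity(model, X, good_expl))  # expected to be high
print(fidelity(model, X, poor_expl))  # expected to be noticeably lower
```

For GNNs the same idea applies with an explanation subgraph in place of a feature mask: mask out everything outside the explanation and check whether the model's prediction survives.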


What is Neural-Symbolic Integration?

#artificialintelligence

Historically, the two broad streams of symbolic and sub-symbolic approaches to AI evolved largely separately, with each camp focusing on selected narrow problems of its own. Originally, researchers favored the discrete, symbolic approaches to AI, targeting problems ranging from knowledge representation, reasoning, and planning to automated theorem proving. While the particular techniques in symbolic AI varied greatly, the field was largely based on mathematical logic, which was seen as the proper ("neat") representation formalism for most of the underlying concepts of symbol manipulation. With this formalism in mind, people designed large knowledge bases, expert and production rule systems, and specialized programming languages for AI. These symbolic logic representations were then also commonly used in the machine learning (ML) sub-domain, particularly in the form of Inductive Logic Programming (discussed in the previous article), which introduced the powerful ability to incorporate background knowledge into learning models and algorithms. Among the main advantages of this logic-based approach to ML are its transparency to humans, support for deductive reasoning, inclusion of expert knowledge, and structured generalization from small data.