Current advances in Artificial Intelligence (AI) and machine learning in general, and in deep learning in particular, have had unprecedented impact not only across research communities but also in popular media channels. However, influential thinkers have raised concerns about the interpretability and accountability of AI. In spite of the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms, integrated with deep learning-based systems, to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two of the most fundamental cognitive abilities: the ability to learn from the environment and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks with the reasoning and interpretability of symbolic representations. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: a principled integration of neural learning with symbolic knowledge representation and reasoning that allows for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.
Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely applied in relational and symbolic domains, including combinatorial optimization, constraint satisfaction, relational reasoning, and other scientific domains. The need for improved explainability, interpretability, and trust in AI systems demands principled methodologies, such as those suggested by neural-symbolic computing. In this paper, we review the state of the art in the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
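The core mechanism that makes GNNs suitable for such relational domains is message passing: each node repeatedly aggregates features from its neighbours and updates its own state. The following is a minimal sketch of one such step; all function names, weight shapes, and the toy graph are illustrative assumptions, not taken from any specific model in the surveyed literature.

```python
# Minimal sketch of one GNN message-passing step (illustrative only).
import numpy as np

def message_passing_step(adj, h, w_self, w_neigh):
    """One layer: each node sums its neighbours' features (via the
    adjacency matrix), combines them with its own state through two
    learned weight matrices, and applies a ReLU nonlinearity."""
    agg = adj @ h                      # sum of neighbour features per node
    return np.maximum(0.0, h @ w_self + agg @ w_neigh)

# Toy graph: 3 nodes in a chain 0-1-2, with 2-dimensional features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
h = np.eye(3, 2)                       # initial node features
rng = np.random.default_rng(0)
w_self = rng.normal(size=(2, 2))       # hypothetical learned weights
w_neigh = rng.normal(size=(2, 2))

h1 = message_passing_step(adj, h, w_self, w_neigh)
print(h1.shape)                        # one update preserves (nodes, features)
```

Stacking several such steps lets information propagate along relational structure, which is why the same template reappears in combinatorial optimization and constraint-satisfaction applications of GNNs.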
The Neural Information Processing Systems (NIPS) workshop on hybrid neural symbolic integration, organized by Stefan Wermter and Ron Sun, was held on 4 to 5 December 1998 (right after the NIPS main conference). In this well-attended workshop, 27 papers were presented, among them 8 invited talks in this research area. Overall, the workshop was wide-ranging in scope, covering the essential aspects and strands of hybrid systems research, and successfully addressed many important issues in the field. Two panels were also held; the panel entitled "Issues of Representation in Hybrid Models" was chaired by Sun.
In this article, we describe some recent results and trends concerning hybrid neural symbolic systems, based on the Neural Information Processing Systems (NIPS) workshop on hybrid neural symbolic integration, organized by Stefan Wermter and Ron Sun and held on 4 to 5 December 1998 in Breckenridge, Colorado.
Garcez, Artur d'Avila (City University London) | Besold, Tarek R. (Universitaet Osnabrueck) | Raedt, Luc de (KU Leuven) | Földiak, Peter (University of St. Andrews) | Hitzler, Pascal (Wright State University) | Icard, Thomas (Stanford University) | Kühnberger, Kai-Uwe (Universitaet Osnabrueck) | Lamb, Luis C. (Institute of Informatics, UFRGS) | Miikkulainen, Risto (University of Texas at Austin) | Silver, Daniel L. (Acadia University)
The goal of neural-symbolic computation is to integrate robust connectionist learning and sound symbolic reasoning. With the recent advances in connectionist learning, in particular deep neural networks, new forms of representation learning have emerged. However, such representations have not yet proved useful for reasoning. Results from neural-symbolic computation have been shown to offer powerful alternatives for knowledge representation, learning, and reasoning in neural computation. This paper recalls the main contributions and discusses key challenges for neural-symbolic integration identified at a recent Dagstuhl seminar.