FF-NSL: Feed-Forward Neural-Symbolic Learner
Daniel Cunnington, Mark Law, Alessandra Russo, Jorge Lobo
Logic-based machine learning [1, 2] learns interpretable knowledge expressed in the form of a logic program, called a hypothesis, that explains labelled examples in the context of (optional) background knowledge. Recent logic-based machine learning systems have demonstrated the ability to learn highly complex and noise-tolerant hypotheses in a data-efficient manner (e.g., Learning from Answer Sets (LAS) [2]). However, they require labelled examples to be specified in a structured logical form, which limits their applicability to many real-world problems. On the other hand, differentiable learning systems, such as (deep) neural networks, are able to learn directly from unstructured data, but they require large amounts of training data and their learned models are difficult to interpret [3]. Within neural-symbolic artificial intelligence, many approaches aim to integrate neural and symbolic systems in a way that preserves the benefits of both paradigms [4, 5]. Most neural-symbolic integrations assume the existence of pre-defined knowledge expressed symbolically, or logically, and focus on training a neural network to extract symbolic features from raw unstructured data [6-10]. In this paper, we introduce the Feed-Forward Neural-Symbolic Learner (FFNSL), a neural-symbolic learning framework that assumes the opposite: given a pre-trained neural network, FFNSL uses a noise-tolerant logic-based machine learning system to learn a logic-based hypothesis whose symbolic features are constructed from the neural network's predictions.
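To make the feed-forward pipeline concrete, the following is a minimal Python sketch of the architecture the abstract describes: a pre-trained network maps raw inputs to symbolic predictions, the predictions (with their confidences) are encoded as weighted examples, and a noise-tolerant LAS system learns a hypothesis from them. All names here (`extract_symbols`, `to_las_examples`, `learn_hypothesis`, the `digit/1` atom, the target atom, and the exact command line for invoking the learner) are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of an FFNSL-style pipeline; not the paper's implementation.
import subprocess
import torch

def extract_symbols(model: torch.nn.Module, images: torch.Tensor):
    """Run the pre-trained network and keep each predicted label
    together with its softmax confidence."""
    with torch.no_grad():
        probs = torch.softmax(model(images), dim=1)
        conf, labels = probs.max(dim=1)
    return list(zip(labels.tolist(), conf.tolist()))

def to_las_examples(predictions, target: str) -> str:
    """Encode predictions as weighted context-dependent examples.
    Penalties derived from network confidence let a noise-tolerant
    LAS learner discount examples that are likely mispredictions.
    The digit/1 atom is an assumed MNIST-style symbolic feature."""
    examples = []
    for i, (label, conf) in enumerate(predictions):
        penalty = max(1, round(100 * conf))  # integer weight for the learner
        examples.append(
            f"#pos(eg{i}@{penalty}, {{{target}}}, {{}}, {{ digit({label}). }})."
        )
    return "\n".join(examples)

def learn_hypothesis(task_file: str) -> str:
    """Invoke an external LAS system (e.g., ILASP or FastLAS) on a task
    file containing background knowledge, mode bias, and the weighted
    examples. The exact binary name and flags are assumptions; consult
    the chosen system's documentation."""
    result = subprocess.run(["ILASP", task_file], capture_output=True, text=True)
    return result.stdout
```

The key design point this sketch illustrates is the direction of information flow: the network is fixed and feeds symbolic features forward into the learner, rather than the usual neural-symbolic setup where fixed symbolic knowledge supervises the training of the network.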
arXiv.org Artificial Intelligence
Jan-5-2023
- Country:
  - Europe (1.00)
  - North America > United States (1.00)
- Genre:
  - Research Report (1.00)
- Industry:
  - Government (0.67)
  - Information Technology (0.46)