Growth Patterns of Inference
What properties of a first-order search space support or hinder inference? What kinds of facts would be most effective to learn? Answering these questions is essential for understanding the dynamics of deductive reasoning and for creating large-scale knowledge-based learning systems that support efficient inference. We address these questions by developing a model of how the distribution of ground facts affects inference performance in search spaces. Experiments suggest that uniform search spaces are suitable for larger KBs, whereas search spaces with skewed degree distributions show better performance in smaller KBs. A sharp transition in Q/A performance is seen in some cases, suggesting that analysis of the structure of search spaces with existing knowledge should be used to guide the acquisition of new ground facts in learning systems.
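The abstract's contrast between uniform and skewed search spaces can be illustrated with a toy model. The sketch below is an assumption-laden stand-in, not the paper's experimental setup: a KB is modeled as a directed graph of ground facts, built either with uniformly chosen endpoints or with degree-proportional (skewed) endpoints, and Q/A performance is approximated as the fraction of random reachability queries answerable within a depth bound.

```python
import random
from collections import defaultdict

def make_kb(n_entities, n_facts, skewed, seed=0):
    """Build a KB as a directed graph of ground facts r(a, b).
    Uniform: endpoints chosen uniformly at random.
    Skewed: endpoints drawn with probability proportional to degree,
    giving a few highly connected hub entities (illustrative only)."""
    rng = random.Random(seed)
    edges = set()
    degree = [1] * n_entities
    while len(edges) < n_facts:
        if skewed:
            a, b = rng.choices(range(n_entities), weights=degree, k=2)
        else:
            a, b = rng.randrange(n_entities), rng.randrange(n_entities)
        if a != b and (a, b) not in edges:
            edges.add((a, b))
            degree[a] += 1
            degree[b] += 1
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    return adj

def answerable(adj, src, dst, max_depth):
    """Bounded chaining: can dst be reached from src in max_depth steps?"""
    frontier, seen = {src}, {src}
    for _ in range(max_depth):
        nxt = set()
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    nxt.add(v)
        if dst in seen:
            return True
        frontier = nxt
    return dst in seen

def qa_rate(adj, n_entities, n_queries, max_depth, seed=1):
    """Fraction of random queries answerable within the depth bound."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_queries):
        s, t = rng.randrange(n_entities), rng.randrange(n_entities)
        hits += answerable(adj, s, t, max_depth)
    return hits / n_queries
```

Comparing `qa_rate` across uniform and skewed KBs of varying size gives one crude way to probe the size-dependent crossover the abstract describes.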
Simulating Petri nets with Boolean Matrix Logic Programming
Ai, Lun, Muggleton, Stephen H., Liang, Shi-Shun, Baldwin, Geoff S.
Recent attention to relational knowledge bases has sparked a demand for understanding how relations between entities change. Petri nets can represent knowledge structure and dynamically simulate interactions between entities, and thus they are well suited for achieving this goal. However, logic programs struggle to deal with extensive Petri nets due to the limitations of high-level symbol manipulation. To address this challenge, we introduce a novel approach called Boolean Matrix Logic Programming (BMLP), utilising Boolean matrices as an alternative computation mechanism for Prolog to evaluate logic programs. Within this framework, we propose two novel BMLP algorithms for simulating a class of Petri nets known as elementary nets. This is done by transforming elementary nets into logically equivalent datalog programs. We demonstrate empirically that BMLP algorithms can evaluate these programs 40 times faster than tabled B-Prolog, SWI-Prolog, XSB-Prolog and Clingo. Our work enables the efficient simulation of elementary nets using Prolog, expanding the scope of analysis, learning and verification of complex systems with logic programming techniques.
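The core BMLP idea, evaluating a recursive datalog program as a Boolean matrix fixpoint rather than by symbolic resolution, can be sketched for the canonical transitive-closure program. This is a minimal illustration of the matrix-evaluation principle, not the paper's two algorithms; rows are packed into Python integers so one "matrix multiplication" step is a bitwise OR of rows.

```python
def bmlp_fixpoint(adjacency):
    """Evaluate  path(X,Y) :- edge(X,Y).
                 path(X,Y) :- edge(X,Z), path(Z,Y).
    by Boolean matrix fixpoint. Row i is an int bitset: bit j is set
    iff path(i, j) has been derived. Iterate until no row changes."""
    n = len(adjacency)
    rows = [0] * n
    for i, js in enumerate(adjacency):
        for j in js:
            rows[i] |= 1 << j          # edge(i, j) is a ground fact
    closure = rows[:]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            acc = closure[i]
            m, j = acc, 0
            while m:                   # OR in rows reachable one step on
                if m & 1:
                    acc |= closure[j]
                m >>= 1
                j += 1
            if acc != closure[i]:
                closure[i] = acc
                changed = True
    return closure
```

For the chain 0 → 1 → 2, `bmlp_fixpoint([[1], [2], []])` derives `path(0, 2)`: bit 2 is set in row 0 of the result. The bitset trick is what makes matrix evaluation attractive: each row update touches machine words rather than individual ground atoms.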
Scaling Relational Inference Using Proofs and Refutations
Mangal, Ravi (Georgia Institute of Technology) | Zhang, Xin (Georgia Institute of Technology) | Kamath, Aditya (Georgia Institute of Technology) | Nori, Aditya V. (Microsoft Research) | Naik, Mayur (Georgia Institute of Technology)
Many inference problems are naturally formulated using hard and soft constraints over relational domains: the desired solution must satisfy the hard constraints, while optimizing the objectives expressed by the soft constraints. Existing techniques for solving such constraints rely on efficiently grounding a sufficient subset of constraints that is tractable to solve. We present an eager-lazy grounding algorithm that eagerly exploits proofs and lazily refutes counterexamples. We show that our algorithm achieves significant speedup over existing approaches without sacrificing soundness for real-world applications from information retrieval and program analysis.
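The lazy half of the scheme, grounding a hard constraint only when the current solution violates it, can be shown in miniature. The sketch below is an illustrative counterexample-guided loop under simplifying assumptions (hard constraints are ground implications, the "solver" is trivial), not the eager-lazy algorithm of the paper.

```python
def lazy_ground(seed_facts, implications):
    """Hard constraints are implications i -> j over a relational domain.
    Instead of materialising every ground constraint up front, repeatedly
    scan for a violated constraint (a counterexample), ground it, and
    repair by adding the consequent. The chosen set grows monotonically,
    so the loop terminates; constraints never violated are never grounded."""
    chosen = set(seed_facts)
    grounded = []
    changed = True
    while changed:
        changed = False
        for i, j in implications:
            if i in chosen and j not in chosen:
                grounded.append((i, j))   # lazily ground this refutation
                chosen.add(j)
                changed = True
    return chosen, grounded
```

Starting from `{'a'}` with implications a → b, b → c, d → e, only the two constraints reachable from the seed are ever grounded; d → e stays unmaterialised. That pruning is what keeps the grounded subset tractable for the solver.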
Modeling the Evolution of Knowledge in Learning Systems
Sharma, Abhishek (Cycorp, Inc.) | Forbus, Kenneth D. (Northwestern University)
How do reasoning systems that learn evolve over time? What are the properties of different learning strategies? Characterizing the evolution of these systems is important for understanding their limitations and gaining insights into the interplay between learning and reasoning. We describe an inverse ablation model for studying how large knowledge-based systems evolve: create a small knowledge base by ablating a large KB, and simulate learning by incrementally re-adding facts, using different strategies to simulate types of learners. For each iteration, reasoning properties (including the number of questions answered and run time) are collected to explore how learning strategies and reasoning interact. We describe several experiments with the inverse ablation model, examining how two different learning strategies perform. Our results suggest that different concepts show different rates of growth, and that the density and distribution of facts that can be learned are important parameters for modulating the rate of learning.
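The inverse ablation procedure lends itself to a short simulation harness. The sketch below is a stand-in under stated assumptions: `answerable` and the strategy function are hypothetical placeholders for the reasoner and the simulated learner, and the KB is just a set of facts.

```python
import random

def inverse_ablation(kb_facts, queries, answerable, strategy, batch, seed=0):
    """Inverse ablation model (sketch): start from a fully ablated KB,
    re-add facts in batches chosen by `strategy`, and record how many
    queries become answerable after each iteration."""
    rng = random.Random(seed)
    remaining = list(kb_facts)
    current = set()
    history = []
    while remaining:
        for fact in strategy(remaining, batch, rng):
            remaining.remove(fact)
            current.add(fact)
        history.append(sum(1 for q in queries if answerable(current, q)))
    return history

def random_learner(remaining, batch, rng):
    """One simulated learner type: re-add facts in random order."""
    return rng.sample(remaining, min(batch, len(remaining)))
```

Plugging in other strategy functions (e.g. re-adding the densest concepts first) and comparing the resulting `history` curves mirrors the kind of strategy comparison the abstract describes.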
Graph-Based Reasoning and Reinforcement Learning for Improving Q/A Performance in Large Knowledge-Based Systems
Sharma, Abhishek (Northwestern University) | Forbus, Kenneth D. (Northwestern University)
Learning to plausibly reason with minimal user intervention could significantly improve knowledge acquisition. We describe how to integrate graph-based heuristic generalization, higher-order knowledge, and reinforcement learning to learn to produce plausible inferences with only small amounts of user training. Experiments on ResearchCyc KB contents show significant improvement in Q/A performance with high accuracy.
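One way to picture the reinforcement-learning component, learning which candidate inferences are plausible from sparse user feedback, is as a bandit over candidate inference patterns. This is an illustrative simplification with hypothetical names, not the paper's integration of graph-based generalization and higher-order knowledge.

```python
import random

def train_plausible_inference(candidates, feedback, episodes, eps=0.2, seed=0):
    """Epsilon-greedy bandit sketch: candidate inference patterns are
    arms; `feedback` is a stand-in for user judgments (+1 plausible,
    -1 implausible); value estimates are running means of rewards."""
    rng = random.Random(seed)
    value = {c: 0.0 for c in candidates}
    count = {c: 0 for c in candidates}
    for _ in range(episodes):
        if rng.random() < eps:
            c = rng.choice(candidates)          # explore
        else:
            c = max(candidates, key=lambda x: value[x])  # exploit
        r = feedback(c)
        count[c] += 1
        value[c] += (r - value[c]) / count[c]   # incremental mean update
    return value
```

After training, patterns with high estimated value are the ones the system would use to propose plausible inferences, which is why only a small amount of user feedback is needed to steer selection.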