RLIE: Rule Generation with Logistic Regression, Iterative Refinement, and Evaluation for Large Language Models

Yang Yang, Hua Xu, Zhangyi Hu, Yutao Yue

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) can now propose rules in natural language, overcoming the constraints of the predefined predicate space inherent in traditional rule learning. However, existing LLM-based methods often overlook the combined effects of rules, and the potential of coupling LLMs with probabilistic rule learning for robust inference remains underexplored. To address this gap, we introduce RLIE, a unified framework that integrates LLMs with probabilistic modeling to learn a set of probabilistic rules. RLIE comprises four stages: (1) Rule generation, where an LLM proposes and filters candidate rules; (2) Logistic regression, which learns probabilistic weights over the rules for global selection and calibration; (3) Iterative refinement, which continuously optimizes the rule set based on prediction errors; and (4) Evaluation, which compares the weighted rule set used as a direct classifier against various methods of injecting the rules into an LLM. The generated rules are then evaluated with different inference strategies on multiple real-world datasets. Applying the rules directly with their learned weights yields superior performance, whereas prompting LLMs with the rules, weights, and the logistic model's outputs surprisingly degrades performance. This result aligns with the observation that LLMs excel at semantic generation and interpretation but are less reliable at fine-grained, controlled probabilistic integration. Our work investigates the potential and limitations of LLMs for inductive reasoning tasks, proposing a unified framework that integrates LLMs with classical probabilistic rule combination methods and paving the way for more reliable neuro-symbolic reasoning systems.
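To make stage (2) concrete, the following is a minimal illustrative sketch (not the paper's implementation): each candidate rule is treated as a boolean predicate over an input, the rule firings form a binary feature vector, and plain logistic regression learns one weight per rule so the weighted combination acts as a direct classifier. The toy "spam" rules and dataset below are invented for illustration only.

```python
import math

# Hypothetical candidate rules an LLM might propose for a toy spam task
# (invented for illustration; real rules would come from stage 1).
rules = [
    lambda x: "free" in x,      # r1: message mentions "free"
    lambda x: "meeting" in x,   # r2: message mentions "meeting"
    lambda x: x.endswith("!"),  # r3: message ends with "!"
]

def featurize(text):
    """Binary feature vector: which rules fire on this input."""
    return [1.0 if r(text) else 0.0 for r in rules]

# Tiny synthetic dataset of (text, label) pairs, label 1 = spam.
data = [
    ("free money!", 1), ("win free prize!", 1),
    ("team meeting at 3", 0), ("meeting notes attached", 0),
    ("free trial ends!", 1), ("meeting rescheduled", 0),
]

# Batch gradient descent on the logistic loss: one weight per rule plus a bias.
w, b, lr = [0.0] * len(rules), 0.0, 0.5
for _ in range(500):
    gw, gb = [0.0] * len(rules), 0.0
    for text, y in data:
        f = featurize(text)
        z = sum(wi * fi for wi, fi in zip(w, f)) + b
        err = 1.0 / (1.0 + math.exp(-z)) - y  # predicted prob minus label
        for i, fi in enumerate(f):
            gw[i] += err * fi
        gb += err
    w = [wi - lr * gi / len(data) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(data)

def predict(text):
    """Weighted rule vote used directly as a classifier."""
    z = sum(wi * fi for wi, fi in zip(w, featurize(text))) + b
    return 1 if z > 0 else 0

print([predict(t) for t, _ in data])
```

After training, the signs and magnitudes of `w` give the global selection and calibration the framework describes: rules that help prediction receive large positive weights, misleading or redundant ones are down-weighted, and stage (3) would then revise the rule set wherever `predict` disagrees with the labels.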
In data-driven applications and scientific discovery, the goal is not merely to predict outcomes, but to construct a set of verifiable, reusable, and composable theories (Zhou et al., 2024; Yang et al., 2024a; Minh et al., 2022). These theories can enable explainable, auditable decisions while driving the discovery of new knowledge and underlying structures (Yang et al., 2023; 2024b). They can be expressed as formal, structured statements (Cohen et al., 1995; Cropper & Morel, 2021) or as natural language hypotheses (Zhou et al., 2024), and they share a common characteristic: they are declarative, testable, and self-contained discriminative patterns that yield predictions verifiable by external evidence. In this paper, we do not distinguish between the terms "rule" and "hypothesis", and use "rule" throughout for consistency.