Explaining Necessary Truths

Kardeş, Gülce, DeDeo, Simon

arXiv.org Artificial Intelligence

Knowing the truth is rarely enough -- we also seek out reasons why the fact is true. While much is known about how we explain contingent truths, we understand less about how we explain facts, such as those in mathematics, that are true as a matter of logical necessity. We present a framework, based in computational complexity, where explanations for deductive truths co-emerge with discoveries of simplifying steps during the search process. When such structures are missing, we revert, in turn, to error-based reasons, where a (corrected) mistake can serve as a fictitious, but explanatory, contingency-cause: not making the mistake serves as a reason why the truth takes the form it does. We simulate human subjects, using GPT-4o, presented with SAT puzzles of varying complexity and reasonableness, validating our theory and showing how its predictions can be tested in future human studies.


Perceptron Learning of SAT

Neural Information Processing Systems

Boolean satisfiability (SAT), a canonical NP-complete decision problem, is one of the most important problems in computer science. In practice, real-world SAT sentences are drawn from a distribution that may admit efficient algorithms for their solution. Such SAT instances are likely to have shared characteristics and substructures. This work approaches the exploration of a family of SAT solvers as a learning problem. In particular, we relate polynomial-time solvability of a SAT subset to a notion of margin between sentences mapped by a feature function into a Hilbert space.


DeciLS-PBO: an Effective Local Search Method for Pseudo-Boolean Optimization

Jiang, Luyu, Ouyang, Dantong, Zhang, Qi, Zhang, Liming

arXiv.org Artificial Intelligence

Local search is an effective method for solving large-scale combinatorial optimization problems, and it has made remarkable progress in recent years through several subtle mechanisms. In this paper, we identify two ways to improve local search algorithms for Pseudo-Boolean Optimization (PBO). First, some of these mechanisms, such as unit propagation, have previously been used only for MaxSAT but can be generalized to PBO as well. Second, existing local search algorithms mainly guide the search with a heuristic on variables, the so-called score; we attempt to gain more insight into the clause, which acts as a middleman building a bridge between the variables and the given formula. Hence, we first extend the unit-propagation-based decimation algorithm to the PBO problem, giving a further generalized definition of unit clause for PBO, and apply it to the existing solver LS-PBO to construct an initial assignment; we then introduce a new heuristic on clauses, dubbed care, which gives higher priority to clauses that are less often satisfied in the current iterations. Experiments on benchmarks from the most recent PB Competition, as well as on three real-world application benchmarks (minimum-width confidence band, wireless sensor network optimization, and seating arrangement problems), show that our algorithm DeciLS-PBO performs well compared to state-of-the-art algorithms.
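The unit propagation that the abstract generalizes to PBO can be illustrated with a minimal sketch over plain CNF (DIMACS-style integer literals). This is only an illustration of the general mechanism on ordinary clauses, not the authors' PBO extension or the LS-PBO code:

```python
def unit_propagate(clauses, assignment=None):
    """Repeatedly assign the single literal of each unit clause.

    clauses: list of clauses, each a list of nonzero ints
    (literal v means variable v is True, -v means it is False).
    Returns (assignment, simplified_clauses), or (None, None) on conflict.
    """
    assignment = dict(assignment or {})
    while True:
        # Simplify all clauses under the current partial assignment.
        simplified = []
        for clause in clauses:
            new_clause, satisfied = [], False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True   # clause already true
                        break
                else:
                    new_clause.append(lit)
            if satisfied:
                continue
            if not new_clause:             # empty clause: conflict
                return None, None
            simplified.append(new_clause)
        # Collect unit clauses; stop when there are none left.
        units = [c[0] for c in simplified if len(c) == 1]
        if not units:
            return assignment, simplified
        for lit in units:
            assignment[abs(lit)] = lit > 0
        clauses = simplified
```

For example, on (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3 ∨ x4) propagation forces x1 and x2 to True and leaves only the clause (x3 ∨ x4).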


A Curious New Result of Resolution Strategies in Negation-Limited Inverters Problem

Ando, Ruo, Takefuji, Yoshiyasu

arXiv.org Artificial Intelligence

The negation-limited inverters problem is generally known as a puzzle of constructing an inverter circuit from AND gates, OR gates, and only a few inverters. In this paper, we present a curious new result on the effectiveness of two powerful ATP (Automated Theorem Proving) strategies for tackling the negation-limited inverter problem. The two resolution strategies are UR (Unit Resulting) resolution and hyper-resolution. In our experiments, we consider two kinds of automated circuit construction: 3-input/output inverters and a 4-input/output BCD counter circuit; both circuits are constructed with a few limited inverters. Curiously, it turns out that UR resolution is drastically faster than hyper-resolution as measured by the size of the SOS (Set of Support). We also discuss the syntactic and semantic criteria that might cause the considerable difference in computational cost between UR resolution and hyper-resolution.


A New Gene Selection Algorithm using Fuzzy-Rough Set Theory for Tumor Classification

Farahbakhshian, Seyedeh Faezeh, Ahvanooey, Milad Taleby

arXiv.org Machine Learning

In statistics and machine learning, feature selection is the process of picking a subset of relevant attributes for use in a predictive model. Recently, rough set-based feature selection techniques, which employ feature dependency to perform the selection process, have drawn attention. In bioinformatics applications, the classification of tumors based on gene expression is used to determine proper treatment and the prognosis of the disease. Microarray gene expression data contain superfluous feature genes of high dimensionality together with a small number of training instances. Since exact supervised classification of gene expression instances in such high-dimensional problems is very complex, the selection of appropriate genes is a crucial task for tumor classification. In this study, we present a new technique for gene selection using a discernibility matrix of fuzzy-rough sets. The proposed technique takes into account the similarity of instances that have the same class label as well as those with different class labels to improve gene selection, whereas previous state-of-the-art approaches only address the similarity of instances with different class labels. To meet this requirement, we extend the Johnson reducer technique to the fuzzy case. Experimental results demonstrate that the technique is more efficient than state-of-the-art approaches.


Search versus Decision: The Opacity of Backbones and Backdoors Under a Weak Assumption

Hemaspaandra, Lane A., Narváez, David E.

arXiv.org Artificial Intelligence

Backdoors and backbones of Boolean formulas are hidden structural properties. A natural goal, already in part realized, is that solver algorithms seek to obtain substantially better performance by exploiting these structures. However, the present paper is not intended to improve the performance of SAT solvers, but rather is a cautionary paper. In particular, the theme of this paper is that there is a potential chasm between the existence of such structures in the Boolean formula and being able to effectively exploit them. This does not mean that these structures are not useful to solvers. It does mean that one must be very careful not to assume that it is computationally easy to go from the existence of a structure to being able to get one's hands on it and/or being able to exploit the structure. For example, in this paper we show that, under the assumption that P $\neq$ NP, there are easily recognizable families of Boolean formulas with strong backdoors that are easy to find, yet for which it is hard (in fact, NP-complete) to determine whether the formulas are satisfiable. We also show that, also under the assumption P $\neq$ NP, there are easily recognizable sets of Boolean formulas for which it is hard (in fact, NP-complete) to determine whether they have a large backbone.


Improved Results for Minimum Constraint Removal

Eiben, Eduard (TU Wien and University of Bergen) | Gemmell, Jonathan (DePaul University) | Kanj, Iyad (DePaul University) | Youngdahl, Andrew (DePaul University)

AAAI Conferences

Given a set of obstacles and two designated points in the plane, the Minimum Constraint Removal problem asks for a minimum number of obstacles that can be removed so that a collision-free path exists between the two designated points. It is a well-studied problem in both robotic motion planning and wireless computing that has been shown to be NP-hard in various settings. In this work, we extend the study of Minimum Constraint Removal. We start by presenting refined NP-hardness reductions for the two cases: (1) when all the obstacles are axes-parallel rectangles, and (2) when all the obstacles are line segments such that no three intersect at the same point. These results improve on existing results in the literature. As a byproduct of our NP-hardness reductions, we prove that, unless the Exponential-Time Hypothesis (ETH) fails, Minimum Constraint Removal cannot be solved in subexponential time $2^{o(n)}$, where $n$ is the number of obstacles in the instance. This shows that significant improvement on the brute-force $2^{O(n)}$-time algorithm is unlikely. We then present a subexponential-time algorithm for instances of Minimum Constraint Removal in which the number of obstacles that overlap at any point is constant; the algorithm runs in time $2^{O(\sqrt{N})}$, where $N$ is the number of the vertices in the auxiliary graph associated with the instance of the problem. We show that significant improvement on this algorithm is unlikely by showing that, unless ETH fails, Minimum Constraint Removal with bounded overlap number cannot be solved in time $2^{o(\sqrt{N})}$. We describe several exact algorithms and approximation algorithms that leverage heuristics and discuss their performance in an extensive empirical simulation.
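The brute-force $2^{O(n)}$ baseline the abstract mentions is easy to sketch on a discretized version of the problem: try removing obstacle subsets in increasing size and test reachability. The grid model and function name below are illustrative assumptions, not the paper's geometric setting:

```python
from collections import deque
from itertools import combinations

def min_constraint_removal(width, height, obstacles, start, goal):
    """Brute-force MCR on a grid.

    obstacles: list of sets of blocked (x, y) cells.
    Tries removing obstacle subsets in increasing size and checks
    4-connected reachability; O(2^n * grid) time overall.
    """
    def reachable(blocked):
        if start in blocked or goal in blocked:
            return False
        seen, queue = {start}, deque([start])
        while queue:
            x, y = queue.popleft()
            if (x, y) == goal:
                return True
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nx < width and 0 <= ny < height
                        and (nx, ny) not in blocked and (nx, ny) not in seen):
                    seen.add((nx, ny))
                    queue.append((nx, ny))
        return False

    for k in range(len(obstacles) + 1):
        for removed in combinations(range(len(obstacles)), k):
            blocked = set()
            for i, obs in enumerate(obstacles):
                if i not in removed:
                    blocked |= obs
            if reachable(blocked):
                return k          # smallest number of removals that works
    return None
```

On a 3x3 grid with a single obstacle blocking the middle column, the answer is 1; with no obstacles it is 0.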


Local Backbones

de Haan, Ronald, Kanj, Iyad, Szeider, Stefan

arXiv.org Artificial Intelligence

A backbone of a propositional CNF formula is a variable whose truth value is the same in every truth assignment that satisfies the formula. The notion of backbones for CNF formulas has been studied in various contexts. In this paper, we introduce local variants of backbones, and study the computational complexity of detecting them. In particular, we consider k-backbones, which are backbones for sub-formulas consisting of at most k clauses, and iterative k-backbones, which are backbones that result after repeated instantiations of k-backbones. We determine the parameterized complexity of deciding whether a variable is a k-backbone or an iterative k-backbone for various restricted formula classes, including Horn, definite Horn, and Krom. We also present some first empirical results regarding backbones for CNF-Satisfiability (SAT). The empirical results we obtain show that a large fraction of the backbones of structured SAT instances are local, in contrast to random instances, which appear to have few local backbones.
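The definition in the abstract's first sentence translates directly into a brute-force check: enumerate all satisfying assignments and keep the variables whose value never changes. A minimal sketch (only feasible for tiny formulas; the function name is an illustrative assumption):

```python
from itertools import product

def backbones(clauses, num_vars):
    """Return {var: forced_value} for every backbone variable of a CNF.

    clauses: list of clauses over variables 1..num_vars, with literal v
    meaning True and -v meaning False. Enumerates all 2^n assignments,
    so this is an illustration of the definition, not a practical tool.
    """
    def satisfies(assign):
        return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

    models = []
    for bits in product([False, True], repeat=num_vars):
        assign = dict(zip(range(1, num_vars + 1), bits))
        if satisfies(assign):
            models.append(assign)
    if not models:
        return {}   # unsatisfiable formula: no satisfying assignments
    # Keep variables whose value agrees across every model.
    return {v: models[0][v] for v in range(1, num_vars + 1)
            if all(m[v] == models[0][v] for m in models)}
```

For (x1) ∧ (¬x1 ∨ x2) ∧ (x2 ∨ x3), both x1 and x2 are backbones (forced to True) while x3 is free.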


Non-Restarting SAT Solvers with Simple Preprocessing Can Efficiently Simulate Resolution

Beame, Paul (University of Washington) | Sabharwal, Ashish (IBM Watson Research Center)

AAAI Conferences

Propositional satisfiability (SAT) solvers based on conflict directed clause learning (CDCL) implicitly produce resolution refutations of unsatisfiable formulas. The precise class of formulas for which they can produce polynomial size refutations has been the subject of several studies, with special focus on the clause learning aspect of these solvers. The results, however, assume the use of non-standard and non-asserting learning schemes, or rely on polynomially many restarts for simulating individual steps of a resolution refutation, or work with a theoretical model that significantly deviates from certain key aspects of all modern CDCL solvers such as learning only one asserting clause from each conflict and other techniques such as conflict guided backjumping and phase saving. We study non-restarting CDCL solvers that learn only one asserting clause per conflict and show that, with simple preprocessing that depends only on the number of variables of the input formula, such solvers can polynomially simulate resolution. We show, moreover, that this preprocessing allows one to convert any CDCL solver to one that is non-restarting.


Set Constraint Model and Automated Encoding into SAT: Application to the Social Golfer Problem

Lardeux, Frédéric, Monfroy, Eric, Crawford, Broderick, Soto, Ricardo

arXiv.org Artificial Intelligence

A CSP is defined by some variables (generally over finite domains) and constraints between these variables. Solving a CSP consists in finding assignments of the variables that satisfy the constraints. One of the main strengths of CSP is declarativity: variables can be of various types (finite domains, floating point numbers, intervals, sets, ...) and so can constraints (linear arithmetic constraints, set constraints, nonlinear constraints, Boolean constraints, symbolic constraints, ...). Moreover, the so-called global constraints improve not only solving efficiency but also declarativity: they provide new constructs and relations such as alldifferent (to enforce that all the variables of a list take different values) and cumulative (to schedule tasks sharing resources). On the other hand, the propositional satisfiability problem (SAT) [12] is restricted (in terms of declarativity) to Boolean variables and propositional formulae. However, SAT solvers can now handle huge SAT instances (millions of variables).
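The kind of automated encoding the title refers to can be sketched for the alldifferent constraint mentioned above, using the standard direct (one-hot) encoding into CNF. This is a generic textbook encoding offered as an illustration, not the paper's encoding of the Social Golfer Problem:

```python
from itertools import combinations

def encode_alldifferent(num_vars, domain_size):
    """Direct (one-hot) SAT encoding of alldifferent over finite domains.

    Boolean variable b(x, v) = x*domain_size + v + 1 means "CSP variable
    x takes value v". Clauses enforce: each x takes at least one value,
    at most one value, and no two variables share the same value.
    """
    def b(x, v):
        return x * domain_size + v + 1

    clauses = []
    for x in range(num_vars):
        clauses.append([b(x, v) for v in range(domain_size)])   # at least one
        for v, w in combinations(range(domain_size), 2):        # at most one
            clauses.append([-b(x, v), -b(x, w)])
    for v in range(domain_size):                                # alldifferent
        for x, y in combinations(range(num_vars), 2):
            clauses.append([-b(x, v), -b(y, v)])
    return clauses
```

For two variables over a two-value domain this yields six clauses; the resulting CNF can be handed to any off-the-shelf SAT solver.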