tableau




Extending Defeasibility for Propositional Standpoint Logics

Leisegang, Nicholas, Meyer, Thomas, Varzinczak, Ivan

arXiv.org Artificial Intelligence

In this paper, we introduce a new defeasible version of propositional standpoint logic by integrating Kraus et al.'s defeasible conditionals, Britz and Varzinczak's notions of defeasible necessity and distinct possibility, and Leisegang et al.'s approach to defeasibility into the standpoint logics of Gómez Álvarez and Rudolph. The resulting logical framework allows defeasibility to be expressed at the level of implications, standpoint modal operators, and standpoint-sharpening statements. We provide a preferential semantics for this extended language and propose a tableaux calculus, which is shown to be sound and complete with respect to preferential entailment. We also establish that the tableaux procedure runs in PSpace.
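As a minimal illustration of the preferential semantics underlying this framework, the sketch below implements KLM-style defeasible entailment over ranked valuations; the atoms, the ranking, and the function names are illustrative assumptions of ours, not taken from the paper.

```python
from itertools import product

# A toy ranked interpretation over two atoms: the defeasible conditional
# A |~ B holds iff every minimally ranked world satisfying A also satisfies B.

ATOMS = ("bird", "flies")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=2)]

def rank(world):
    # Typical worlds (rank 0): non-birds, or birds that fly.
    # Exceptional worlds (rank 1): birds that do not fly.
    return 1 if world["bird"] and not world["flies"] else 0

def defeasibly_entails(antecedent, consequent):
    """A |~ B: all minimal-rank worlds satisfying A also satisfy B."""
    sat = [w for w in WORLDS if antecedent(w)]
    if not sat:
        return True  # vacuously true
    min_rank = min(rank(w) for w in sat)
    return all(consequent(w) for w in sat if rank(w) == min_rank)

# Birds typically fly, even though "bird implies flies" is not classically valid:
print(defeasibly_entails(lambda w: w["bird"], lambda w: w["flies"]))  # True
print(all(w["flies"] for w in WORLDS if w["bird"]))                   # False
```

The ranking function is where defeasibility lives: moving a world between ranks changes which conditionals hold without touching classical satisfaction.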


How hard is learning to cut? Trade-offs and sample complexity

Khalife, Sammy, Lodi, Andrea

arXiv.org Artificial Intelligence

In recent years, branch-and-cut algorithms have been the target of data-driven approaches designed to enhance decision making in different phases of the algorithm, such as branching or the choice of cutting planes (cuts). In particular, for cutting plane selection, two score functions have been proposed in the literature to evaluate the quality of a cut: branch-and-cut tree size and gap closed. In this paper, we present new sample complexity lower bounds, valid for both scores. We show that for a wide family of classes $\mathcal{F}$ of functions mapping an instance to a cut, learning over an unknown distribution of instances to minimize those scores requires at least (up to multiplicative constants) as many samples as learning, from the same function class $\mathcal{F}$, any generic target function (using square loss). Our results also extend to the case of learning from a restricted set of cuts, namely those from the Simplex tableau. To the best of our knowledge, these constitute the first lower bounds for the learning-to-cut framework. We compare our bounds to known upper bounds in the case of neural networks and show that they are nearly tight. We illustrate our results with a graph neural network cut selector evaluated on set covering and facility location integer programming models, and we empirically show that the gap closed score is an effective proxy for minimizing the branch-and-cut tree size. Although the gap closed score has been used extensively in the integer programming literature, this is the first principled analysis discussing both scores at the same time, theoretically and computationally.
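The gap closed score mentioned above has a standard closed form: the fraction of the integrality gap, between the LP relaxation bound and the integer optimum, that adding a cut recovers. A minimal sketch (the function name is ours; a minimization problem is assumed):

```python
def gap_closed(z_lp, z_cut, z_ip):
    """Fraction of the integrality gap closed by a cut.

    z_lp:  LP relaxation bound before the cut
    z_cut: LP bound after adding the cut
    z_ip:  optimal integer value (minimization: z_lp <= z_cut <= z_ip)
    """
    if z_ip == z_lp:
        return 1.0  # no gap to close
    return (z_cut - z_lp) / (z_ip - z_lp)

# A cut lifting the LP bound from 10.0 to 12.0, with integer optimum 14.0,
# closes half of the integrality gap:
print(gap_closed(10.0, 12.0, 14.0))  # 0.5
```

Unlike the tree-size score, this quantity is available without running the full branch-and-cut tree, which is why it is attractive as a training proxy.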


CNOT-Optimal Clifford Synthesis as SAT

Shaik, Irfansha, van de Pol, Jaco

arXiv.org Artificial Intelligence

Clifford circuit optimization is an important step in the quantum compilation pipeline. Major compilers employ heuristic approaches: while fast, their results are often suboptimal. Minimizing noisy gates, such as 2-qubit CNOT gates, is crucial for practical computing. Exact approaches have been proposed to fill the gap left by heuristics. Among these are SAT-based approaches that optimize gate count or depth, but they suffer from scalability issues; further, they do not guarantee optimality on more important metrics such as CNOT count or CNOT depth. A recent work proposed an exhaustive search, restricted to Clifford circuits in a certain normal form, that guarantees CNOT-count optimality, but an exhaustive approach cannot scale beyond 6 qubits. In this paper, we incorporate search restricted to Clifford normal forms into a SAT encoding to guarantee CNOT-count optimality. By allowing parallel plans, we propose a second SAT encoding that optimizes CNOT depth. Taking advantage of the flexibility of SAT-based approaches, we also handle the connectivity restrictions of hardware platforms and allow for qubit relabeling. We have implemented the above encodings and their variations in our open-source tool Q-Synth. In experiments, our encodings significantly outperform existing SAT approaches on random Clifford circuits. We consider practical VQE and Feynman benchmarks to compare with the TKET and Qiskit compilers. Under all-to-all connectivity, we observe reductions of up to 32.1% in CNOT count and 48.1% in CNOT depth, and overall better results than TKET on both metrics. We also experiment with the connectivity restrictions of major quantum platforms; compared to Qiskit, we observe further reductions of up to 30.3% in CNOT count and 35.9% in CNOT depth.
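The two metrics the encodings target can be computed for a given circuit in a single pass. The sketch below (gate representation and function name are our assumptions) counts CNOTs and computes CNOT depth by as-soon-as-possible layering, treating single-qubit gates as free:

```python
def cnot_count_and_depth(circuit, n_qubits):
    """circuit: list of gates such as ("cx", control, target) or ("h", q).

    CNOT depth = number of layers of non-overlapping CNOTs under
    as-soon-as-possible scheduling; single-qubit Cliffords cost nothing.
    """
    count = 0
    busy_until = [0] * n_qubits  # last CNOT layer touching each qubit
    depth = 0
    for gate in circuit:
        if gate[0] == "cx":
            count += 1
            _, c, t = gate
            layer = max(busy_until[c], busy_until[t]) + 1
            busy_until[c] = busy_until[t] = layer
            depth = max(depth, layer)
    return count, depth

# Two CNOTs on disjoint qubit pairs share a layer; a third CNOT that
# overlaps both pairs needs a new one:
print(cnot_count_and_depth([("cx", 0, 1), ("cx", 2, 3), ("cx", 1, 2)], 4))  # (3, 2)
```

This is only the metric, of course; the paper's contribution is encoding the search for a circuit minimizing it as a SAT instance.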


Pauli Network Circuit Synthesis with Reinforcement Learning

Dubal, Ayushi, Kremer, David, Martiel, Simon, Villar, Victor, Wang, Derek, Cruz-Benito, Juan

arXiv.org Artificial Intelligence

We introduce a Reinforcement Learning (RL)-based method for re-synthesis of quantum circuits containing arbitrary Pauli rotations alongside Clifford operations. By collapsing each sub-block to a compact representation and then synthesizing it step by step through a learned heuristic, we obtain circuits that are both shorter and compliant with hardware connectivity constraints. We find that the method is fast enough and good enough to work as an optimization procedure: in direct comparisons on 6-qubit random Pauli networks against state-of-the-art heuristic methods, our RL approach yields more than a 2x reduction in two-qubit gate count, while executing in under 10 milliseconds per circuit. We further integrate the method into a collect-and-re-synthesize pipeline, applied as a Qiskit transpiler pass, where we observe average improvements of 20% in two-qubit gate count and depth, reaching up to 60% for many instances, across the Benchpress benchmark. These results highlight the potential of RL-driven synthesis to significantly improve circuit quality in realistic, large-scale quantum transpilation workloads.
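As a toy baseline for the kind of two-qubit gate reduction that re-synthesis targets, the sketch below cancels adjacent self-inverse CNOT pairs. This is a trivial peephole pass of our own, not the paper's RL method, which instead rebuilds whole sub-blocks from a compact representation:

```python
def cancel_adjacent_cnots(circuit):
    """Remove adjacent identical CNOT pairs (CNOT is self-inverse).

    circuit: list of ("cx", control, target) tuples. Cancellations can
    chain: removing an inner pair may expose a new adjacent pair.
    """
    out = []
    for gate in circuit:
        if out and out[-1] == gate and gate[0] == "cx":
            out.pop()  # CX; CX = identity
        else:
            out.append(gate)
    return out

before = [("cx", 0, 1), ("cx", 0, 1), ("cx", 1, 2)]
after = cancel_adjacent_cnots(before)
print(len(before), len(after))  # 3 1
```

A peephole pass like this only sees literally adjacent pairs; re-synthesis wins precisely because it is free to reorder and rebuild the whole block.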


ChatGPT in Data Visualization Education: A Student Perspective

Kim, Nam Wook, Ko, Hyung-Kwon, Myers, Grace, Bach, Benjamin

arXiv.org Artificial Intelligence

Unlike traditional educational chatbots that rely on pre-programmed responses, large-language-model-driven chatbots such as ChatGPT demonstrate remarkable versatility and have the potential to serve as a dynamic resource for addressing student needs, from understanding advanced concepts to solving complex problems. This work explores the impact of such technology on student learning in an interdisciplinary, project-oriented data visualization course. Throughout the semester, students engaged with ChatGPT across four distinct projects, including designing data visualizations and implementing them with a variety of tools, including Tableau, D3, and Vega-Lite. We collected conversation logs and reflection surveys from the students after each assignment. In addition, we conducted interviews with selected students to gain deeper insights into their overall experiences with ChatGPT. Our analysis examined the advantages and barriers of using ChatGPT, students' querying behavior, the types of assistance sought, and its impact on assignment outcomes and engagement. Based on the findings, we discuss design considerations for an educational solution that goes beyond the basic interface of ChatGPT, specifically tailored for data visualization education.


Pondering the Ugly Underbelly, and Whether Images are Real

Communications of the ACM

I tell my computer science students that we go through proofs not to show WHAT is true, but WHY it is true. During an interesting consultation with a graduate student who wanted to know whether I proved the Cook-Levin Theorem when I taught that result in the Foundations of Computing class, I said that I had, indeed, and that every teacher should. As is the human predilection, I became ever more fully convinced that I was right as I expounded on it. The textbook in use is the third edition of Michael Sipser's book [Sipser]. In Section 7.4, we see Cook-Levin as Theorem 7.37: SAT is NP-complete.
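The theorem is easy to ground with a toy: SAT's membership in NP rests on the fact that a candidate assignment is a certificate checkable in polynomial time, while the naive search it spares us is exponential. A minimal sketch (the CNF encoding conventions here are ours):

```python
from itertools import product

# CNF SAT: a formula is a list of clauses; a clause is a list of literals,
# where literal k > 0 means variable k and k < 0 means its negation.

def satisfies(assignment, formula):
    """Polynomial-time certificate check: the NP-membership half of SAT."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula, n_vars):
    """Exponential search over all assignments (what a certificate avoids)."""
    for bits in product([False, True], repeat=n_vars):
        a = dict(enumerate(bits, start=1))
        if satisfies(a, formula):
            return a
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x1) forces x1 = x2 = True:
phi = [[1, 2], [-1, 2], [-2, 1]]
print(brute_force_sat(phi, 2))  # {1: True, 2: True}
```

The WHY of Cook-Levin is the other direction: any polynomial-time verifier's computation can itself be encoded as such a clause list.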


Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases

Zese, Riccardo, Lamma, Evelina, Riguzzi, Fabrizio

arXiv.org Artificial Intelligence

The necessity to manage inconsistency in Description Logics Knowledge Bases (KBs) has come to the fore with the increasing importance gained by the Semantic Web, where information comes from different sources that constantly change their content and may contain contradictory descriptions when considered either alone or together. Classical reasoning algorithms do not handle inconsistent KBs, forcing the KB to be debugged in order to remove the inconsistency. In this paper, we exploit an existing probabilistic semantics called DISPONTE to overcome this problem and to answer queries even over inconsistent KBs. We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal. Moreover, we formally compare the presented approach with the repair semantics, one of the most established semantics for DL reasoning tasks.
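Under DISPONTE, each probabilistic axiom is included in a world independently with its annotated probability, and the probability of a query is the total probability of the worlds that entail it. A brute-force sketch (the `entails` callback stands in for a DL reasoner such as TRILL or BUNDLE; the toy KB and the function name are ours):

```python
from itertools import product

def query_probability(prob_axioms, entails):
    """DISPONTE-style query probability by exhaustive world enumeration.

    prob_axioms: list of (axiom, p) pairs, each independently included
    with probability p. entails(world) decides whether the query follows
    from the included axioms.
    """
    total = 0.0
    for choices in product([True, False], repeat=len(prob_axioms)):
        world = {ax for (ax, _), keep in zip(prob_axioms, choices) if keep}
        p_world = 1.0
        for (_, p), keep in zip(prob_axioms, choices):
            p_world *= p if keep else (1 - p)
        if entails(world):
            total += p_world
    return total

# Toy KB: the query holds iff axiom a1 (p=0.4) or axiom a2 (p=0.5) is present,
# so P(query) = 1 - 0.6 * 0.5 = 0.7:
p = query_probability([("a1", 0.4), ("a2", 0.5)],
                      lambda world: "a1" in world or "a2" in world)
print(round(p, 2))  # 0.7
```

Enumeration is exponential in the number of probabilistic axioms; practical reasoners compile the relevant explanations into a compact form instead.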


On simple expectations and observations of intelligent agents: A complexity study

Chakraborty, Sourav, Ghosh, Avijeet, Ghosh, Sujata, Schwarzentruber, François

arXiv.org Artificial Intelligence

Reasoning about knowledge among multiple agents plays an important role in studying real-world problems in a distributed setting, e.g., in communicating processes, protocols, strategies and games. Multi-agent epistemic logic (EL) [1] and its dynamic extensions, popularly known as dynamic epistemic logics (DEL) [2], are well-known logical systems for specifying and reasoning about such dynamic interactions of knowledge. Traditionally, agents' knowledge is about facts, and EL/DEL mostly deals with this phenomenon of 'knowing that'. More recently, the notions of 'knowing whether', 'knowing why' and 'knowing how' have also been investigated from a formal viewpoint [3]. These agents also have expectations about the world around them, and they reason based on what they observe; such observations may or may not match their expectations about their surroundings.
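The basic 'knowing that' modality referred to above can be sketched directly from its Kripke semantics: an agent knows a fact at a world iff the fact holds at every world the agent considers possible there. The model below is an illustrative toy of our own, not from the paper:

```python
# Agent a knows phi at world w iff phi holds in every world
# a's accessibility relation reaches from w.

def knows(agent, phi, world, access, valuation):
    """access[agent][world] -> set of accessible worlds;
    valuation(phi, world) -> bool for an atomic proposition phi."""
    return all(valuation(phi, v) for v in access[agent][world])

ACCESS = {"alice": {"w1": {"w1", "w2"}}, "bob": {"w1": {"w1"}}}
FACTS = {("p", "w1"): True, ("p", "w2"): False}
val = lambda phi, w: FACTS[(phi, w)]

# p is true at w1, but Alice cannot distinguish w1 from w2, where p fails:
print(knows("alice", "p", "w1", ACCESS, val))  # False
print(knows("bob", "p", "w1", ACCESS, val))    # True
```

Dynamic extensions (DEL) then model announcements and observations as operations that prune or refine these accessibility relations.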