Siber, Julian
Stream-Based Monitoring of Algorithmic Fairness
Baumeister, Jan, Finkbeiner, Bernd, Scheerer, Frederik, Siber, Julian, Wagenpfeil, Tobias
Automatic decision and prediction systems are increasingly deployed in applications where they significantly impact the livelihood of people, such as for predicting the creditworthiness of loan applicants or the recidivism risk of defendants. These applications have given rise to a new class of algorithmic-fairness specifications that require the systems to decide and predict without bias against social groups. Verifying these specifications statically is often out of reach for realistic systems, since the systems may, e.g., employ complex learning components and reason over a large input space. In this paper, we therefore propose stream-based monitoring as a solution for verifying the algorithmic fairness of decision and prediction systems at runtime. Concretely, we present a principled way to formalize algorithmic fairness over temporal data streams in the specification language RTLola and demonstrate the efficacy of this approach on a number of benchmarks. Besides synthetic scenarios that particularly highlight its efficiency on streams with scaling amounts of data, we notably evaluate the monitor on real-world data from the recidivism prediction tool COMPAS.
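The core idea of the paper can be illustrated with a minimal sketch in plain Python (not the paper's RTLola specification): a monitor that consumes a stream of decision events and tracks the demographic-parity gap between two groups online. The class name `FairnessMonitor` and the threshold of 0.2 are hypothetical choices for illustration.

```python
class FairnessMonitor:
    """Tracks per-group acceptance rates over a stream of decision events."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.seen = {0: 0, 1: 0}       # decisions observed per group
        self.accepted = {0: 0, 1: 0}   # positive decisions per group

    def observe(self, group, accepted):
        """Process one event; return True if the demographic-parity gap
        currently exceeds the threshold (a potential fairness violation)."""
        self.seen[group] += 1
        self.accepted[group] += int(accepted)
        if self.seen[0] == 0 or self.seen[1] == 0:
            return False               # no verdict before both groups appear
        rate0 = self.accepted[0] / self.seen[0]
        rate1 = self.accepted[1] / self.seen[1]
        return abs(rate0 - rate1) > self.threshold


monitor = FairnessMonitor()
events = [(0, True), (1, False), (0, True), (1, False), (1, True)]
verdicts = [monitor.observe(g, a) for g, a in events]
# verdicts → [False, True, True, True, True]
```

A stream-based specification language like RTLola expresses such running counts and rates declaratively as output streams over the input streams, which is what makes the monitoring efficient as the amount of data scales.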
NeuRes: Learning Proofs of Propositional Satisfiability
Ghanem, Mohamed, Schmitt, Frederik, Siber, Julian, Finkbeiner, Bernd
We introduce NeuRes, a neuro-symbolic proof-based SAT solver. Unlike other neural SAT solving methods, NeuRes is capable of proving unsatisfiability as opposed to merely predicting it. By design, NeuRes operates in a certificate-driven fashion, employing propositional resolution to prove unsatisfiability and to accelerate the search for satisfying truth assignments for unsat and sat formulas, respectively. To realize this, we propose a novel architecture that adapts elements from Graph Neural Networks and Pointer Networks to autoregressively select pairs of nodes from a dynamic graph structure, which is essential to the generation of resolution proofs. Our model is trained and evaluated on a dataset of teacher proofs and truth assignments that we compiled with the same random formula distribution used by NeuroSAT. In our experiments, we show that NeuRes solves more test formulas than NeuroSAT by a wide margin across different distributions while being much more data-efficient. Furthermore, we show that NeuRes is capable of significantly shortening teacher proofs. We use this feature to devise a bootstrapped training procedure that reduces a dataset of proofs generated by an advanced solver by ~23% after training on it with no extra guidance.
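The proof system underlying NeuRes is propositional resolution; the sketch below shows the rule itself and a tiny refutation, with clauses encoded as frozensets of integer literals (negation as sign flip). This illustrates the certificates NeuRes produces, not the paper's neural architecture.

```python
def resolve(c1, c2, pivot):
    """Resolve two clauses on `pivot`; requires pivot in c1 and -pivot in c2."""
    assert pivot in c1 and -pivot in c2
    return (c1 - {pivot}) | (c2 - {-pivot})


# A resolution refutation of the unsatisfiable CNF
# (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2) ∧ (¬x1 ∨ ¬x2):
c1, c2 = frozenset({1, 2}), frozenset({-1, 2})
c3, c4 = frozenset({1, -2}), frozenset({-1, -2})
d1 = resolve(c1, c2, 1)       # yields {x2}
d2 = resolve(c3, c4, 1)       # yields {¬x2}
empty = resolve(d1, d2, 2)    # empty clause: unsatisfiability is proven
```

Generating such a proof amounts to repeatedly picking a pair of clauses to resolve, which is why the paper frames it as autoregressive selection of node pairs from a dynamic graph.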
Counterfactuals Modulo Temporal Logics
Finkbeiner, Bernd, Siber, Julian
Evaluating counterfactual statements is a fundamental problem for many approaches to causal reasoning [40]. Such reasoning can for instance be used to explain erroneous system behavior with a counterfactual statement such as 'If the input i at the first position of the observed computation π had not been enabled, then the system would not have reached an error e.', which can be formalized using the counterfactual operator □→ and the temporal operator F: π ⊨ (¬i) □→ (¬F e). Since the foundational work by Lewis [38] on the formal semantics of counterfactual conditionals, many applications for counterfactuals [28, 5, 34, 46, 3, 15] and some theoretical results on the decidability of the original theory [37] and related notions [20, 2] have been discovered. Still, certain domains have proven elusive for a long time, for instance, theories involving higher-order reasoning and an infinite number of variables. In this paper, we consider a domain that combines both of these aspects: temporal reasoning over infinite sequences. In particular, we consider counterfactual conditionals that relate two properties expressed in temporal logics, such as the temporal property F e from the introductory example. Temporal logics are used ubiquitously as high-level specifications for verification [21, 4] and synthesis [22, 41], and recently have also found use in specifying reinforcement learning tasks [32, 39]. Our work lifts the language of counterfactual reasoning to similar high-level expressions.
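The temporal property F e ("eventually e") from the example can be checked on an ultimately periodic computation, represented as a finite prefix plus a loop that repeats forever. The sketch below is a hedged illustration of this kind of property evaluation; the lasso representation and the helper `eventually` are ours, not the paper's.

```python
def eventually(prop, prefix, loop):
    """Does `prop` hold at some position of the infinite word prefix · loop^ω?
    For an ultimately periodic word it suffices to check prefix and one loop."""
    return any(prop(s) for s in prefix) or any(prop(s) for s in loop)


# States labeled with the atomic propositions that hold there.
prefix = [{"i"}, set()]   # input i is enabled at the first position
loop = [{"e"}]            # the error state repeats forever

holds_F_e = eventually(lambda s: "e" in s, prefix, loop)   # F e holds on π
```

The counterfactual conditional then relates two such temporal properties, asking whether ¬F e would hold on the computations closest to π that satisfy ¬i.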