
Collaborating Authors: Dhami, Devendra Singh


Systems with Switching Causal Relations: A Meta-Causal Perspective

arXiv.org Machine Learning

Most work on causality in machine learning assumes that causal relationships are driven by a constant underlying process. However, the flexibility of agents' actions or tipping points in the environmental process can change the qualitative dynamics of the system. As a result, new causal relationships may emerge, while existing ones change or disappear, resulting in an altered causal graph. To analyze these qualitative changes to the causal graph, we propose the concept of meta-causal states, which groups classical causal models into clusters based on equivalent qualitative behavior and consolidates specific mechanism parameterizations. We demonstrate how meta-causal states can be inferred from observed agent behavior, and discuss potential methods for disentangling these states from unlabeled data. Finally, we apply our analysis to a dynamical system, showing that meta-causal states can also emerge from the inherent system dynamics, and thus constitute more than a context-dependent framework in which mechanisms emerge only as a result of external factors.
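To make the grouping idea concrete, here is a minimal sketch, not the paper's algorithm: many parameterizations of a simple linear mechanism are consolidated into a few meta-causal states according to their qualitative behavior (here, just the sign of the causal effect). All names and thresholds are illustrative assumptions.

```python
# Minimal sketch (illustrative only): grouping parameterizations of a linear
# SCM X -> Y into "meta-causal states" by the sign of the causal effect.
import numpy as np

def qualitative_signature(effect, eps=1e-3):
    """Map a mechanism parameter to a qualitative label."""
    if effect > eps:
        return "positive"
    if effect < -eps:
        return "negative"
    return "absent"

# Many concrete parameterizations of the mechanism Y := a * X + noise ...
parameterizations = np.random.uniform(-2.0, 2.0, size=20)

# ... are consolidated into a handful of meta-causal states.
meta_causal_states = {}
for a in parameterizations:
    meta_causal_states.setdefault(qualitative_signature(a), []).append(round(a, 2))

for state, members in meta_causal_states.items():
    print(f"meta-causal state '{state}': {len(members)} parameterizations")
```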


BlendRL: A Framework for Merging Symbolic and Neural Policy Learning

arXiv.org Artificial Intelligence

Humans can leverage both symbolic reasoning and intuitive reactions. In contrast, reinforcement learning policies are typically encoded in either opaque systems like neural networks or symbolic systems that rely on predefined symbols and rules. This disjointed approach severely limits the agents' capabilities, as they often lack either the flexible low-level reactions characteristic of neural agents or the interpretable reasoning of symbolic agents. To overcome this challenge, we introduce BlendRL, a neuro-symbolic RL framework that harmoniously integrates both paradigms within RL agents that use mixtures of logic and neural policies. We empirically demonstrate that BlendRL agents outperform both neural and symbolic baselines in standard Atari environments, and showcase their robustness to environmental changes. Additionally, we analyze the interaction between neural and symbolic policies, illustrating how their hybrid use helps agents overcome each other's limitations.
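The following is a minimal sketch of the mixing idea only, not BlendRL's actual implementation: an agent whose action distribution is a convex mixture of a neural policy and a symbolic, rule-based policy. The state fields, the rule, and the mixing weight are invented for the example.

```python
# Toy mixture of a neural and a symbolic policy (illustrative assumptions).
import numpy as np

ACTIONS = ["noop", "left", "right"]

def neural_policy(state):
    # Stand-in for a neural network: softmax over arbitrary scores.
    scores = np.array([0.1, state["enemy_dx"], -state["enemy_dx"]])
    e = np.exp(scores - scores.max())
    return e / e.sum()

def symbolic_policy(state):
    # Stand-in for a logic policy: "if the enemy is to the right, move left".
    if state["enemy_dx"] > 0:
        return np.array([0.0, 1.0, 0.0])
    return np.array([0.0, 0.0, 1.0])

def blended_policy(state, w=0.5):
    # Convex mixture of the two action distributions.
    return w * neural_policy(state) + (1.0 - w) * symbolic_policy(state)

state = {"enemy_dx": 0.8}
print(dict(zip(ACTIONS, blended_policy(state).round(3))))
```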


Towards Probabilistic Clearance, Explanation and Optimization

arXiv.org Artificial Intelligence

Employing Unmanned Aircraft Systems (UAS) beyond visual line of sight (BVLOS) is an enticing and challenging task. While UAS have the potential to significantly enhance today's logistics and emergency response capabilities, unmanned flying objects above the heads of unprotected pedestrians induce similarly significant safety risks. In this work, we make strides towards improved safety and legal compliance in applying UAS in two ways. First, we demonstrate navigation within the Probabilistic Mission Design (ProMis) framework. To this end, our approach translates Probabilistic Mission Landscapes (PML) into a navigation graph and derives a cost from the probability of complying with all underlying constraints. Second, we introduce the clearance, explanation, and optimization (CEO) cycle on top of ProMis by leveraging the declaratively encoded domain knowledge, legal requirements, and safety assertions to guide the mission design process. Based on inaccurate, crowd-sourced map data and a synthetic scenario, we illustrate the application and utility of our methods in UAS navigation.
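As a rough illustration of the first step, the sketch below turns a grid of compliance probabilities (a stand-in for a PML) into a navigation problem by assigning each cell the cost -log(p) and searching with Dijkstra; minimizing the summed cost then maximizes the product of compliance probabilities along the path. The grid values and the exact cost definition are assumptions, not ProMis's implementation.

```python
# Toy translation of a probability grid into a least-risk path (assumptions only).
import heapq
import math

pml = [  # probability of complying with all constraints, per grid cell (made up)
    [0.95, 0.90, 0.20, 0.90],
    [0.90, 0.85, 0.10, 0.92],
    [0.88, 0.80, 0.75, 0.95],
]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(pml) and 0 <= nc < len(pml[0]):
            yield nr, nc

def safest_path(start, goal):
    # Dijkstra over cell costs -log(p); weights are non-negative since p <= 1.
    dist, prev, queue = {start: 0.0}, {}, [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue
        for nxt in neighbors(*node):
            nd = d - math.log(pml[nxt[0]][nxt[1]])
            if nd < dist.get(nxt, math.inf):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(queue, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[goal])

path, prob = safest_path((0, 0), (2, 3))
print(path, f"compliance probability along path = {prob:.3f}")
```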


EXPIL: Explanatory Predicate Invention for Learning in Games

arXiv.org Artificial Intelligence

Reinforcement learning (RL) has proven to be a powerful tool for training agents that excel in various games. However, the black-box nature of neural network models often hinders our ability to understand the reasoning behind the agent's actions. Recent research has attempted to address this issue by using the guidance of pretrained neural agents to encode logic-based policies, allowing for interpretable decisions. A drawback of such approaches is the requirement of large amounts of predefined background knowledge in the form of predicates, limiting their applicability and scalability. In this work, we propose a novel approach, Explanatory Predicate Invention for Learning in Games (EXPIL), that identifies and extracts predicates from a pretrained neural agent, which are later used in the logic-based agents, reducing the dependency on predefined background knowledge. Our experimental evaluation on various games demonstrates the effectiveness of EXPIL in achieving explainable behavior in logic agents while requiring less background knowledge.
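To give a flavor of predicate invention from a pretrained agent, here is a minimal sketch that is not EXPIL itself: from replayed state-action pairs of a (hypothetical) neural agent, a candidate predicate is invented as the feature threshold that best explains when the agent takes a given action. Feature names and data are fabricated for illustration.

```python
# Toy "predicate invention" from agent behavior via a decision stump (assumption).
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical replay data: distance to the nearest enemy and whether the
# pretrained neural agent chose to dodge in that state.
distance = rng.uniform(0.0, 10.0, size=200)
agent_dodges = distance < 2.5  # the agent happens to dodge when the enemy is close

def invent_threshold_predicate(feature, labels):
    """Return the threshold t such that `feature < t` best matches `labels`."""
    best_t, best_acc = None, 0.0
    for t in np.unique(feature):
        acc = np.mean((feature < t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = invent_threshold_predicate(distance, agent_dodges)
print(f"invented predicate: closeby(player, enemy) := distance < {t:.2f} "
      f"(explains {acc:.0%} of the agent's dodges)")
```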


Mission Design for Unmanned Aerial Vehicles using Hybrid Probabilistic Logic Program

arXiv.org Artificial Intelligence

Advanced Air Mobility (AAM) is a growing field that demands a deep understanding of legal, spatial and temporal concepts in navigation. Hence, any implementation of AAM is forced to deal with the inherent uncertainties of human-inhabited spaces. Enabling growth and innovation requires the creation of a system for safe and robust mission design, i.e., the way we formalize intentions and decide their execution as trajectories for the Unmanned Aerial Vehicle (UAV). Although legal frameworks have emerged to govern urban air spaces, their full integration into the decision process of autonomous agents and operators remains an open task. In this work, we present ProMis, a system architecture for probabilistic mission design. It links the data available from various static and dynamic data sources with legal text and operator requirements by following principles of formal verification and probabilistic modeling. In this way, ProMis enables the combination of low-level perception and high-level rules in AAM to infer validity over the UAV's state-space. To this end, we employ Hybrid Probabilistic Logic Programs (HPLP) as a unifying, intermediate representation between perception and action-taking. Furthermore, we present methods to connect ProMis with crowd-sourced map data by generating HPLP atoms that represent spatial relations in a probabilistic fashion. Our claims of the utility and generality of ProMis are supported by experiments on a diverse set of scenarios and a discussion of the computational demands associated with probabilistic missions.
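The sketch below illustrates, under invented assumptions, what generating probabilistic atoms for spatial relations from uncertain map data could look like: position noise on a crowd-sourced feature is propagated by Monte Carlo sampling into the probability of a relation, emitted in ProbLog-style syntax. The noise model, relation, and feature names are not ProMis's actual pipeline.

```python
# Toy generation of probabilistic spatial-relation atoms (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def prob_within(point, center, radius, pos_std=15.0, samples=2000):
    """Monte Carlo estimate that a crowd-sourced point lies inside a zone,
    given Gaussian uncertainty on its surveyed position (in meters)."""
    noisy = point + rng.normal(0.0, pos_std, size=(samples, 2))
    return float(np.mean(np.linalg.norm(noisy - center, axis=1) <= radius))

# Hypothetical map features: a restricted zone around a park and two waypoints.
park_center, park_radius = np.array([0.0, 0.0]), 100.0
waypoints = {"wp_1": np.array([80.0, 10.0]), "wp_2": np.array([180.0, 0.0])}

for name, point in waypoints.items():
    p = prob_within(point, park_center, park_radius)
    print(f"{p:.2f}::over({name}, park).")  # probabilistic atom, ProbLog-style
```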


United We Pretrain, Divided We Fail! Representation Learning for Time Series by Pretraining on 75 Datasets at Once

arXiv.org Artificial Intelligence

In natural language processing and vision, pretraining is utilized to learn effective representations. Unfortunately, the success of pretraining does not easily carry over to time series due to a potential mismatch between source and target domains. Indeed, the common belief is that multi-dataset pretraining does not work for time series! Au contraire, we introduce a new self-supervised contrastive pretraining approach to learn one encoding from many unlabeled and diverse time series datasets, so that the single learned representation can then be reused in several target domains for, say, classification. Specifically, we propose the XD-MixUp interpolation method and the Soft Interpolation Contextual Contrasting (SICC) loss. Empirically, this outperforms both supervised training and other self-supervised pretraining methods when fine-tuning in low-data regimes. This disproves the common belief: We can actually learn from multiple time series datasets, even from 75 at once.
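As a rough illustration of the cross-dataset interpolation idea only (XD-MixUp and the SICC loss in the paper are more involved), the sketch below mixes two time series drawn from different synthetic "datasets" with a Beta-sampled coefficient, as in standard mixup. All signals and parameters are made up.

```python
# Toy mixup-style interpolation between time series from two datasets.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 128)
series_a = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=t.size)            # "dataset A"
series_b = np.sign(np.sin(2 * np.pi * 5 * t)) + 0.1 * rng.normal(size=t.size)   # "dataset B"

lam = rng.beta(0.2, 0.2)                    # mixing coefficient, as in standard mixup
mixed = lam * series_a + (1.0 - lam) * series_b

print(f"lambda = {lam:.2f}; mixed series shape = {mixed.shape}")
```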


DeiSAM: Segment Anything with Deictic Prompting

arXiv.org Artificial Intelligence

Large-scale, pre-trained neural networks have demonstrated strong capabilities in various tasks, including zero-shot image segmentation. To identify concrete objects in complex scenes, humans instinctively rely on deictic descriptions in natural language, i.e., referring to something depending on the context, such as "The object that is on the desk and behind the cup." However, deep learning approaches cannot reliably interpret such deictic representations due to their lack of reasoning capabilities in complex scenarios. To remedy this issue, we propose DeiSAM -- a combination of large pre-trained neural networks with differentiable logic reasoners -- for deictic promptable segmentation. Given a complex, textual segmentation description, DeiSAM leverages Large Language Models (LLMs) to generate first-order logic rules and performs differentiable forward reasoning on generated scene graphs. Subsequently, DeiSAM segments objects by matching them to the logically inferred image regions. As part of our evaluation, we propose the Deictic Visual Genome (DeiVG) dataset, containing paired visual input and complex, deictic textual prompts. Our empirical results demonstrate that DeiSAM is a substantial improvement over purely data-driven baselines for deictic promptable segmentation.
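The matching step can be illustrated in miniature as follows: given a toy scene graph and a first-order-style rule (of the kind an LLM might produce for the deictic prompt above), find the objects that satisfy it. DeiSAM's differentiable forward reasoner and the actual LLM rule generation are not shown; the scene graph and rule are invented.

```python
# Toy rule matching over a scene graph (illustrative assumptions only).
scene_graph = {  # (subject, relation, object) triples from a scene-graph model
    ("book", "on", "desk"),
    ("book", "behind", "cup"),
    ("lamp", "on", "desk"),
    ("cup", "on", "desk"),
}

def holds(subject, relation, obj):
    return (subject, relation, obj) in scene_graph

# Hypothetical LLM-generated rule: target(X) :- on(X, desk), behind(X, cup).
objects = {s for (s, _, _) in scene_graph} | {o for (_, _, o) in scene_graph}
targets = [x for x in objects if holds(x, "on", "desk") and holds(x, "behind", "cup")]

print("objects to segment:", targets)  # -> ['book']
```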


Pix2Code: Learning to Compose Neural Visual Concepts as Programs

arXiv.org Artificial Intelligence

The challenge in learning abstract concepts from images in an unsupervised fashion lies in the required integration of visual perception and generalizable relational reasoning. Moreover, the unsupervised nature of this task makes it necessary for human users to be able to understand a model's learnt concepts and potentially revise false behaviours. To tackle both the generalizability and interpretability constraints of visual concept learning, we propose Pix2Code, a framework that extends program synthesis to visual relational reasoning by utilizing the abilities of both explicit, compositional symbolic and implicit neural representations. This is achieved by retrieving object representations from images and synthesizing relational concepts as lambda-calculus programs. We evaluate the diverse properties of Pix2Code on the challenging reasoning domains, Kandinsky Patterns and CURI, thereby testing its ability to identify compositional visual concepts that generalize to novel data and concept configurations. Particularly, in stark contrast to neural approaches, we show that Pix2Code's representations remain human interpretable and can be easily revised for improved performance.
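For intuition, here is a minimal sketch, loosely in the spirit of composing visual concepts as programs: a concept is expressed as a small program built from primitives over object-centric representations. The objects, primitives, and concept below are invented; Pix2Code synthesizes lambda-calculus programs rather than Python lambdas.

```python
# Toy compositional concept over extracted object representations (assumptions).
OBJECTS = [  # stand-in for object representations retrieved from an image
    {"shape": "circle", "color": "red",  "x": 0.2},
    {"shape": "square", "color": "blue", "x": 0.7},
]

# Primitives a synthesizer could compose.
exists    = lambda pred, objs: any(pred(o) for o in objs)
is_red    = lambda o: o["color"] == "red"
is_circle = lambda o: o["shape"] == "circle"
left_of   = lambda o1, o2: o1["x"] < o2["x"]

# Synthesized concept: "there is a red circle to the left of some object".
concept = lambda objs: exists(
    lambda o1: is_red(o1) and is_circle(o1) and exists(lambda o2: left_of(o1, o2), objs),
    objs,
)

print(concept(OBJECTS))  # -> True
```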


Neural Meta-Symbolic Reasoning and Learning

arXiv.org Artificial Intelligence

Deep neural learning uses an increasing amount of computation and data to solve very specific problems. By stark contrast, human minds solve a wide range of problems using a fixed amount of computation and limited experience. One ability that seems crucial to this kind of general intelligence is meta-reasoning, i.e., our ability to reason about reasoning. To make deep learning do more from less, we propose the first neural meta-symbolic system (NEMESYS) for reasoning and learning: meta programming using differentiable forward-chaining reasoning in first-order logic. Differentiable meta programming naturally allows NEMESYS to reason and learn several tasks efficiently. This is different from performing object-level deep reasoning and learning, which refers in some way to entities external to the system. In contrast, NEMESYS enables self-introspection, lifting from object- to meta-level reasoning and vice versa. In our extensive experiments, we demonstrate that NEMESYS can solve different kinds of tasks by adapting the meta-level programs without modifying the internal reasoning system. Moreover, we show that NEMESYS can learn meta-level programs given examples. This is difficult, if not impossible, for standard differentiable logic programming.
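The meta-programming idea can be glimpsed in the following purely symbolic toy: a tiny meta-interpreter treats the object-level program as data, so its behavior (here, printing its own proof trace) can be adapted without touching the object-level rules. NEMESYS does this with differentiable forward chaining over non-ground rules; this sketch and its facts are invented for illustration.

```python
# Toy meta-interpreter over an object-level program given as data (assumptions).
RULES = {  # object-level program: head :- body (ground atoms only, for brevity)
    "grandparent(ann,carl)": ["parent(ann,bob)", "parent(bob,carl)"],
    "parent(ann,bob)": [],
    "parent(bob,carl)": [],
}

def solve(goal, depth=0):
    """Meta-level prover over the object-level rules, printing its own trace."""
    print("  " * depth + f"proving {goal}")
    body = RULES.get(goal)
    if body is None:
        return False
    return all(solve(subgoal, depth + 1) for subgoal in body)

print(solve("grandparent(ann,carl)"))  # -> True, with the proof trace above
```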


Structural Causal Models Reveal Confounder Bias in Linear Program Modelling

arXiv.org Artificial Intelligence

Recent years have been marked by extensive research on adversarial attacks, especially on deep neural networks. With this work, we pose and investigate the question of whether the phenomenon might be more general in nature, that is, whether adversarial-style attacks exist outside classical classification tasks. Specifically, we investigate optimization problems as they constitute a fundamental part of modern AI research. To this end, we consider the base class of optimizers, namely Linear Programs (LPs). On our initial attempt at a naïve mapping between the formalism of adversarial examples and LPs, we quickly identify the key ingredients missing for making sense of a reasonable notion of adversarial examples for LPs. Intriguingly, Pearl's formalism of causality allows for the right description of adversarial-like examples for LPs. Characteristically, we show the direct influence of the Structural Causal Model (SCM) on the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs (inherited from said SCM) that allows for adversarial-style attacks. We provide both a formal, general proof and existential proofs of such intriguing SCM-based LP parameterizations for three combinatorial problems, namely Linear Assignment, Shortest Path, and a real-world problem from energy systems.
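A minimal sketch of the general phenomenon (not the paper's construction): a confounder in a structural causal model generates the cost vector of a tiny linear program, so shifting the confounder alone flips the LP's optimal solution, an adversarial-style effect on the downstream optimizer. The structural equations and numbers are made up; the snippet assumes SciPy is available.

```python
# Toy SCM-generated LP costs and their effect on the LP optimum (assumptions).
import numpy as np
from scipy.optimize import linprog

def scm_costs(z, noise=(0.0, 0.0)):
    """Structural equations: the confounder z drives both route costs."""
    c1 = 1.0 + 2.0 * z + noise[0]   # cost of route 1
    c2 = 3.0 - 2.0 * z + noise[1]   # cost of route 2
    return np.array([c1, c2])

def pick_route(costs):
    # LP: min c^T x  s.t.  x1 + x2 = 1, x >= 0  (choose one of two routes)
    res = linprog(costs, A_eq=[[1.0, 1.0]], b_eq=[1.0], bounds=[(0, 1), (0, 1)])
    return int(np.argmax(res.x)) + 1

for z in (0.0, 1.0):  # observational vs. shifted confounder value
    c = scm_costs(z)
    print(f"z = {z}: costs = {c}, optimal route = {pick_route(c)}")
```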