Goto

Collaborating Authors: Pedersen, Arthur Paul


Strengthening Consistency Results in Modal Logic

arXiv.org Artificial Intelligence

Many treatments of epistemological paradoxes in modal logic proceed along the following lines. Begin with some enumeration of assumptions that are individually plausible but when taken together fail to be jointly consistent (or at any rate fail to stand to reason in some way). Thereupon proceed to propose a resolution to the emerging paradox that identifies one or more assumptions that may be comfortably discarded or weakened and that in the presence of the remaining assumptions circumvents the troubling inconsistency defining the paradox [11] (cf. Chow [8] and de Vos et al. [16]). Typical among such assumptions are logical standards expressed in the form of inference rules and axioms pertaining to knowledge and belief, such as axiom scheme K -- that is to say, the distributive axiom scheme of the form K(ϕ → ψ) → (Kϕ → Kψ). The choice of precisely which assumptions to temper can, at times, have an element of arbitrariness to it, especially when the choice is made from among several independent alternatives underpinning distinct resolutions in the absence of clear criteria or compelling grounds for distinguishing among them.
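
A toy illustration of this template (our own example, not one drawn from [11]): the assumptions Kϕ, K(ϕ → ψ), and ¬Kψ are individually innocuous yet jointly inconsistent in any normal modal logic, since applying scheme K to the second yields Kϕ → Kψ, hence Kψ, contradicting the third; dropping or weakening scheme K is then one candidate resolution. In LaTeX:

    % Toy illustration (not from the paper): three individually plausible
    % assumptions rendered jointly inconsistent by axiom scheme K.
    \begin{align*}
      &\text{(A1)}\quad K\varphi
          && \text{the agent knows } \varphi \\
      &\text{(A2)}\quad K(\varphi \rightarrow \psi)
          && \text{the agent knows the implication} \\
      &\text{(A3)}\quad \neg K\psi
          && \text{the agent does not know } \psi \\
      &\text{(K)}\quad K(\varphi \rightarrow \psi) \rightarrow (K\varphi \rightarrow K\psi)
          && \text{so (A2), (A1), and (K) yield } K\psi \text{, contradicting (A3)}
    \end{align*}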


Representation and Invariance in Reinforcement Learning

arXiv.org Artificial Intelligence

If we changed the rules, would the wise trade places with the fools? Different groups formalize reinforcement learning (RL) in different ways. If an agent in one RL formalization is to run within another RL formalization's environment, the agent must first be converted, or mapped. A criterion of adequacy for any such mapping is that it preserves relative intelligence. This paper investigates the formulation and properties of this criterion of adequacy. However, prior to the problem of formulation lies, we argue, the problem of comparative intelligence. We compare intelligence using ultrafilters, motivated by viewing agents as candidates in intelligence elections where voters are environments. These comparators are counterintuitive, but we prove an impossibility theorem about RL intelligence measurement, suggesting such counterintuitions are unavoidable. Given a mapping between RL frameworks, we establish sufficient conditions to ensure that, for any ultrafilter-based intelligence comparator in the destination framework, there exists an ultrafilter-based intelligence comparator in the source framework such that the mapping preserves relative intelligence. We consider three concrete mappings between various RL frameworks and show that they satisfy these sufficient conditions and therefore preserve suitably measured relative intelligence.
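
The sketch below illustrates the election metaphor under strong simplifying assumptions: it compares two agents using a principal ultrafilter (one generated by a single environment), since non-principal ultrafilters on infinite environment classes are non-constructive. All names, signatures, and scores are hypothetical and are not taken from the paper.

    # Illustrative sketch only: compare two agents by holding an "election"
    # across environments and aggregating the votes with a *principal*
    # ultrafilter, i.e. one generated by a single environment.

    def principal_ultrafilter(generator):
        """Ultrafilter on a set of environments generated by one environment:
        a set of environments is 'large' iff it contains the generator."""
        return lambda env_set: generator in env_set

    def prefers(agent_a, agent_b, environments, score, ultrafilter):
        """agent_a is judged at least as intelligent as agent_b iff the set
        of environments where agent_a scores at least as well is 'large'."""
        winning_envs = {e for e in environments
                        if score(agent_a, e) >= score(agent_b, e)}
        return ultrafilter(winning_envs)

    # Toy usage: two agents, three environments, scores given by a lookup
    # table standing in for expected return.
    envs = {"e1", "e2", "e3"}
    returns = {("A", "e1"): 1.0, ("A", "e2"): 0.2, ("A", "e3"): 0.9,
               ("B", "e1"): 0.5, ("B", "e2"): 0.8, ("B", "e3"): 0.4}
    score = lambda agent, env: returns[(agent, env)]

    u = principal_ultrafilter("e1")           # "e1 decides the election"
    print(prefers("A", "B", envs, score, u))  # True: A beats B on e1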


Adversarial Attacks in Cooperative AI

arXiv.org Artificial Intelligence

Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation. If intelligent agents are to interact and work together to solve complex problems, methods that counter non-cooperative behavior are needed to facilitate the training of multiple agents. This is the goal of cooperative AI. Recent work in adversarial machine learning, however, shows that models (e.g., image classifiers) can be easily deceived into making incorrect decisions. In addition, some past research in cooperative AI has relied on new notions of representations, like public beliefs, to accelerate the learning of optimally cooperative behavior. Hence, cooperative AI might introduce new weaknesses not investigated in previous machine learning research. In this paper, our contributions include: (1) an argument that three algorithms inspired by human-like social intelligence introduce new vulnerabilities, unique to cooperative AI, that adversaries can exploit, and (2) an experiment showing that simple adversarial perturbations on the agents' beliefs can negatively impact performance. This evidence points to the possibility that formal representations of social behavior are vulnerable to adversarial attacks.
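
As a minimal sketch of the kind of belief perturbation described above, assuming an agent whose policy conditions on a public belief vector (a probability distribution over hidden states): the attack shown is a generic bounded perturbation followed by renormalization, not the specific attack evaluated in the paper, and the function and parameter names are hypothetical.

    # Minimal sketch: apply a small adversarial-style perturbation to a
    # belief vector and project it back onto the probability simplex.
    import numpy as np

    def perturb_belief(belief, epsilon=0.05, rng=None):
        """Add bounded noise to a belief distribution and renormalize."""
        rng = rng or np.random.default_rng(0)
        noise = rng.uniform(-epsilon, epsilon, size=belief.shape)
        perturbed = np.clip(belief + noise, 1e-8, None)  # keep entries positive
        return perturbed / perturbed.sum()               # back to a distribution

    belief = np.array([0.7, 0.2, 0.1])  # agent's belief over 3 hidden states
    print(perturb_belief(belief))       # slightly shifted distribution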