
Collaborating Authors

Alexander, Samuel Allen


Strengthening Consistency Results in Modal Logic

arXiv.org Artificial Intelligence

Many treatments of epistemological paradoxes in modal logic proceed along the following lines. Begin with some enumeration of assumptions that are individually plausible but when taken together fail to be jointly consistent (or at any rate fail to stand to reason in some way). Thereupon proceed to propose a resolution to the emerging paradox that identifies one or more assumptions that may be comfortably discarded or weakened and that, in the presence of the remaining assumptions, circumvents the troubling inconsistency defining the paradox [11] (cf. Chow [8] and de Vos et al. [16]). Typical among such assumptions are logical standards expressed in the form of inference rules and axioms pertaining to knowledge and belief, such as axiom scheme K -- that is to say, the distributive axiom scheme of the form K(φ → ψ) → (Kφ → Kψ). The choice of precisely which assumptions to temper can, at times, have an element of arbitrariness to it, especially when the choice is made from among several independent alternatives underpinning distinct resolutions, in the absence of clear criteria or compelling grounds for distinguishing among them.


Universal Agent Mixtures and the Geometry of Intelligence

arXiv.org Artificial Intelligence

Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is the weighted average of the original agents' intelligences. This operation enables various interesting new theorems that shed light on the geometry of RL agent intelligence, namely: results about symmetries, convex agent-sets, and local extrema. We also show that any RL agent intelligence measure based on average performance across environments, subject to certain weak technical conditions, is identical (up to a constant factor) to performance within a single environment dependent on said intelligence measure.
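The mixture operation described above can be illustrated with a toy numeric sketch. This is an assumed finite setup for illustration only, not the paper's formalism: here an agent is reduced to its profile of expected total rewards across a small set of named environments, and the mixture's profile is the weighted average of the component profiles.

```python
# Toy sketch of the weighted-mixture idea (hypothetical finite setup):
# an agent is represented by a dict mapping environment -> expected
# total reward, and a weighted mixture averages those profiles.

def mixture_profile(weights, profiles):
    """Weighted average of expected-total-reward profiles.

    weights  -- nonnegative floats summing to 1
    profiles -- dicts mapping environment name -> expected total reward
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    envs = profiles[0].keys()
    return {e: sum(w * p[e] for w, p in zip(weights, profiles)) for e in envs}

# Two hypothetical agents evaluated in two environments:
a = {"env1": 1.0, "env2": 0.0}
b = {"env1": 0.0, "env2": 2.0}
mix = mixture_profile([0.25, 0.75], [a, b])
# mix["env1"] -> 0.25, mix["env2"] -> 1.5
```

Because the mixture's reward profile is linear in the weights, any intelligence measure defined as a (weighted) average of such a profile across environments is likewise linear: the mixture's intelligence is the weighted average of the component intelligences, which is what enables the convexity and extremum results the abstract mentions.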



Reward-Punishment Symmetric Universal Intelligence

arXiv.org Artificial Intelligence

Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
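The cancellation behind the "intelligence 0" claim can be sketched numerically. This is a hypothetical toy setup, not the paper's UTM-based construction: assume the environment class is closed under reward negation, with each environment and its "mirror" receiving the same weight. A reward-ignoring agent acts identically in both members of a pair, so its contributions cancel.

```python
# Toy sketch of the symmetry argument (hypothetical setup): environments
# come in mirror pairs giving negated rewards, with equal weights.

def intelligence(agent_rewards, weights):
    """Weighted sum of expected total rewards across environments."""
    return sum(w * r for w, r in zip(weights, agent_rewards))

# A reward-ignoring agent's actions do not depend on rewards, so it
# earns r in an environment and -r in that environment's mirror:
rewards = [3.0, -3.0, 1.5, -1.5]   # (env, mirror) pairs
weights = [0.25, 0.25, 0.25, 0.25]
# Contributions cancel pairwise, so the measure comes out to 0.
```

Under these assumptions the pairwise cancellation forces the measure of any reward-ignoring agent to 0, mirroring (in miniature) the symmetry-about-the-origin result stated above.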


Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents

arXiv.org Artificial Intelligence

Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
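The election idea can be sketched as a finite majority vote. This is a hypothetical simplification: the paper's comparators aggregate infinitely many environment-votes via an ultrafilter, which a finite majority count only crudely approximates.

```python
# Minimal sketch of the environments-as-voters comparator (hypothetical
# finite version; the actual construction uses ultrafilters).

def compare(agent1_rewards, agent2_rewards):
    """Each environment votes for the agent it rewarded more.

    agentN_rewards -- expected total rewards, one entry per environment.
    Returns 1 if agent1 wins the majority of votes, -1 if agent2 does,
    0 on a tie.
    """
    votes = 0
    for r1, r2 in zip(agent1_rewards, agent2_rewards):
        if r1 > r2:
            votes += 1
        elif r2 > r1:
            votes -= 1
    return (votes > 0) - (votes < 0)

# Agent 1 outperforms in two of the three environments, so it wins:
# compare([5, 1, 3], [2, 4, 0]) -> 1
```

Note that a majority vote over finitely many environments can produce ties and non-transitive cycles; the ultrafilter aggregation in the paper is what makes the resulting comparison well-behaved enough to prove structural theorems about.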