Bringsjord, Selmer


Two Problems Afflicting the Search for a Standard Model of the Mind

AAAI Conferences

We describe two serious problems afflicting the search for a standard model of the mind (SMM), as carried out and prescribed by Laird, Lebiere, and Rosenbloom (LLR). The first problem concerns a glaring omission from SMM, while the second calls into question the evidentiary standards for convergence that motivate the entire SMM agenda. It may well be that neither problem is insuperable, even in the short term. On the other hand, both problems currently stand in the way of making any present pronouncements to the effect that a standard model (or substantive portion thereof) exists and can be used as a benchmark against which other researchers might compare their approaches. The pair of problems is offered in a spirit of collaboration, and in the hope that grappling with them will help move the search a bit closer to the sort of undisputed rigor and predictive power afforded by such models in physics. Our order of business in the sequel is straightforward: we present and briefly discuss each of the two problems in turn, and wrap up with some remarks regarding whether or not these problems can be surmounted, and if so, how.


On Automating the Doctrine of Double Effect

arXiv.org Artificial Intelligence

The doctrine of double effect ($\mathcal{DDE}$) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate $\mathcal{DDE}$. We briefly present $\mathcal{DDE}$, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: One can use it to build $\mathcal{DDE}$-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is $\mathcal{DDE}$-compliant, by applying a $\mathcal{DDE}$ layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the $\mathcal{DDE}$ layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our $\mathcal{DDE}$ layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating $\mathcal{DDE}$ in their own frameworks.
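To make the second (verification-layer) mode concrete, here is a minimal sketch in Python. It is not the paper's implementation: the action class, its fields, and the numeric proportionality test are illustrative assumptions standing in for the "few exposed parameters" the abstract mentions, and the actual conditions are established by proofs in the deontic cognitive event calculus rather than by set operations.

    # Toy DDE "verification layer" over a hypothetical STRIPS-style action.
    # All names and fields are illustrative; the paper's formalization is richer.
    from dataclasses import dataclass
    from typing import Set, Dict

    @dataclass
    class StripsAction:
        name: str
        intended_effects: Set[str]   # effects the agent aims at
        side_effects: Set[str]       # merely foreseen effects
        means: Set[str]              # effects used as means to the goal
        utility: Dict[str, float]    # utility per effect (negative = harm)
        forbidden: bool = False      # is the action itself prohibited?

    def dde_compliant(a: StripsAction) -> bool:
        harms = {e for e, u in a.utility.items() if u < 0}
        if a.forbidden:                      # C1: the action is not itself forbidden
            return False
        if harms & a.intended_effects:       # C2: no harmful effect is intended
            return False
        if harms & a.means:                  # C3: no harm is a means to the good effect
            return False
        return sum(a.utility.values()) > 0   # C4: proportionality

    # Classic switch case: the one death is foreseen, not intended or used as a means.
    switch = StripsAction("flip_switch", {"five_saved"}, {"one_dies"}, set(),
                          {"five_saved": 5.0, "one_dies": -1.0})
    print(dde_compliant(switch))  # True under this toy encoding

The shape of the interface is the point: an external layer that queries a handful of exposed parameters of whatever model sits underneath, whether a planner, a POMDP, or a trained network.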


The 2015 AAAI Fall Symposium Series Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence presented the 2015 Fall Symposium Series, on Thursday through Saturday, November 12-14, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the six symposia were as follows: AI for Human-Robot Interaction, Cognitive Assistance in Government and Public Sector Applications, Deceptive and Counter-Deceptive Machines, Embedded Machine Learning, Self-Confidence in Autonomous Systems, and Sequential Decision Making for Intelligent Agents. This article contains the reports from four of the symposia.


Can Accomplices to Fraud Will Themselves to Innocence, and Thereby Dodge Counter-Fraud Machines?

AAAI Conferences

This brief paper explores the consequences of agnosticism with respect to whether a given human agent B is guilty of fraud. We find that if a human A is agnostic with respect to whether a human fraudster B is guilty of fraud, then A, on the only formal definition of fraud that we are aware of, is her/himself provably not guilty of fraud. This means that a counter-fraud machine D based on an implemented version of this definition will classify A as innocent. Hence, if A can, simply by an act of will, bring it about that A is agnostic, A will evade D.
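The logical shape of the result can be sketched as follows. The belief operator and the guilt condition below are illustrative stand-ins, assuming (as seems natural) that the formal definition makes an accomplice's guilt depend on a belief about the fraudster's fraud:

\[
\mathit{Guilty}(A) \rightarrow \mathbf{B}\!\left(A, \mathit{Fraud}(B)\right), \qquad \mathit{Agnostic}(A) \;\equiv\; \neg\mathbf{B}\!\left(A, \mathit{Fraud}(B)\right) \wedge \neg\mathbf{B}\!\left(A, \neg\mathit{Fraud}(B)\right).
\]

From agnosticism the first conjunct yields $\neg\mathbf{B}(A, \mathit{Fraud}(B))$, so the necessary condition for guilt fails, and any machine D implementing such a definition must classify A as innocent.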


Analogico-Deductive Generation of Gödel's First Incompleteness Theorem from the Liar Paradox

AAAI Conferences

Gödel's proof of his famous first incompleteness theorem (G1) has quite understandably long been a tantalizing target for those wanting to engineer impressively intelligent computational systems. After all, in establishing G1, Gödel did something that by any metric must be classified as stunningly intelligent. We observe that it has long been understood that there is some sort of analogical relationship between the Liar Paradox (LP) and G1, and that Gödel himself appreciated and exploited the relationship. Yet the exact nature of the relationship has hitherto not been uncovered, by which we mean that the following question has not been answered: Given a description of LP, and the suspicion that it may somehow be used by a suitably programmed computing machine to find a proof of the incompleteness of Peano Arithmetic, can such a machine, provided this description as input, produce as output a complete and verifiably correct proof of G1? In this paper, we summarize engineering that entails an affirmative answer to this question. Our approach uses what we call analogico-deductive reasoning (ADR), which combines analogical and deductive reasoning to produce a full deductive proof of G1 from LP. Our engineering uses a form of ADR based on our META-R system, and a connection between the Liar Sentence in LP and Gödel's Fixed Point Lemma, from which G1 follows quickly.
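The analogical bridge in question can be stated informally (and not in the paper's own notation): the Liar Sentence asserts its own untruth, while the sentence delivered by Gödel's Fixed Point (Diagonal) Lemma, instantiated with the negated provability predicate, asserts its own unprovability:

\[
\text{LP:}\;\; L \leftrightarrow \neg\mathrm{True}(\ulcorner L \urcorner) \qquad\qquad \text{Fixed Point Lemma:}\;\; PA \vdash G \leftrightarrow \neg\mathrm{Prov}_{PA}(\ulcorner G \urcorner)
\]

Swapping the semantic predicate True for the arithmetized provability predicate turns the paradox into a theorem: if PA is consistent, G is not provable in PA (and, given $\omega$-consistency, neither is its negation), which is G1.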


In Defense of the Neo-Piagetian Approach to Modeling and Engineering Human-Level Cognitive Systems

AAAI Conferences

Presumably any human-level cognitive system (HLCS) must have the capacity to: maintain and learn new concepts; believe propositions about its environment that are constructed from these concepts, and from what it perceives; and reason over the propositions it believes, in order to, among other things, manipulate its environment and justify its significant decisions. Given this list of desiderata, it’s hard to see how any intelligent attempt to build or simulate an HLCS can avoid falling under a neo-Piagetian approach to engineering HLCSs. Unfortunately, such engineering has been discursively declared by Jerry Fodor to be flat-out impossible. After setting out Fodor’s challenges, we refute them and, inspired by those refutations, sketch our solutions on behalf of those wanting to computationally model and construct HLCSs, under neo-Piagetian assumptions.