Taking Principles Seriously: A Hybrid Approach to Value Alignment in Artificial Intelligence
Kim, Tae Wan (Carnegie Mellon University) | Hooker, John | Donaldson, Thomas
An important step in the development of value alignment (VA) systems in artificial intelligence (AI) is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the “naturalistic fallacy,” which is an attempt to derive “ought” from “is,” and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular “test propositions” for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles. This article is part of the special track on AI and Society.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- Law (1.00)
- Health & Medicine > Therapeutic Area (0.68)
- Transportation > Ground > Road (0.67)
- (2 more...)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (0.89)
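The hybrid scheme the abstract describes can be caricatured in a few lines. Everything below is an invented sketch, not the paper's formalism (which uses quantified modal logic, not Python): an ethical principle yields a "test proposition" for each action plan, and empirical VA supplies its truth value; the driving scenario, names, and observation table are made up for illustration.

```python
# Invented illustration of the hybrid approach: a deontological principle
# attaches a test proposition to each action plan, and the plan counts as
# ethical only if empirical observation confirms that proposition.

from dataclasses import dataclass

@dataclass
class ActionPlan:
    name: str
    test_proposition: str  # what must hold empirically for the plan to be ethical

# Empirical VA component: observed truth values of the test propositions
# (values here are made up for the sake of the example).
observations = {
    "passengers_consent_to_risk": True,
    "pedestrians_consent_to_risk": False,
}

def is_ethical(plan: ActionPlan, observed: dict) -> bool:
    """A plan is ethical only if its test proposition is empirically true."""
    return observed.get(plan.test_proposition, False)

swerve = ActionPlan("swerve_toward_pedestrians", "pedestrians_consent_to_risk")
brake = ActionPlan("brake_in_lane", "passengers_consent_to_risk")

print(is_ethical(swerve, observations))  # False
print(is_ethical(brake, observations))   # True
```

The division of labor is the point: the test proposition comes from ethical reasoning, its truth value from observation, so neither side alone derives "ought" from "is."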
AI Can Stop Mass Shootings, and More
Bringsjord, Selmer, Govindarajulu, Naveen Sundar, Giancola, Michael
We propose to build directly upon our longstanding prior R&D in AI/machine ethics in order to attempt to make real the blue-sky idea of AI that can thwart mass shootings, by bringing to bear its ethical reasoning. The R&D in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AIs with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations, one in which the AI saves the lives of innocents by locking out a malevolent human's gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated and rebutted.
- Europe > Germany > Berlin (0.05)
- North America > United States > Texas (0.04)
- South America > Brazil > Rio Grande do Norte > Natal (0.04)
- (8 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
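The two simulations can be given a toy shape in code. The rule base below is invented for illustration and bears no relation to the authors' logicist system; it merely shows what an ethically gated lockout decision looks like.

```python
# Toy ethical gate for an AI-governed firearm (all rules invented).

def permit_fire(agent_role: str, target: str, threat_active: bool) -> bool:
    # Never permit firing on innocents, regardless of who holds the weapon.
    if target == "innocents":
        return False
    # Law enforcement may fire on an active threat.
    if agent_role == "law_enforcement" and threat_active:
        return True
    # Default-deny: absent an affirmative permission, the weapon stays locked.
    return False

# Simulation 1: malevolent human aims at innocents -> weapon locks out.
print(permit_fire("civilian", "innocents", threat_active=False))       # False
# Simulation 2: law enforcement neutralizes the malevolent agent -> permitted.
print(permit_fire("law_enforcement", "attacker", threat_active=True))  # True
```

The default-deny structure mirrors the deontic flavor of the proposal: an action is forbidden unless some principle affirmatively permits it.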
Towards Concise, Machine-discovered Proofs of Gödel's Two Incompleteness Theorems
Malaby, Elijah, Dragun, Bradley, Licato, John
There is an increasing interest in applying recent advances in AI to automated reasoning, as it may provide useful heuristics in reasoning over formalisms in first-order, second-order, or even meta-logics. To facilitate this research, we present MATR, a new framework for automated theorem proving explicitly designed to easily adapt to unusual logics or integrate new reasoning processes. MATR is formalism-agnostic, highly modular, and programmer-friendly. We explain the high-level design of MATR as well as some details of its implementation. To demonstrate MATR's utility, we then describe a formalized metalogic suitable for proofs of Gödel's Incompleteness Theorems, and report on our progress using our metalogic in MATR to semi-autonomously generate proofs of both the First and Second Incompleteness Theorems.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Florida (0.04)
- Europe > Portugal > Porto > Porto (0.04)
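The kind of modular, pluggable reasoning process MATR is built to host can be hinted at with a minimal forward chainer. The tuple encoding of formulas and every name below are assumptions made for illustration; MATR's actual architecture and API are far richer.

```python
# Minimal formalism-agnostic forward chaining: inference rules are plain
# functions from a set of formulas to newly derivable formulas, so new
# reasoning processes plug in by adding rules to the list.

def forward_chain(axioms, rules):
    """Apply each inference rule until no new formulas are derivable."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(derived):
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

def modus_ponens(formulas):
    """From p and ("->", p, q), derive q."""
    new = set()
    for f in formulas:
        if isinstance(f, tuple) and f[0] == "->":
            _, p, q = f
            if p in formulas:
                new.add(q)
    return new

axioms = {"A", ("->", "A", "B"), ("->", "B", "C")}
print("C" in forward_chain(axioms, [modus_ponens]))  # True
```

Because the chainer never inspects what a formula "means," swapping in rules for an unusual logic or a metalogic leaves the core loop untouched, which is the modularity the abstract emphasizes.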
Learning Ex Nihilo
Bringsjord, Selmer, Govindarajulu, Naveen Sundar
This paper introduces, philosophically and to a degree formally, the novel concept of learning ex nihilo, intended (obviously) to be analogous to the concept of creation ex nihilo. Learning ex nihilo is an agent's learning "from nothing," by the suitable employment of schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a cognitive calculus (i.e., roughly, an intensional higher-order multi-operator quantified logic), and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge, and some prior and current interests. Learning ex nihilo is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is offered in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning-ex-nihilo frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic interoperation of state-of-the-art automated reasoners and high-expressivity planners, with statistical/connectionist ML technology.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (14 more...)
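A hedged caricature of learning ex nihilo: with no training data at all, the agent derives a new ground fact by applying a deductive schema to a single percept plus prior knowledge. The raven example, the relation encoding, and the `deduce` function are all invented for this sketch.

```python
# "Learning from nothing": no dataset, only prior knowledge, a percept,
# and a deductive schema (universal instantiation + modus ponens).

prior_knowledge = {("all", "raven", "black")}   # for all x: Raven(x) -> Black(x)
percepts = {("raven", "bird_42")}               # the agent perceives a raven

def deduce(knowledge, percepts):
    """Derive new ground facts by instantiating universal rules on percepts."""
    learned = set()
    for (_, kind, prop) in knowledge:
        for (k, individual) in percepts:
            if k == kind:
                learned.add((prop, individual))
    return learned

print(deduce(prior_knowledge, percepts))  # {('black', 'bird_42')}
```

Every derived fact is checkable against the inference schema that produced it, which is the machine-verifiability the abstract demands of a cognitive calculus.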
Artificial intelligence could impact half of jobs in NYS
When a class in Mandarin Chinese starts next summer at Rensselaer Polytechnic Institute, students will be practicing their spoken dialogues with a different sort of teaching assistant: an artificial intelligence chatbot. Capable of conversing with students in simulated settings -- a restaurant, garden or even a Tai Chi class -- the bot is part of a future where artificial intelligence (AI) will perform more of the tasks, and potentially the jobs, now done by humans. Part of a so-called "situations room" at RPI, the chatbot is an example of what are called "cognitive and immersive systems," in which the burgeoning field of AI is melded with rapidly growing torrents of financial, health and education information as well as so-called "unstructured data" like social media posts spreading across an expanding constellation of networked computers, smartphones and other electronic devices. RPI is developing the room under a partnership with the technology giant IBM and its supercomputer Watson, which first gained worldwide attention in 2011 when it beat humans in the TV game show "Jeopardy." It's too early to predict how much impact AI will have on how New Yorkers work, but a recent report by the Albany-based Rockefeller Institute of Government projects that large numbers of jobs will be replaced or changed -- particularly jobs that involve basic, repetitive actions.
- Law (0.98)
- Government > Regional Government > North America Government > United States Government (0.97)
Toward Cognitive and Immersive Systems: Experiments in a Cognitive Microworld
Peveler, Matthew, Govindarajulu, Naveen Sundar, Bringsjord, Selmer, Srivastava, Biplav, Talamadupula, Kartik, Su, Hui
As computational power has continued to increase, and sensors have become more accurate, the corresponding advent of systems that are at once cognitive and immersive has arrived. These cognitive and immersive systems (CAISs) fall squarely into the intersection of AI with HCI/HRI: such systems interact with and assist the human agents that enter them, in no small part because such systems are infused with AI able to understand and reason about these humans and their knowledge, beliefs, goals, communications, plans, etc. We herein explain our approach to engineering CAISs. We emphasize the capacity of a CAIS to develop and reason over a 'theory of the mind' of its human partners. This capacity entails that the AI in question has a sophisticated model of the beliefs, knowledge, goals, desires, emotions, etc. of these humans. To accomplish this engineering, a formal framework of very high expressivity is needed. In our case, this framework is a cognitive event calculus, a particular kind of quantified multi-operator modal logic, and a matching high-expressivity automated reasoner and planner. To explain, advance, and to a degree validate our approach, we show that a calculus of this type satisfies a set of formal requirements, and can enable a CAIS to understand a psychologically tricky scenario couched in what we call the cognitive polysolid framework (CPF). We also formally show that a room that satisfies these requirements can have a useful property we term expectation of usefulness. CPF, a sub-class of cognitive microworlds, includes machinery able to represent and plan over not merely blocks and actions (such as seen in the primitive 'blocks worlds' of old), but also over agents and their mental attitudes about both other agents and inanimate objects.
- North America > United States > California > Santa Clara County > Palo Alto (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- (18 more...)
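The nested belief operators that give a cognitive event calculus its theory-of-mind reach can be hinted at with a simple recursive datatype. This is a Python caricature with invented names; the real calculus is a quantified multi-operator modal logic served by a matching automated reasoner, not a data structure.

```python
# Nested belief formulas: a CAIS reasoning about what a human believes,
# including beliefs about other agents' beliefs.

from dataclasses import dataclass

@dataclass(frozen=True)
class Believes:
    agent: str
    proposition: object  # an atomic proposition or another Believes formula

# "The room believes that Alice believes the red block is on the table."
formula = Believes("room", Believes("alice", "on(red_block, table)"))

def belief_depth(f) -> int:
    """How deeply belief operators are nested in a formula."""
    return 1 + belief_depth(f.proposition) if isinstance(f, Believes) else 0

print(belief_depth(formula))  # 2
```

Depth-two (and deeper) formulas like this are exactly what separates modeling an agent's world from modeling an agent's mind, the distinction the abstract leans on.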
Toward the Engineering of Virtuous Machines
Govindarajulu, Naveen Sundar, Bringsjord, Selmer, Ghosh, Rikhiya
While various traditions under the 'virtue ethics' umbrella have been studied extensively and advocated by ethicists, it has not been clear that there exists a version of virtue ethics rigorous enough to be a target for machine ethics (which we take to include the engineering of an ethical sensibility in a machine or robot itself, not only the study of ethics in the humans who might create artificial agents). We begin to address this by presenting an embryonic formalization of a key part of any virtue-ethics theory: namely, the learning of virtue by a focus on exemplars of moral virtue. Our work is based in part on a computational formal logic previously used to formally model other ethical theories and principles therein, and to implement these models in artificial agents.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.29)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.15)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- (3 more...)
Tentacular Artificial Intelligence, and the Architecture Thereof, Introduced
Bringsjord, Selmer, Govindarajulu, Naveen Sundar, Sen, Atriya, Peveler, Matthew, Srivastava, Biplav, Talamadupula, Kartik
We briefly introduce herein a new form of distributed, multi-agent artificial intelligence, which we refer to as "tentacular." Tentacular AI is distinguished by six attributes, which among other things entail a capacity for reasoning and planning based in highly expressive calculi (logics), and which enlists subsidiary agents across distances circumscribed only by the reach of one or more given networks.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Oceania > Australia > Victoria > Melbourne (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (11 more...)
- Information Technology (1.00)
- Transportation (0.94)
The 1996 Simon Newcomb Award
His proofs are ingenious, cleverly argued, quite convincing to many of his contemporaries, and utterly wrong. The Simon Newcomb Award is given annually for the silliest published argument attacking AI. Our subject may be unique in the virulence and frequency with which it is attacked, both in the popular media and among the cultured intelligentsia. Recent articles have argued that the very idea of AI reflects a cancer in the heart of our culture and have proven (yet again) that it is impossible. While many of these attacks are cited widely, most of them are ridiculous to anyone with an appropriate technical education.