Artificial Intelligence - foundations of computational agents


This book is published by Cambridge University Press, 2010. The complete text and figures of the book are available here, copyright David Poole and Alan Mackworth, 2010. The HTML is made available under a Creative Commons Attribution-Noncommercial-No Derivative Works 2.5 Canada License. We hope that you enjoy reading the book and that it gets you excited about the development of artificial intelligence.


The Computational Metaphor and Artificial Intelligence: A Reflective Examination of a Theoretical Falsework

AI Magazine

Advocates and critics of AI have long engaged in a debate that has generated a great deal of heat but little light. Whatever the merits of specific contributions to this ongoing debate, the fact that it continues points to the need for a reflective examination of the foundations of AI by its active practitioners. Following the lead of Earl MacCormac, we hope to advance such a reflective examination by considering questions of metaphor in science and the computational metaphor in AI. Specifically, we address three issues: the role of metaphor in science and AI, an examination of the computational metaphor itself, and the possibility and potential value of alternative metaphors as a foundation for AI theory.


Computational and neurobiological foundations of leadership decisions

Science

The forces that drive people's choices to lead or follow matter greatly but remain poorly understood. We identify responsibility aversion as a key determinant of the willingness to lead and show that it predicts both survey-based and real-life leadership scores. Individual differences in the perception of, and willingness to bear, responsibility as the price of leadership may determine who strives toward leadership roles and are also associated with how well they perform as leaders. Our computational model provides a conceptual framework for the decision to assume responsibility for others' outcomes, as well as insight into the cognitive and neural mechanisms driving this choice process.
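One way to make the responsibility-aversion idea concrete is a toy decision rule: an agent decides for the group (leads) only when its confidence in being right exceeds a threshold, and that threshold is raised when others' outcomes are at stake. The sketch below is purely illustrative; the function names, parameter values, and the threshold-shift formulation are assumptions, not the authors' fitted model.

```python
# Toy illustration of responsibility aversion as a confidence-threshold shift:
# the agent leads (decides for the group) only when its confidence exceeds a
# threshold that is raised by its responsibility aversion. All names and
# numbers are illustrative assumptions, not taken from the paper.

def decides_alone(confidence, self_threshold=0.5):
    """Deciding only for oneself: act whenever you are more likely right than wrong."""
    return confidence > self_threshold

def leads_group(confidence, self_threshold=0.5, responsibility_aversion=0.2):
    """Deciding for the group: demand extra confidence before assuming responsibility."""
    return confidence > self_threshold + responsibility_aversion

for c in (0.55, 0.65, 0.75):
    print(f"confidence={c:.2f}  decide alone: {decides_alone(c)}  lead group: {leads_group(c)}")
```

On this reading, a more responsibility-averse agent defers to others over a wider range of confidence levels, which is one way to interpret a lower willingness to lead.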


Computational Logic Foundations of KGP Agents

AAAI Conferences

This paper presents the computational logic foundations of a model of agency called the KGP (Knowledge, Goals and Plan) model. The model allows the specification of heterogeneous agents that can interact with each other and can exhibit both proactive and reactive behaviour, adjusting their goals and plans as their dynamic environments change. KGP provides a highly modular agent architecture that integrates a collection of reasoning and physical capabilities, synthesised within transitions that update the agent's state in response to reasoning, sensing and acting. Transitions are orchestrated by cycle theories, which specify the order in which transitions are executed while taking into account the dynamic context and agent preferences, and by selection operators, which provide inputs to transitions.
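As a rough illustration of this transition/cycle-theory structure, the sketch below shows an agent state updated by named transitions, with a simple cycle theory choosing which transition to run next given the current percepts and state. All identifiers (AgentState, goal_revision, and so on) and the toy preference order are assumptions for illustration; the actual KGP model is specified in computational logic, not Python.

```python
# Toy sketch of a KGP-style agent loop: a Knowledge/Goals/Plan state updated by
# named transitions, with a "cycle theory" selecting the next transition based
# on the current context. Illustrative only; names do not come from the paper.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    knowledge: set = field(default_factory=set)
    goals: list = field(default_factory=list)
    plan: list = field(default_factory=list)

def sense(state, percepts):
    """Fold new observations into the knowledge base."""
    state.knowledge |= set(percepts)
    return state

def goal_revision(state, percepts):
    """Revise goals when the environment changes (here, one trivial rule)."""
    if "obstacle" in state.knowledge and "avoid_obstacle" not in state.goals:
        state.goals.insert(0, "avoid_obstacle")
    return state

def plan_introduction(state, percepts):
    """Extend the plan for the current top goal."""
    if state.goals and not state.plan:
        state.plan.append(f"action_for_{state.goals[0]}")
    return state

def act(state, percepts):
    """Execute (and remove) the next planned action."""
    if state.plan:
        print("executing", state.plan.pop(0))
    return state

def cycle_theory(state, percepts):
    """A simple preference order over transitions: sense, revise, plan, then act."""
    if percepts:
        return sense
    if "obstacle" in state.knowledge and "avoid_obstacle" not in state.goals:
        return goal_revision
    if state.goals and not state.plan:
        return plan_introduction
    return act

state = AgentState()
for percepts in (["obstacle"], [], [], []):
    state = cycle_theory(state, percepts)(state, percepts)
```

The point of the sketch is only the separation of concerns the abstract describes: transitions update the agent's state, while the cycle theory decides, from the dynamic context, which transition fires next.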


AI Common Sense Reasoning

#artificialintelligence

Today's machine learning systems are more advanced than ever, capable of automating increasingly complex tasks and serving as a critical tool for human operators. Despite recent advances, however, one key component of Artificial Intelligence (AI) remains just out of reach: machine common sense. Defined as "the basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate," common sense forms a critical foundation for how humans interact with the world around them. Machines that possess this essential background knowledge could significantly advance the symbiotic partnership between humans and machines. But articulating and encoding this obscure-but-pervasive capability is no easy feat.