Zippora Arzi-Gonczarowski
Typographics, Ltd., 46 Hehalutz St., Jerusalem 96222, Israel
Email: email@example.com

Abstract

We show how emotions can be naturally and usefully integrated into artificial cognitive perceptions. The mathematical infrastructure consists of a category of 'artificial perceptions'. Each 'perception' consists of a set of 'world elements', a set of 'connotations' that stand for embodied sensations, and a three-valued predicative connection between the two sets. This categorical architecture calls for an increment in the form of a setting for emotive reactions to sensations whenever consulting higher-level reasoning processes would be impractical. That setting is conveniently provided by the 'connotations' that stand for primitive impressions of environmental elements. These sense connotations provide a natural grounding link between the sensitive and the sensible aspects of intelligence. Once emotive reactions enter the framework, affect pervades and increments the formal model at many levels, becoming an inseparable component of all related cognitive processes, whether internal to a single perceiving artifact or interperceptual in a society of such artifacts.

Introduction

Intelligent artificial agents are typically situated in an external environment, where their intelligence is manifested in the way they perceive that environment and interact with it.
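The three-valued predicative connection between world elements and connotations can be sketched concretely. The following is a minimal illustration, not the paper's formal construction; all class and value names here are illustrative assumptions:

```python
# Sketch of a 'perception': a set of world elements, a set of connotations,
# and a three-valued predicative connection between the two sets.
# Names and the truth-value encoding are illustrative assumptions.
TRUE, FALSE, UNDEFINED = "t", "f", "u"

class Perception:
    def __init__(self, world_elements, connotations, connection):
        self.world_elements = set(world_elements)
        self.connotations = set(connotations)
        # connection maps (element, connotation) pairs to a truth value;
        # pairs with no entry default to the third value, UNDEFINED.
        self.connection = dict(connection)

    def holds(self, element, connotation):
        return self.connection.get((element, connotation), UNDEFINED)

p = Perception(
    world_elements={"fire"},
    connotations={"hot", "wet", "bright"},
    connection={("fire", "hot"): TRUE, ("fire", "wet"): FALSE},
)
print(p.holds("fire", "hot"))     # t
print(p.holds("fire", "bright"))  # u (no predication either way)
```

The third truth value is what distinguishes this from an ordinary binary relation: a perception may simply not commit on whether a connotation applies to an element.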
Our architecture handles bounded resources by using primary emotions as a first filter for adjusting the priority of beliefs, thereby allowing agents to speed up decision making. Secondary emotions are used to refine the decision when time permits. We present a sample EBDI agent for the Tileworld domain to show how our architecture might be used.
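The two-stage filtering described above can be sketched as follows. This is a hedged illustration of the idea, not the paper's implementation; the belief tags, numeric weights, and time threshold are all assumptions:

```python
# Sketch of emotion-based belief filtering under bounded resources:
# a fast primary-emotion pass reorders beliefs, and a slower
# secondary-emotion pass refines the ranking only when time permits.
# All names and numeric values are illustrative assumptions.

def primary_filter(beliefs, primary_emotion):
    # Fast pass: boost the priority of beliefs tagged with the
    # currently active primary emotion (e.g. 'fear' boosts threats).
    return sorted(
        beliefs,
        key=lambda b: b["priority"] + (1.0 if b["tag"] == primary_emotion else 0.0),
        reverse=True,
    )

def secondary_refine(ranked_beliefs, appraisal):
    # Slow pass: re-rank with a deliberative appraisal function.
    return sorted(ranked_beliefs, key=appraisal, reverse=True)

def decide(beliefs, primary_emotion, time_left, appraisal):
    ranked = primary_filter(beliefs, primary_emotion)
    if time_left > 1.0:  # illustrative deadline threshold
        ranked = secondary_refine(ranked, appraisal)
    return ranked[0]

beliefs = [
    {"name": "tile-nearby", "tag": "desire", "priority": 0.4},
    {"name": "hole-ahead", "tag": "fear", "priority": 0.3},
]
# Under time pressure only the fast pass runs, and the fear-tagged
# belief wins despite its lower base priority.
print(decide(beliefs, "fear", 0.5, lambda b: b["priority"])["name"])  # hole-ahead
```

The design point is that the primary pass is a cheap reordering, so the agent always gets some emotionally-informed ranking even when the deliberative appraisal cannot run.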
The field of Artificial Intelligence has long neglected the role of emotions in human cognition, with a few notable exceptions. This neglect has been motivated in part by the assumption that emulating human rationality in a machine is sufficient for attaining general human-level intelligence. This paper reviews neuroscientific results, accumulated consistently over more than a decade, providing empirical evidence that emotion mechanisms in the brain play a fundamental role in decision-making processes as well as in cognitive regulation. Moreover, this role holds regardless of whether the subject is aware of any emotion. These mechanisms are particularly important in social contexts: lesions in the pathways supporting them provoke serious impairments in social behavior. For instance, subjects with lesions in the pathways between the orbitofrontal cortex and the amygdala are no longer able to sustain a healthy social life, despite their intact intellectual capabilities. Strikingly, these patients can even verbally describe what the proper social behavior would be, yet are unable to follow it. One important mechanism in social contexts is empathy, which is fundamental for proper social relations; it has been proposed that empathy is founded on mechanisms analogous to mirror neurons.
Through a multi-agent system composed of emotionally enhanced sets of agents, we investigate how emotion plays a role in inter-agent communication, cooperation, goal achievement, and perception. The behaviors of each agent are derived from its perception of the environment, its goals, knowledge gained from other agents, and its own current emotional state, which is modeled with fuzzy sets. The state of each interacting agent is determined by its level of frustration combined with its interaction with other agents. The set of actions an agent may perform, including its ability to sense and understand the environment, is limited by its current emotional state. In this work we focus on analyzing the interaction of agents and examine the grouping effect that arises from inter-agent communications combined with intra-agent emotional status as the agents perform the task required by the environment. The environment is based on previous work on navigational map learning by context chaining and abstraction.
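An emotional state built on fuzzy sets, as described above, can be illustrated with a small sketch. The membership functions, category names, and thresholds below are assumptions for illustration, not the authors' model:

```python
# Sketch of a fuzzy-set emotional state: a crisp frustration level in
# [0, 1] is mapped to graded memberships in overlapping emotional
# categories, which can then gate an agent's available actions.
# Membership shapes and category names are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def emotional_state(frustration):
    # Overlapping fuzzy categories over the frustration scale.
    return {
        "calm":       tri(frustration, -0.5, 0.0, 0.5),
        "annoyed":    tri(frustration, 0.2, 0.5, 0.8),
        "frustrated": tri(frustration, 0.5, 1.0, 1.5),
    }

state = emotional_state(0.6)
# At frustration 0.6 the agent is mostly 'annoyed' but already partly
# 'frustrated'; both memberships are nonzero, which is the point of
# using fuzzy sets rather than crisp thresholds.
print(max(state, key=state.get))  # annoyed
```

Graded membership lets the gating be gradual: an action can be attenuated in proportion to a membership value instead of being switched off at a hard cutoff.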
Recent research has demonstrated that emotion plays a key role in human decision making. Across a wide range of disciplines, old concepts, such as the classical ``rational actor'' model, have given way to more nuanced models (e.g., the frameworks of behavioral economics and emotional intelligence) that acknowledge the role of emotions in analyzing human actions. We now know that context, framing, and emotional and physiological state can all drastically influence decision making in humans. Emotions serve an essential, though often overlooked, role in our lives, thoughts, and decisions. However, it is not clear how and to what extent emotions should impact the design of artificial agents, such as social robots. In this paper I argue that enabling robots, especially those intended to interact with humans, to sense and model emotions will improve their performance across a wide variety of human-interaction applications. I outline two broad research topics (affective inference and learning from affect) towards which progress can be made to enable ``affect-aware'' robots, and give a few examples of applications in which robots with these capabilities may outperform their non-affective counterparts. By identifying these important problems, both necessary for fully affect-aware social robots, I hope to clarify terminology, assess the current research landscape, and provide goalposts for future research.