Emotions exert profound influences on cognition in biological agents. This is particularly evident in decision-making. All of the processes mediating decision-making are affected by emotion: attention, perception, situation assessment, goal management and action selection, as well as the associated memory processes. Emotion effects, and the associated affective decision biases and heuristics, can be adaptive or maladaptive, depending on their type, magnitude and context. For example, anxiety and fear are associated with preferential processing of high-threat stimuli. This is highly adaptive in situations where survival depends on quick detection of danger and an appropriate reaction (e.g., avoiding an approaching car that has swerved into your lane). The same bias can be maladaptive if neutral stimuli are judged to be threatening (e.g., a passing car is assumed to be on a collision course and causes you to swerve into a ditch).
Appraisal processes provide an affective assessment of an agent's current situation, in light of its needs and goals. This paper describes a computational model of the appraisal process, implemented within the broader context of a cognitive agent architecture. A particular focus here is on modeling the interacting influences of states and traits on perception and cognition, including their effects on the appraisal process itself. These effects are modeled by manipulating a series of architecture parameters, such as the speed and processing capacity of the individual modules. The paper presents results of an evaluation experiment modeling the behavior of three types of agents: 'normal', 'anxious', and 'aggressive'. The appraisal model generated different affective appraisals of the same set of external circumstances for the different agent types, resulting in distinct emotions, and eventually leading to observable differences in behavior. The paper concludes with a brief discussion of some of the issues encountered during the appraisal model development.
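The general approach of encoding state and trait effects as architecture parameters can be illustrated with a minimal sketch. The parameter names, values, and agent profiles below are illustrative assumptions, not the paper's actual parameter set; the sketch only shows how a reduced attention capacity combined with a threat-processing bias could lead different agent types to attend to different cues in the same situation.

```python
from dataclasses import dataclass

# Hypothetical parameter set; names and values are assumptions
# chosen to illustrate the parameter-manipulation approach.
@dataclass
class ArchitectureParams:
    attention_capacity: int  # max cues processed per cycle
    threat_bias: float       # salience multiplier for threatening cues

# Illustrative profiles for the three agent types from the experiment.
PROFILES = {
    "normal":     ArchitectureParams(attention_capacity=4, threat_bias=1.0),
    "anxious":    ArchitectureParams(attention_capacity=2, threat_bias=2.5),
    "aggressive": ArchitectureParams(attention_capacity=3, threat_bias=1.8),
}

def attend(cues, params):
    """Rank cues by threat-weighted salience and keep the top ones
    that fit within the agent's attention capacity."""
    ranked = sorted(
        cues,
        key=lambda c: c["salience"] * (params.threat_bias if c["threat"] else 1.0),
        reverse=True,
    )
    return [c["name"] for c in ranked[:params.attention_capacity]]

cues = [
    {"name": "approaching car", "salience": 0.5, "threat": True},
    {"name": "road sign",       "salience": 0.7, "threat": False},
    {"name": "pedestrian",      "salience": 0.6, "threat": False},
    {"name": "horn blast",      "salience": 0.4, "threat": True},
]

# The anxious profile's narrowed capacity and threat bias cause it to
# attend only to the threatening cues, unlike the normal profile.
print(attend(cues, PROFILES["anxious"]))  # ['approaching car', 'horn blast']
print(attend(cues, PROFILES["normal"]))   # sees all four cues, led by salience
```

Downstream appraisal over this biased cue set would then yield the type-specific emotions and behavioral differences described above.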
The ability to regulate one's affective state and to induce an affective state in another person is a key component of social and emotional interaction. In this paper we describe an architecture design aimed at modeling these two closely related processes. We first define the high-level functional components of each process, and then use these to refine the specific knowledge necessary for their implementation. Based on this analysis we propose a cognitive architecture capable of modeling affect regulation in the self, and affect induction in the other. We illustrate the functioning of this architecture by way of an example from a peacekeeping training simulation. The architecture is currently being implemented in the context of this simulation training task.
A key challenge in creating simulated agents is to produce sufficiently realistic behavior. A critical component of such realism is the range of variations in behaviors exhibited by humans. Whether these be 'leaps of genius', surprising reactions, specific biases, suboptimal behaviors, or simply errors, these inconsistencies and idiosyncrasies are quintessential human qualities. These variations are due to a variety of factors, including varying levels of intelligence and skill, differences in cognitive and decision-making styles, personality differences, and differences in specific affective states and moods. Collectively, these factors are termed individual differences.
The past 15 years have witnessed a rapid growth in computational models of emotion and affective architectures. Researchers in cognitive science, AI, HCI, robotics, and gaming are developing 'models of emotion' for theoretical research regarding the nature of emotion, as well as for a range of applied purposes: to create more believable and effective synthetic characters and robots, and to enhance human-computer interaction. Yet in spite of the many stand-alone emotion models, and the numerous affective agent and robot architectures developed to date, there is a lack of consistency, and a lack of clarity, regarding what exactly it means to 'model emotions'. 'Emotion modeling' can mean the dynamic generation of emotion via black-box models that map specific stimuli onto associated emotions. It can mean generating facial expressions, gestures, or movements depicting specific emotions in synthetic agents or robots.