This special track will serve as a forum uniting researchers from the interdisciplinary arena that encompasses computer science, engineering, HCI, psychology, and education to exchange ideas, frameworks, methods, and tools relating to affective computing (AC). Although the last decade has been rich in theory and applications relevant to AC, these advances are accompanied by a new set of challenges. By providing a framework in which to discuss and evaluate novel research, we hope to leverage recent advances to speed up future research in this area.
Our research has contributed to: (1) Designing new ways for people to communicate affective-cognitive states, especially through the creation of novel wearable sensors and new machine learning algorithms that jointly analyze multimodal channels of information; (2) Creating new techniques to assess frustration, stress, and mood indirectly, through natural interaction and conversation; (3) Showing how computers can be made more emotionally intelligent, especially by responding to a person's frustration in a way that reduces negative feelings; (4) Inventing personal technologies that improve self-awareness of affective state and support its selective communication to others; (5) Increasing understanding of how affect influences personal health; and (6) Pioneering studies examining ethical issues in affective computing.
Recent research has demonstrated that emotion plays a key role in human decision making. Across a wide range of disciplines, older concepts, such as the classical ``rational actor'' model, have given way to more nuanced models (e.g., the frameworks of behavioral economics and emotional intelligence) that acknowledge the role of emotions in analyzing human actions. We now know that context, framing, and emotional and physiological state can all drastically influence human decision making. Emotions serve an essential, though often overlooked, role in our lives, thoughts, and decisions. However, it is not clear how and to what extent emotions should influence the design of artificial agents, such as social robots. In this paper I argue that enabling robots, especially those intended to interact with humans, to sense and model emotions will improve their performance across a wide variety of human-interaction applications. I outline two broad research topics (affective inference and learning from affect) toward which progress can be made to enable ``affect-aware'' robots, and I give a few examples of applications in which robots with these capabilities may outperform their non-affective counterparts. By identifying these important problems, both of which are necessary for fully affect-aware social robots, I hope to clarify terminology, assess the current research landscape, and provide goalposts for future research.