Spaulding

AAAI Conferences

Recent research has demonstrated that emotion plays a key role in human decision making. Across a wide range of disciplines, old concepts, such as the classical "rational actor" model, have fallen out of favor, replaced by more nuanced models (e.g., the frameworks of behavioral economics and emotional intelligence) that acknowledge the role of emotions in analyzing human actions. We now know that context, framing, and emotional and physiological state can all drastically influence decision making in humans. Emotions serve an essential, though often overlooked, role in our lives, thoughts, and decisions. However, it is not clear how and to what extent emotions should impact the design of artificial agents, such as social robots. In this paper I argue that enabling robots, especially those intended to interact with humans, to sense and model emotions will improve their performance across a wide variety of human-interaction applications. I outline two broad research topics (affective inference and learning from affect) towards which progress can be made to enable "affect-aware" robots, and give a few examples of applications in which robots with these capabilities may outperform their non-affective counterparts. By identifying these important problems, both necessary for fully affect-aware social robots, I hope to clarify terminology, assess the current research landscape, and provide goalposts for future research.
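As a rough illustration of the second topic, the sketch below treats a sensed affect score as a scalar reward for choosing among social actions. It is not from the paper; every name in it (sense_affect, choose_action, the action list) is a hypothetical placeholder for what a real affect-recognition pipeline and behavior policy would provide.

```python
# Minimal sketch of "learning from affect": a sensed affect score is used as
# a reward signal to shape action selection. All names are hypothetical.
import random

ACTIONS = ["tell_joke", "offer_help", "stay_quiet"]
value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's affective payoff
counts = {a: 0 for a in ACTIONS}

def sense_affect() -> float:
    """Stand-in for affective inference; returns a score in [-1, 1].
    A real system would estimate this from face, voice, or posture."""
    return random.uniform(-1.0, 1.0)

def choose_action(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice over estimated affective reward."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(value, key=value.get)

for _ in range(100):                 # interaction loop
    action = choose_action()
    reward = sense_affect()          # affective inference supplies the reward
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # incremental mean
```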


A Simulated Emotional Expression Robot (SEER)

#artificialintelligence

"SEER" is a compact humanoid robot developed as results of deep research on gaze and human facial expression. The robot is able to focus the gaze directions on a certain point, without being fooled by the movement of the neck. As a result, the robot seems as if it has its own intentions in following and paying attention to its surrounding people and environment. Using a camera censor, whilst tracking eyes it has interactive gaze. In addition, by drawing the curve of the eyebrow using soft elastic wire, I were able to enrich the expression of the robot as if it lives with emotions.


DeepMind's new robots learned how to teach themselves

#artificialintelligence

The minute hand on the robot apocalypse clock just inched a little closer to midnight. DeepMind, the Google sister company responsible for the smartest AI on the planet, just taught machines how to figure things out for themselves.


Erdem

AAAI Conferences

We propose the use of causality-based formal representation and automated reasoning methods to endow multiple teams of robots in a factory with high-level cognitive capabilities, such as optimal planning and diagnostic reasoning. We introduce algorithms for finding optimal decoupled plans and for diagnosing the cause of a failure or discrepancy (e.g., robots may break down or tasks may be reassigned to other teams). We discuss how these algorithms can be embedded in an execution and monitoring framework, and show their applicability in an intelligent painting factory scenario.
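The execute-monitor-diagnose-replan cycle the abstract describes can be sketched in a few lines. The version below is a hypothetical stand-in, not the authors' implementation: trivial placeholder functions take the place of the causality-based planner and diagnoser.

```python
# Condensed sketch of an execution-and-monitoring loop: execute a plan,
# compare observations against predictions, diagnose any discrepancy,
# and replan from the observed state. All logic here is a placeholder.

def plan(position, goal):
    """Placeholder for the optimal decoupled planner (e.g., a causal/ASP solver):
    one 'move' action per unit step toward the goal."""
    step = 1 if goal > position else -1
    return ["move"] * abs(goal - position), step

def execute(position, step, broken):
    """A broken actuator (the fault diagnosed below) fails to move."""
    return position if broken else position + step

def run(position, goal):
    actions, step = plan(position, goal)
    broken = True                        # inject a fault to exercise diagnosis
    while actions:
        actions.pop(0)
        expected = position + step       # predicted effect of the action
        position = execute(position, step, broken)
        if position != expected:         # monitor: observation vs. prediction
            print("discrepancy: diagnosing...")
            broken = False               # diagnose + repair (stand-in for the causal reasoner)
            actions, step = plan(position, goal)  # replan from the observed state
    print("goal reached at", position)

run(position=0, goal=3)
```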


Strong AI

#artificialintelligence

Strong Artificial Intelligence (AI) is a type of machine intelligence that is equivalent to human intelligence. Key characteristics of strong AI include the ability to reason, solve puzzles, make judgments, plan, learn, and communicate. It should also have consciousness, objective thoughts, self-awareness, sentience, and sapience. Strong AI is also called True Intelligence or Artificial General Intelligence (AGI). Strong AI does not currently exist.