Simulation of Human Behavior


CES 2021: LG's press conference featured a virtual person presenting

USATODAY - Tech Top Stories

Typically, the presenters at a CES press conference don't get a lot of attention, but LG's virtual presenter proved an exception. Wearing a pink hooded sweatshirt with the phrase "Stay punk forever," Reah Keem was among the presenters highlighting LG's offerings, ranging from appliances to personal technology. LG describes her as a "virtual composer and DJ made even more human through deep learning technology." Keem was there to introduce the LG CLOi robot, which can disinfect high-traffic areas using ultraviolet light. You can watch Reah make her debut during LG's press conference Monday morning, at roughly the 22-minute mark.


A 'virtual human' presented some of LG's CES event

Engadget

CES has gone all-digital for the first time this year. Unsurprisingly, some companies are using the format change to experiment with their live-streamed press conferences. LG, for instance, used an entirely virtual human called Reah Keem to promote some of its products today. Sporting a hoodie with the slogan "stay punk forever," she explained that travel was an important part of her life and that she was desperate to roam the world and perform once again. Keem used those wishes to transition into the LG CLOi UV-C Robot, an already-announced machine that uses ultraviolet light to disinfect busy public areas.


Overcome your cognitive biases with this set of online classes

Mashable

TL;DR: The Mastering Cognitive Biases Bundle is on sale for £22.16 as of Jan. 4, saving you 96% on the list price. As humans, we're forced to make lots of decisions on a daily basis. And while we may think we make every single decision based on facts, logic, and reasoning, there's a lot more to it than that. As it turns out, we kind of suck at the whole decision-making process due to our own cognitive biases. First described by two dudes named Daniel Kahneman and Amos Tversky in the 1970s, cognitive biases are basically mental shortcuts or rules of thumb that simplify the decision-making process.


The Human Effect Requires Affect: Addressing Social-Psychological Factors of Climate Change with Machine Learning

arXiv.org Artificial Intelligence

Machine learning has the potential to aid in mitigating the human effects of climate change. Previous applications of machine learning to the human side of climate change include informing individuals of their carbon footprint and suggesting strategies to reduce it. For these methods to be most effective, they must consider relevant social-psychological factors for each individual. Among the social-psychological factors at play in climate change, affect has previously been identified as a key element in perceptions of climate change and in willingness to engage in mitigative behaviours. In this work, we propose an investigation into how affect could be incorporated to enhance machine learning based interventions for climate change. We propose using affective agent-based modelling for climate change, as well as a simulated climate change social dilemma, to explore the potential benefits of affective machine learning interventions. Behavioural and informational interventions can be a powerful tool for helping humans adopt mitigative behaviours. We expect that utilizing affective ML can make such interventions even more powerful and help mitigative behaviours become widely adopted.
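
To make the proposal concrete, here is a minimal sketch of what an affective agent-based model might look like: a population of agents whose scalar affect state shifts under an intervention and modulates the probability of adopting a mitigative behaviour. All names, parameters, and dynamics below are illustrative assumptions, not the paper's actual model.

```python
import random

class AffectiveAgent:
    """Toy agent whose affect (a scalar in [-1, 1]) modulates its
    willingness to adopt a climate-mitigative behaviour."""

    def __init__(self, affect=0.0, base_willingness=0.2):
        self.affect = affect                  # negative = worried, positive = complacent
        self.base_willingness = base_willingness
        self.mitigating = False

    def receive_intervention(self, affect_shift):
        # An informational/behavioural intervention nudges affect, clamped to [-1, 1].
        self.affect = max(-1.0, min(1.0, self.affect + affect_shift))

    def step(self):
        # Worry (negative affect) raises the chance of adopting mitigation.
        p_adopt = self.base_willingness + 0.5 * max(0.0, -self.affect)
        self.mitigating = random.random() < p_adopt

# Simulate a population receiving an affect-targeted intervention.
agents = [AffectiveAgent(affect=random.uniform(-0.5, 0.5)) for _ in range(1000)]
for agent in agents:
    agent.receive_intervention(affect_shift=-0.3)  # intervention increases concern
    agent.step()
print(sum(a.mitigating for a in agents) / len(agents))  # adoption rate
```

A simulated social dilemma along the lines the authors propose could then compare adoption rates with and without the affect-targeted intervention.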


Lincoln Laboratory establishes Biotechnology and Human Systems Division

#artificialintelligence

MIT Lincoln Laboratory has established a new research and development division, the Biotechnology and Human Systems Division. The division will address emerging threats to both national security and humanity. Research and development will encompass advanced technologies and systems for improving chemical and biological defense, human health and performance, and global resilience to climate change, conflict, and disasters. "We strongly believe that research and development in biology, biomedical systems, biological defense, and human systems is a critically important part of national and global security. The new division will focus on improving human conditions on many fronts," says Eric Evans, Lincoln Laboratory director.


The Unnoticed Cognitive Bias Secretly Shaping the AI Agenda

#artificialintelligence

Written by Camylle Lanteigne (@CamLante), who's currently pursuing a Master's in Public Policy at Concordia University and whose work on social robots and empathy has been featured on Vox. This explainer was written in response to colleagues' requests to know more about temporal bias in AI ethics. It begins with a refresher on cognitive biases, then dives into how humans understand time: time preferences, present-day preference, confidence changes, planning fallacies, and hindsight bias. Bias is a really big topic, but I'll try to succinctly define one subsection of it, implicit cognitive bias, in a way that is particularly useful for AI ethics. Humans have cognitive biases, which means every one of us, to varying degrees, holds beliefs and impressions that are not backed up by fleshed-out reasoning or evidence, or that we never bothered questioning in the first place.¹
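
Since the explainer covers time preferences and present-day preference, a small worked example may help. A standard way to formalize present bias is quasi-hyperbolic (beta-delta) discounting; the sketch below, with parameter values of my choosing, shows the characteristic preference reversal.

```python
def discounted_value(reward, delay, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) discounting: any delayed reward takes a
    one-off penalty beta on top of exponential discounting delta**delay.
    beta < 1 captures present-day preference; parameter values are illustrative."""
    return reward if delay == 0 else beta * reward * delta ** delay

# A present-biased chooser (beta = 0.7) takes 10 today over 12 tomorrow...
print(discounted_value(10, 0) > discounted_value(12, 1))                    # True: 10 > ~7.98
# ...but flips to the larger-later reward once both options are a year away.
print(discounted_value(10, 365) < discounted_value(12, 366))                # True: reversal
# A time-consistent chooser (beta = 1) makes the same choice at both horizons.
print(discounted_value(10, 0, beta=1) > discounted_value(12, 1, beta=1))    # False
```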


A General Context-Aware Framework for Improved Human-System Interactions

AI Magazine

For humans and automation to effectively collaborate and perform tasks, all participants need access to a common representation of potentially relevant situational information, or context. This article describes a general framework for building context-aware interactive intelligent systems that comprises three major functions: (1) capture human-system interactions and infer implicit context; (2) analyze and predict user intent and goals; and (3) provide effective augmentation or mitigation strategies to improve performance, such as delivering timely, personalized information and recommendations, adjusting levels of automation, or adapting visualizations. Our goal is to develop an approach that enables humans to interact with automation more intuitively and naturally, and that is reusable across domains, by modeling context and algorithms at a higher level of abstraction. We first provide an operational definition of context and discuss challenges and opportunities for exploiting context. We then describe our current work towards a general platform that supports developing context-aware applications in a variety of domains.
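
As a rough illustration of how the three functions might compose into a pipeline, here is a hedged sketch; the Context schema and the class and method names are assumptions for exposition, not the authors' actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared representation of situational information (hypothetical schema)."""
    interactions: list = field(default_factory=list)
    inferred: dict = field(default_factory=dict)

class ContextAwareSystem:
    """Illustrative composition of the article's three functions."""

    def capture(self, context, event):
        # (1) Capture human-system interactions and infer implicit context.
        context.interactions.append(event)
        context.inferred["recent_activity"] = event["type"]
        return context

    def predict_intent(self, context):
        # (2) Analyze the interaction history to predict user intent and goals.
        return "search" if context.inferred.get("recent_activity") == "query" else "browse"

    def augment(self, intent):
        # (3) Choose an augmentation/mitigation strategy, e.g. personalized
        # recommendations or an adjusted level of automation.
        return {"search": "show related results", "browse": "suggest starting points"}[intent]

system = ContextAwareSystem()
ctx = system.capture(Context(), {"type": "query", "text": "flight status"})
print(system.augment(system.predict_intent(ctx)))  # -> "show related results"
```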


Natural Language Understanding (NLU, not NLP) in Cognitive Systems

AI Magazine

Developing cognitive agents with human-level natural language understanding (NLU) capabilities requires modeling human cognition because natural, unedited utterances regularly contain ambiguities, ellipses, production errors, implicatures, and many other types of complexities. Moreover, cognitive agents must be nimble in the face of incomplete interpretations, since even people do not perfectly understand every aspect of every utterance they hear. So, once an agent has reached the best interpretation it can, it must determine how to proceed: acting upon the new information directly, remembering an incomplete interpretation and waiting to see what happens next, seeking out information to fill in the blanks, or asking its interlocutor for clarification. The reasoning needed to support NLU extends far beyond language itself, including, non-exhaustively, the agent's understanding of its own plans and goals; its dynamic modeling of its interlocutor's knowledge, plans, and goals, all guided by a theory of mind; its recognition of diverse aspects of human behavior, such as affect, cooperative behavior, and the effects of cognitive biases; and its integration of linguistic interpretations with its interpretations of other perceptive inputs, such as simulated vision and non-linguistic audition. Considering all of these needs, it seems hardly possible that fundamental NLU will ever be achieved through the kinds of knowledge-lean text-string manipulation being pursued by the mainstream natural language processing (NLP) community. Instead, it requires a holistic approach to cognitive modeling of the type we are pursuing in a paradigm called OntoAgent.
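
The four ways of proceeding that the abstract enumerates can be pictured as a dispatch on interpretation confidence. The sketch below is a toy illustration under that reading; the thresholds and function signature are my assumptions, not OntoAgent's actual mechanism.

```python
def proceed(interpretation, confidence, can_look_up):
    """Toy dispatch over the four options the abstract lists; thresholds
    are illustrative."""
    if confidence > 0.9:
        return f"act on: {interpretation}"          # act on the information directly
    if confidence > 0.6:
        return f"store and wait: {interpretation}"  # remember it, see what happens next
    if can_look_up:
        return f"look up missing details for: {interpretation}"  # seek information
    return "ask interlocutor for clarification"     # pose a clarification question

print(proceed("turn on the lamp", 0.95, can_look_up=False))   # act
print(proceed("move it over there", 0.4, can_look_up=False))  # ask for clarification
```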


CNN's Don Lemon claims Trump voters must have 'cognitive dissonance' to support such a 'bad person'

FOX News

Don Lemon reacts to President Trump's RNC speech, points blame at Trump voters. CNN anchor Don Lemon went after Trump voters yet again following the president's speech at the Republican National Convention Thursday, saying they must suffer from "cognitive dissonance" to support someone Lemon described as a "bad person." Lemon's colleague Chris Cuomo had told him that the president's supporters had concluded that despite Trump's flaws, "Joe Biden will be worse" for the country. Cuomo theorized that Trump's voters are willing to "forgive" Trump's wrongdoings rather than vote for the Democrat. "I think you're letting them off easy," Lemon responded.


Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration

arXiv.org Artificial Intelligence

Common experience suggests that agents who know each other well are better able to work together. In this work, we address the problem of calibrating intention and capabilities in human-robot collaboration. In particular, we focus on scenarios where the robot is attempting to assist a human who is unable to directly communicate her intent. Moreover, both agents may have differing capabilities that are unknown to one another. We adopt a decision-theoretic approach and propose the TICC-POMDP for modeling this setting, with an associated online solver. Experiments show our approach leads to better team performance both in simulation and in a real-world study with human subjects.
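
For readers unfamiliar with the machinery, the core of any POMDP-based approach like this is maintaining a belief, a probability distribution over the hidden state, which here plausibly includes the human's intent and capability, and updating that belief from observations. The sketch below shows a generic Bayes belief update under that assumed state structure; it is not the TICC-POMDP itself or its online solver.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """Hidden state: the human's intent and capability, unknown to the robot."""
    intent: str       # e.g. which object the human wants
    capable: bool     # whether the human can do the subtask alone

def update_belief(belief, action, observation, obs_model):
    """Standard Bayes filter over hidden states, with obs_model(o, s, a) = P(o | s, a)."""
    new_belief = {s: p * obs_model(observation, s, action) for s, p in belief.items()}
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()} if total else belief

# Uniform prior over two intents x two capability levels, refined by a noisy observation.
states = [State(i, c) for i in ("cup", "plate") for c in (True, False)]
belief = {s: 1 / len(states) for s in states}
obs_model = lambda o, s, a: 0.8 if o == s.intent else 0.2  # toy sensor model
belief = update_belief(belief, action="observe", observation="cup", obs_model=obs_model)
print(belief)  # probability mass shifts toward the "cup" intent
```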