Simulation of Human Behavior


Banks' secret to the best AI? Embracing their humanity - Industrious

#artificialintelligence

Have you used your bank's customer service app in the past year, or received an unexpected email with an offer you were actually interested in? Maybe it was a well-timed mortgage re-fi or even some savings at a favorite store. There's an enduring and unfortunate misperception that AI serves only to replace human workers and irritate human clients. But what if we don't have to be locked in a zero-sum game with the growing legion of digital intelligences? An increasing number of banks and insurers are finding that it's almost impossible to meet the rising customer expectations and needs that digital services and apps have unleashed.


An Objective Laboratory Protocol for Evaluating Cognition of Non-Human Systems Against Human Cognition

arXiv.org Artificial Intelligence

It is virtually impossible to tease apart human capabilities from human cultural and other background knowledge, so accounting for that knowledge is necessary to provide an objective point of comparison against humans. Furthermore, a comprehensive grasp of human background knowledge, sufficient not only to recall but to apply that knowledge, tests the cognitive capabilities essential to the human kind of understanding. I have recommended that human respondents be drawn from broad populations to ensure that this cultural knowledge is least-common-denominator rather than esoteric. The graders might be able to tell that they are scoring a non-human subject system. Difficulties with the Turing Test have demonstrated that this is probably not an issue: it is relatively easy to fool humans into thinking they are interacting with a human, even without human-level cognitive capabilities. Mimicking human interaction styles, though again not necessarily a goal of the subject system, should not be difficult for a system with cognition comparable to that of humans. Nevertheless, the reason the protocol attempts to disguise which respondents are human or non-human is not because this contributes to the evaluation, but merely to avoid implicit bias in scoring. All the test questions are raster images - does this mean the system has to do handwriting recognition?


Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions

arXiv.org Artificial Intelligence

Round-the-clock monitoring of human behavior and emotions is required in many healthcare applications; it is very expensive but can be automated using machine learning (ML) and sensor technologies. Unfortunately, the lack of infrastructure for collection and sharing of such data is a bottleneck for ML research applied to healthcare. Our goal is to circumvent this bottleneck by simulating a human body in a virtual environment. This will allow generation of potentially infinite amounts of shareable data from an individual as a function of their actions, interactions and emotions in a care facility or at home, with no risk of confidentiality breach or privacy invasion. In this paper, we develop for the first time a system consisting of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals as a function of an open-ended set of actions and emotions. Our experimental evaluations, involving user studies, benchmark datasets and comparison to findings in the literature, show that our models can generate skeletal motion and physiological signals with high fidelity. The proposed framework is modular and allows the flexibility to experiment with different models. In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data, and may be used as a training tool for ML practitioners and healthcare professionals.
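The modular design the abstract describes — independent generators that map an (action, emotion) pair to synchronized signals — can be pictured with a toy sketch. The baseline heart and respiration rates, the action/emotion effects, and the sinusoidal respiration shape below are all invented for illustration; they are not the paper's calibrated models.

```python
import math

# Toy baselines per emotion: (heart rate in bpm, respiration in breaths/min).
# Illustrative values only, not the paper's models.
EMOTION_BASELINES = {"neutral": (70, 14), "stress": (95, 20)}
ACTION_HR_BOOST = {"sitting": 0, "walking": 15}

def synth_ecg_rr_intervals(action, emotion, seconds=10):
    """Return a list of R-R intervals (seconds) for a synthetic ECG stream."""
    hr = EMOTION_BASELINES[emotion][0] + ACTION_HR_BOOST[action]
    rr = 60.0 / hr
    intervals, t = [], 0.0
    while t + rr <= seconds:
        intervals.append(rr)
        t += rr
    return intervals

def synth_respiration(action, emotion, seconds=10, fs=4):
    """Synchronously sampled sinusoidal respiration signal at fs Hz."""
    rate_hz = EMOTION_BASELINES[emotion][1] / 60.0
    n = seconds * fs
    return [math.sin(2 * math.pi * rate_hz * i / fs) for i in range(n)]

ecg = synth_ecg_rr_intervals("walking", "stress", seconds=10)
resp = synth_respiration("walking", "stress", seconds=10)
print(len(ecg), "beats in 10 s;", len(resp), "respiration samples")
```

Because each generator is a separate function keyed by the same (action, emotion) input, any one of them can be swapped for a learned model without touching the others — the flexibility the abstract attributes to the framework's modularity.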


A Simple Way to Reduce Cognitive Bias - Facts So Romantic

Nautilus

Would you like to be more rational? Who doesn't want to behave and think more reasonably? Good news: New research, from Harvard psychologist Ellen Langer, suggests mindfulness, or at least an aspect of it, can help. By "mindfulness"--a feature of Buddhism for thousands of years, and a subject of scientific investigation for a few decades--most people mean a mental state you can be in. If you find yourself bringing past or possible future events into your imagination, let those drift off, and attend again to your present sensations, thoughts, and feelings. Being mindful for a few seconds is easy.


Choice Set Misspecification in Reward Inference

arXiv.org Artificial Intelligence

Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior. A promising alternative to manually specifying reward functions is to enable robots to infer them from human feedback, like demonstrations or corrections. To interpret this feedback, robots treat as approximately optimal a choice the person makes from a choice set, like the set of possible trajectories they could have demonstrated or possible corrections they could have made. In this work, we introduce the idea that the choice set itself might be difficult to specify, and analyze choice set misspecification: what happens as the robot makes incorrect assumptions about the set of choices from which the human selects their feedback. We propose a classification of different kinds of choice set misspecification, and show that these different classes lead to meaningful differences in the inferred reward and resulting performance. While we would normally expect misspecification to hurt, we find that certain kinds of misspecification are neither helpful nor harmful (in expectation). However, in other situations, misspecification can be extremely harmful, leading the robot to believe the opposite of what it should believe. We hope our results will allow for better prediction and response to the effects of misspecification in real-world reward inference.
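The inference step the abstract describes — treating the human's choice as approximately optimal within an assumed choice set — is commonly modelled with a Boltzmann-rational human, where each option's probability is proportional to the exponentiated reward it earns. The following minimal sketch shows how assuming the wrong choice set distorts the inferred reward; the linear reward, the candidate parameters, and the rationality coefficient `beta` are illustrative assumptions, not the paper's setup.

```python
import math

def boltzmann_posterior(choice, choice_set, thetas, beta=5.0):
    """Posterior over candidate reward parameters theta, assuming the human
    picks `choice` Boltzmann-rationally from `choice_set`.
    Reward of option c under theta is simply theta * c (a 1-D linear reward)."""
    likelihood = {}
    for theta in thetas:
        # P(choice | theta) = exp(beta * r) normalized over the choice set.
        z = sum(math.exp(beta * theta * c) for c in choice_set)
        likelihood[theta] = math.exp(beta * theta * choice) / z
    total = sum(likelihood.values())  # uniform prior over thetas
    return {t: p / total for t, p in likelihood.items()}

# Candidate rewards: does the human prefer high (+1) or low (-1) feature values?
thetas = [1.0, -1.0]

# The human actually chose from {0.0, 0.5} and picked 0.5 -- the best available.
true_set = [0.0, 0.5]
# The robot wrongly assumes 1.0 was also available, making 0.5 look mediocre.
wrong_set = [0.0, 0.5, 1.0]

print(boltzmann_posterior(0.5, true_set, thetas))   # mass concentrates on theta = +1
print(boltzmann_posterior(0.5, wrong_set, thetas))  # belief collapses toward 50/50
```

In this symmetric toy case the misspecification is merely uninformative (the posterior returns to 50/50); the paper's point is that other misspecification classes can be actively harmful, pushing belief toward the opposite of the true reward.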


CES 2021: LG's press conference featured a virtual person presenting

USATODAY - Tech Top Stories

Typically the presenters at a CES press conference don't get a lot of attention. Wearing a pink hooded sweatshirt with the phrase "Stay punk forever," Reah Keem was among presenters highlighting some of the offerings from LG, ranging from appliances to personal technology. LG describes her as a "virtual composer and DJ made even more human through deep learning technology." Keem was there to introduce the LG CLOi robot, which can disinfect high-traffic areas using ultraviolet light. You can watch Reah make her debut during LG's press conference Monday morning, at roughly the 22-minute mark.


A 'virtual human' presented some of LG's CES event

Engadget

CES has gone all-digital for the first time this year. Unsurprisingly, some companies are using the format change to experiment with their live-streamed press conferences. LG, for instance, used an entirely virtual human called Reah Keem to promote some of its products today. Sporting a hoodie with the slogan "stay punk forever," she explained that travel was an important part of her life, and how desperate she was to roam around the world and perform once again. Keem used those wishes to transition into the LG CLOi UV-C Robot, an already-announced machine that uses ultraviolet light to disinfect public and generally popular areas.


Adaptive Synthetic Characters for Military Training

arXiv.org Artificial Intelligence

Behaviors of the synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models with minimal intelligence. Such computational models cannot adapt to reflect the experience of the characters, resulting in brittle intelligence for even the most effective behavior models devised via costly and labor-intensive processes. Observation-based behavior model adaptation that leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge can address the issues in the existing computational behavior models to create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first component is a Unity-based simulation environment - Rapid Integration and Development Environment (RIDE) - supporting One World Terrain (OWT) models and capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments, and that can additionally utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture that will augment the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain as an essential step towards bringing machine learning into military simulations.


Overcome your cognitive biases with this set of online classes

Mashable

TL;DR: The Mastering Cognitive Biases Bundle is on sale for £22.16 as of Jan. 4, saving you 96% off the list price. As humans, we're forced to make lots of decisions on a daily basis. And while we may think we make every single decision based on facts, logic, and reasoning, there's a lot more to it than that. As it turns out, we kind of suck at the whole decision-making process due to our own cognitive biases. Described by two dudes named Daniel Kahneman and Amos Tversky in the 1970s, cognitive biases are basically mental shortcuts or rules that simplify the decision-making process.


The Human Effect Requires Affect: Addressing Social-Psychological Factors of Climate Change with Machine Learning

arXiv.org Artificial Intelligence

Machine learning has the potential to aid in mitigating the human effects of climate change. Previous applications of machine learning to tackle the human effects of climate change include approaches like informing individuals of their carbon footprint and strategies to reduce it. For these methods to be most effective, they must consider relevant social-psychological factors for each individual. Of the social-psychological factors at play in climate change, affect has previously been identified as a key element in perceptions and willingness to engage in mitigative behaviours. In this work, we propose an investigation into how affect could be incorporated to enhance machine learning based interventions for climate change. We propose using affective agent-based modelling for climate change as well as the use of a simulated climate change social dilemma to explore the potential benefits of affective machine learning interventions. Behavioural and informational interventions can be a powerful tool in helping humans adopt mitigative behaviours. We expect that utilizing affective ML can make interventions an even more powerful tool and help mitigative behaviours become widely adopted.
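The affective agent-based modelling the abstract proposes can be pictured with a deliberately minimal sketch: each agent carries an affect (concern) score that, together with peer influence, drives whether it adopts a mitigative behaviour. Every parameter and dynamic below — the affect weight, the uptake rate, the uniform affect distribution — is an illustrative assumption, not the paper's proposed model.

```python
import random

def simulate(n_agents=200, steps=50, affect_weight=0.6, seed=0):
    """Toy affective agent-based model: an agent's probability of adopting a
    mitigative behaviour rises with its affect (concern) and with the fraction
    of peers it already sees adopting. Returns the final adoption fraction."""
    rng = random.Random(seed)
    affect = [rng.random() for _ in range(n_agents)]  # concern in [0, 1]
    adopted = [False] * n_agents
    for _ in range(steps):
        frac = sum(adopted) / n_agents  # social signal from current adopters
        for i in range(n_agents):
            if not adopted[i]:
                p = affect_weight * affect[i] + (1 - affect_weight) * frac
                if rng.random() < 0.1 * p:  # slow per-step uptake
                    adopted[i] = True
    return sum(adopted) / n_agents

print(simulate())  # final fraction of agents adopting the mitigative behaviour
```

Varying `affect_weight` in such a model is one way to probe the abstract's claim that interventions accounting for affect could shift how widely mitigative behaviours are adopted.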