Simulation of Human Behavior

Bringing Augmented Reality to life with 'virtual humans' using Artificial Intelligence – the mission of Scanta

WRAL TechWire


Editor's note: This is the latest installment in an UpTech series of video interviews and accompanying transcripts about the emerging development and uses of Artificial Intelligence and Machine Learning; UpTech Report and WRAL TechWire are working together to publish this series. Alexander Ferguson is the founder and CEO of YourLocalStudio. Artificial intelligence, machine learning: These emerging technologies are changing the way we live, work, and do business in the world for the better. How is AI actually being applied in business today, though? In this episode of UpTech Report, I interview Chaitanya Hiremath, who also goes by Chad.

Door and Doorway Etiquette for Virtual Humans

PubMed - NCBI


We introduce a framework for simulating a variety of nontrivial, socially motivated behaviors that underlie the orderly passage of pedestrians through doorways, especially the common courtesy of opening and holding doors open for others, an important etiquette that has been overlooked in the literature on autonomous multi-human animation. Emulating such social activity requires serious attention to the interplay of visual perception, navigation in constrained doorway environments, manipulation of a variety of door types, and high-level decision making based on social considerations. To tackle this complex human simulation problem, we take an artificial life approach to modeling autonomous pedestrians, proposing a layered architecture comprising mental, behavioral, and motor layers. The behavioral layer couples two stages: (1) a decentralized, agent-based strategy for dynamically determining the well-mannered ordering of pedestrians around doorways, and (2) a state-based model that directs and coordinates a pedestrian's interactions with the door. The mental layer is a Bayesian network decision model that dynamically selects appropriate door holding behaviors by considering both internal and external social factors pertinent to pedestrians interacting with one another in and around doorways.
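The two-stage behavioral layer described above can be sketched in miniature: a probabilistic door-holding decision (a highly simplified stand-in for the paper's Bayesian network) feeding a small state machine that directs the pedestrian's interaction with the door. The factor names, thresholds, and weights below are illustrative assumptions, not the paper's actual model.

```python
# Toy sketch of the abstract's two-stage behavioral layer. The door-holding
# probabilities and the courtesy/hurry adjustments are invented for
# illustration; the paper uses a full Bayesian network decision model.

def hold_door_probability(follower_distance_m, is_acquaintance, in_a_hurry):
    """Combine social factors into a door-holding probability (toy model)."""
    p = 0.9 if follower_distance_m < 2.0 else 0.5 if follower_distance_m < 5.0 else 0.1
    if is_acquaintance:
        p = min(1.0, p + 0.2)   # courtesy bias toward people we know
    if in_a_hurry:
        p = max(0.0, p - 0.4)   # time pressure suppresses courtesy
    return p

# State-based model coordinating the pedestrian's interaction with the door.
TRANSITIONS = {
    ("approach", "reached_door"): "open",
    ("open", "decided_hold"):     "hold",
    ("open", "decided_pass"):     "pass_through",
    ("hold", "follower_passed"):  "pass_through",
    ("pass_through", "cleared"):  "done",
}

def step(state, event):
    """Advance the door-interaction state machine; unknown events are ignored."""
    return TRANSITIONS.get((state, event), state)
```

A pedestrian with a follower one meter behind would hold the door with high probability, then walk the `approach → open → hold → pass_through` path as events arrive.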

Towards a Quantum-Like Cognitive Architecture for Decision-Making Artificial Intelligence

We propose an alternative and unifying framework for decision-making that, by using quantum mechanics, provides more generalised cognitive and decision models with the ability to represent more information than classical models. This framework can accommodate and predict several cognitive biases reported by Lieder & Griffiths (L&G) without heavy reliance on heuristics or on assumptions about the computational resources of the mind. Expected utility theory and classical probabilities tell us what people should do if employing traditionally rational thought, but do not tell us what people do in reality (Machina, 2009). Under this principle, L&G propose an architecture for cognition that can serve as an intermediary layer between neuroscience and computation. Whilst instances where large expenditures of cognitive resources occur are theoretically alluded to, their model primarily assumes a preference for fast, heuristic-based processing.
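The core mechanism behind quantum-like cognitive models can be illustrated in a few lines: probabilities come from squared amplitudes, so judging an outcome without first resolving an intermediate question introduces an interference term that classical total probability lacks. The numbers and the single `phase` parameter below are illustrative, not taken from the paper.

```python
import cmath

# Toy illustration of quantum-like probability (not the paper's model).
# Classically, P(B) = P(A)P(B|A) + P(not A)P(B|not A). In the quantum-like
# version, amplitudes (square roots of probabilities) are summed first, and
# a relative phase between the two reasoning paths creates interference.

def classical_total(p_a, p_b_given_a, p_b_given_not_a):
    """Classical law of total probability."""
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

def quantum_total(p_a, p_b_given_a, p_b_given_not_a, phase):
    """Quantum-like total probability with an interference phase.

    phase = pi/2 makes the paths orthogonal and recovers the classical value;
    other phases over- or under-shoot it, mimicking judgment biases such as
    the disjunction effect.
    """
    amp = (cmath.sqrt(p_a) * cmath.sqrt(p_b_given_a)
           + cmath.exp(1j * phase) * cmath.sqrt(1 - p_a) * cmath.sqrt(p_b_given_not_a))
    return abs(amp) ** 2
```

The extra degree of freedom (the phase) is what lets these models "represent more information" than a classical distribution over the same events.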

Reward-Based Deception with Cognitive Bias Artificial Intelligence

Deception plays a key role in adversarial or strategic interactions for the purpose of self-defence and survival. This paper introduces a general framework and solution to address deception. Most existing approaches to deception consider obfuscating crucial information from rational adversaries with abundant memory and computation resources. In this paper, we consider deceiving adversaries with bounded rationality, framed in terms of expected rewards. This problem is commonly encountered in many applications, especially those involving human adversaries. Leveraging the cognitive bias of humans in reward evaluation under stochastic outcomes, we introduce a framework that optimally assigns a limited quantity of resources to defend against human adversaries. Modeling such cognitive biases follows the so-called prospect theory from the behavioral psychology literature. We then formulate the resource allocation problem as a signomial program to minimize the defender's cost in an environment modeled as a Markov decision process. We use police patrol hour assignment as an illustrative example and provide detailed simulation results based on real-world data.
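Prospect theory, the bias model this paper leverages, replaces objective payoffs and probabilities with a subjective value function and a probability weighting function. A minimal sketch, using the commonly cited Tversky–Kahneman (1992) parameter estimates for illustration:

```python
# Sketch of prospect-theory reward evaluation. The parameter defaults
# (alpha, loss_aversion, gamma) are the widely quoted 1992 median estimates,
# shown here for illustration only; the paper fits its own model.

def value(x, alpha=0.88, loss_aversion=2.25):
    """S-shaped value function: concave for gains, steeper and convex for losses."""
    return x ** alpha if x >= 0 else -loss_aversion * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes):
    """Subjective value of a lottery given (probability, payoff) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)
```

A defender can exploit exactly these distortions: because a bounded-rational adversary overweights rare events and feels losses more than equivalent gains, resource allocations that look suboptimal under expected utility can minimize the defender's cost against human opponents.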

Talespin's virtual human platform uses VR and AI to teach employees soft skills


Training employees how to perform specific tasks isn't difficult, but building their soft skills -- their interactions with management, fellow employees, and customers -- can be more challenging, particularly if there aren't people around to practice with. Virtual reality training company Talespin announced today that it is leveraging AI to tackle that challenge, using a new "virtual human platform" to create realistic simulations for employee training purposes. Unlike traditional employee training, which might consist of passively watching a video or lightly interacting with collections of canned multiple-choice questions, Talespin's system has a trainee interact with a virtual human powered by AI, speech recognition, and natural language processing. Because the interactions use VR headsets and controllers, the hardware can track a trainee's gaze, body movement, and facial expressions during the session. Talespin's virtual character is able to converse realistically, guiding trainees through branching narratives using natural mannerisms and believable speech.
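The branching-narrative structure described above can be sketched as a small dialogue graph, with keyword matching standing in for the platform's speech recognition and NLP. The scenario content and node names are invented for illustration; Talespin has not published an API.

```python
# Toy branching training narrative. The trainee's (already transcribed) reply
# is matched against keywords to choose the next node -- a crude stand-in for
# the natural language processing described in the article.

DIALOGUE = {
    "start": {
        "prompt": "An upset customer says their order arrived late.",
        "branches": {"apologize": "empathy", "policy": "defensive"},
    },
    "empathy": {
        "prompt": "The customer calms down. Offer a remedy.",
        "branches": {},
    },
    "defensive": {
        "prompt": "The customer escalates. Try acknowledging their feelings.",
        "branches": {},
    },
}

def next_node(node, reply):
    """Pick the next narrative node by keyword match; re-prompt if no match."""
    for keyword, target in DIALOGUE[node]["branches"].items():
        if keyword in reply.lower():
            return target
    return node  # unrecognized reply: stay on the current node
```

In the real platform the branch selection, delivery, and feedback are driven by the virtual human's AI rather than keyword lookup, but the underlying narrative graph is the same idea.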

Virtual Humans


There is an interesting move underway to establish a pan-European AI research federation - a sort of decentralised CERN for AI. From their website: "CLAIRE is an initiative by the European AI community that seeks to strengthen European excellence in AI research and innovation. To achieve this, CLAIRE proposes the establishment of a pan-European Confederation of Laboratories for Artificial Intelligence Research in Europe that achieves "brand recognition" similar to CERN." "The CLAIRE initiative aims to establish a pan-European network of Centres of Excellence in AI, strategically located throughout Europe, and a new, central facility with state-of-the-art, "Google-scale", CERN-like infrastructure – the CLAIRE Hub – that will promote new and existing talent and provide a focal point for exchange and interaction of researchers at all stages of their careers, across all areas of AI. The CLAIRE Hub will not be an elitist AI institute with permanent scientific staff, but an environment where Europe's brightest minds in AI meet and work for limited periods of time. This will increase the flow of knowledge among European researchers and back to their home institutions."

Detroit auto show models -- the human ones -- embrace their changing role in the #MeToo era

The Japan Times

DETROIT - Every year at the Detroit auto show, good-looking women -- and men -- are deployed by the carmakers to present their new vehicles. But with the shock wave created by the #MeToo movement still reverberating across the U.S., there are fewer auto show models of the human variety -- and they are not just pretty faces. The "product specialists" still have picture-perfect smiles, but they also can tick off the features of each car and prices with such assurance that the iPads they carry for reference can seem merely decorative. Auto companies are also making sure their fleet of specialists are ethnically and physically diverse. Perched on stilettos, Priscilla Tejeda is working for Toyota.

18 Cognitive Bias Examples Show Why Mental Mistakes Get Made


Out of the 188 cognitive biases that exist, there is a much narrower group of biases that has a disproportionately large effect on the ways we do business. These are things that affect workplace culture, budget estimates, deal outcomes, and our perceived return on investments within the company. Mental mistakes such as these can add up quickly, and can hamper any organization in reaching its full bottom-line potential. Today's infographic from Raconteur aptly highlights 18 different cognitive bias examples that can create particularly difficult challenges for company decision-making.

Financial biases: These are imprecise mental shortcuts we make with numbers, such as hyperbolic discounting – the mistake of preferring a smaller, sooner payoff instead of a larger, later reward.
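Hyperbolic discounting is easy to see with a worked example. One common model values a reward A delayed by D days as A / (1 + kD); the discount rate k = 0.05 per day below is an illustrative assumption.

```python
# Worked example of hyperbolic discounting, the financial bias named above.
# Present value of `amount` delayed by `delay_days`, with discount rate k.

def hyperbolic_value(amount, delay_days, k=0.05):
    return amount / (1 + k * delay_days)

# Smaller-sooner beats larger-later up close: $50 now vs $100 in 30 days.
now_small = hyperbolic_value(50, 0)      # 50.0
later_big = hyperbolic_value(100, 30)    # 40.0

# ...but pushing both options 300 days out reverses the preference -- the
# signature inconsistency of hyperbolic (vs. exponential) discounting.
far_small = hyperbolic_value(50, 300)    # ~3.13
far_big = hyperbolic_value(100, 330)     # ~5.71
```

The reversal is the "mistake": the same pair of rewards is ranked differently depending only on how far away both sit, which an exponential discounter would never do.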

12 Blind Spots in AI Research – Intuition Machine – Medium


Humans by their nature have many cognitive biases, and these can be detrimental to real scientific progress. Research tends to be biased in favor of approaches in which many experts have invested countless years of study. The consequence is that we ignore many intrinsic characteristics of the very system under study. Researchers can thus unfortunately spend a lifetime pursuing a wrong and pointless path. History is littered with research that in hindsight was discovered to be incorrect and therefore worthless.