
Simulation of Human Behavior

Artificial Intelligence: The End of Cognitive Biases


This post was originally featured on the 30SecondsToFly blog in 2017. Ready Player One, Ernest Cline's first novel (soon to be adapted for film by Steven Spielberg), portrays a futuristic dystopian society that has consumed all of Earth's energy resources and spends most of its time immersed in a virtual reality platform, the Oasis. Beyond the brilliant references and tributes to the '80s throughout the book, the author also makes some propositions about the role artificial intelligence will play in our lives in a (not so) distant future. In one of the book's chapters, an interaction between a customer and an IT assistant takes place inside the Oasis: instead of going to a physical store to complain about a product or a service, customers only have to put on their VR headsets and present their requests and complaints to a virtual assistant (operated from anywhere in the world by another human being).

Top 3 Factors That Fuel Human Cognitive Bias


Smart technology is already all around us. Whether we're fully aware of it or not, it tracks our digital footprint every step of the way. Location tracking, personalized ads, even keyboard word suggestions: there's always an algorithm behind them. We experience their influence in our lives often without realizing they're there. We feed the machines based on our knowledge, experience, and perceptions, and there's nothing wrong with that. But as humans, we're also naturally loaded with cognitive imperfections that influence our daily lives without us even noticing.

New generation of virtual humans helping to train psychologists

AITopics Original Links

"As this technology continues to improve, it will have a significant impact on how clinical training is conducted in psychology and medicine," said psychologist and virtual reality technology expert Albert "Skip" Rizzo, PhD, who demonstrated recent advancements in virtual reality for use in psychology. Virtual humans can now be highly interactive, artificially intelligent and capable of carrying on a conversation with real humans, according to Rizzo, a research scientist at the University of Southern California Institute for Creative Technologies. "This has set the stage for the 'birth' of intelligent virtual humans to be used in clinical training settings," he said. Rizzo showed videos of clinical psychiatry trainees engaging with virtual patients called "Justin" and "Justina." Justin is a 16-year-old with a conduct disorder who is being forced by his family to participate in therapy.

Decomposed Inductive Procedure Learning Artificial Intelligence

Recent advances in machine learning have made it possible to train artificially intelligent agents that perform with super-human accuracy on a great diversity of complex tasks. However, the process of training these capabilities often necessitates millions of annotated examples, far more than humans typically need to achieve a passing level of mastery on similar tasks. Thus, while contemporary methods in machine learning can produce agents that exhibit super-human performance, their rate of learning per opportunity in many domains is decidedly lower than that of human learning. In this work we formalize a theory of Decomposed Inductive Procedure Learning (DIPL) that outlines how different forms of inductive symbolic learning can be used in combination to build agents that learn educationally relevant tasks, such as mathematical and scientific procedures, at a rate similar to human learners. We motivate the construction of this theory around Marr's concepts of the computational, algorithmic, and implementation levels of cognitive modeling, and outline at the computational level six learning capacities that must be achieved to accurately model human learning. We demonstrate that agents built along the DIPL theory are amenable to satisfying these capacities, and show, both empirically and theoretically, that DIPL enables the creation of agents that exhibit human-like learning performance.

Scenario-based simulation: Combining HD maps and real-world traffic data - atlatec


If you work in the ADAS/Autonomous Vehicles field, you are probably familiar with HD maps: virtual recreations of real-world roads, including their 3D profile, driving rules, interconnectivity of lanes, and so on. Many of these HD maps go into the simulation domain, where car makers and suppliers leverage them to train new ADAS/AV systems or to verify and validate features in those domains. The reason to use HD maps of real-world roads (rather than generic, fictional routes created from scratch) is simple: in the end, you want your system to perform in the real world, so you want to optimize for real-world conditions as early as possible, starting in simulation. As we all know, the real world is nothing if not random, and you will encounter many situations you would rarely find in generic data sets. So far, so good: these HD maps can be used to properly train lane-keep assistance or lane-departure warning systems, validate speed limit sign detection, and much more. However, a map only contains the static features of an environment. What about ADAS/AV features that are supposed to react to other traffic participants?
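The combination the article describes (a static lane graph plus dynamic traffic participants placed on it) can be sketched as a data structure. This is a deliberately minimal, hypothetical model; real HD-map formats such as OpenDRIVE are far richer, with full 3D geometry, signage, and traffic rules, and all names below are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Lane:
    """Static HD-map feature: one drivable lane."""
    lane_id: str
    speed_limit_kmh: float
    successors: list = field(default_factory=list)  # ids of reachable lanes

@dataclass
class TrafficAgent:
    """Dynamic feature: a traffic participant recorded from real-world data."""
    agent_id: str
    lane_id: str       # which static lane the agent currently occupies
    position_m: float  # longitudinal position along that lane
    speed_mps: float

@dataclass
class Scenario:
    """A simulation scenario = static map + dynamic agents."""
    lanes: dict
    agents: list

    def agents_on_lane(self, lane_id: str):
        return [a for a in self.agents if a.lane_id == lane_id]

# Build a tiny two-lane map and place one recorded vehicle on it.
lanes = {
    "l1": Lane("l1", speed_limit_kmh=50, successors=["l2"]),
    "l2": Lane("l2", speed_limit_kmh=80, successors=[]),
}
scenario = Scenario(lanes=lanes, agents=[TrafficAgent("car_0", "l1", 12.5, 10.0)])
print([a.agent_id for a in scenario.agents_on_lane("l1")])  # ['car_0']
```

The point of the split is exactly the article's: the `lanes` half never changes between runs, while the `agents` half can be replayed from real traffic recordings or varied to generate new scenarios.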

Council Post: How To Help Tame Cognitive Bias In Your AI System


Daniel Fallmann is Founder and CEO of Mindbreeze, a leader in enterprise search, applied artificial intelligence and knowledge management. Over the years, AI has been able to furnish a host of solutions to many of our everyday challenges. Voice assistants like Alexa and Siri, for example, are now reasonably good at interpreting human speech correctly. They're already providing precise, targeted information in many instances. That said, implementing AI systems has become a real game-changer not only for private use but also in the corporate environment.

Every Single Cognitive Bias in One Infographic


The human brain is capable of incredible things, but it's also extremely flawed at times. Science has shown that we tend to make all sorts of mental mistakes, called "cognitive biases", that can affect both our thinking and actions. These biases can lead us to extrapolate information from the wrong sources, seek to confirm existing beliefs, or fail to remember events the way they actually happened! To be sure, this is all part of being human, but such cognitive biases can also have a profound effect on our endeavors, investments, and life in general. For this reason, today's infographic is particularly handy.

DASH: Modularized Human Manipulation Simulation with Vision and Language for Embodied AI Artificial Intelligence

Creating virtual humans with embodied, human-like perceptual and actuation constraints promises to provide an integrated simulation platform for many scientific and engineering applications. We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically simulated cluttered environment solely using its own visual perception, proprioception, and touch, without requiring human motion data. By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules, so that DASH is able not only to perform randomly arranged tasks with a high success rate, but also to do so under anthropomorphic constraints and with fluid and diverse motions. The modular design also favors analysis and extensibility to more complex manipulation skills.

(Figure 1: Our system, dynamic and autonomous simulated human (DASH), is an embodied virtual human modeled off of a child.)
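The "mix and match" idea in the abstract (analytical and learned techniques interchangeable behind per-module interfaces) can be sketched in a few lines. This is not DASH's actual API; every class and function name here is a hypothetical stand-in for the vision, language, and manipulation modules the abstract describes:

```python
class AnalyticalGrasp:
    """Manipulation module backed by an analytical planner."""
    def plan(self, obj):
        return f"ik-grasp({obj})"      # e.g. an inverse-kinematics solver

class LearnedGrasp:
    """Manipulation module backed by a trained policy."""
    def plan(self, obj):
        return f"policy-grasp({obj})"  # e.g. a learned grasping policy

class Agent:
    """Composes vision, language, and manipulation behind fixed interfaces."""
    def __init__(self, perceive, parse_command, grasp):
        self.perceive = perceive            # vision module
        self.parse_command = parse_command  # language module
        self.grasp = grasp                  # manipulation module

    def act(self, image, command):
        target = self.parse_command(command, self.perceive(image))
        return self.grasp.plan(target)

# Toy stand-ins for the vision and language modules.
perceive = lambda image: {"red cube", "blue cube"}
parse = lambda cmd, objs: next(o for o in objs if o in cmd)

agent = Agent(perceive, parse, AnalyticalGrasp())
print(agent.act("img", "stack the red cube"))  # ik-grasp(red cube)

agent.grasp = LearnedGrasp()  # swap one module without touching the rest
print(agent.act("img", "stack the red cube"))  # policy-grasp(red cube)
```

Because each module only sees the others through a small interface, swapping an analytical component for a learned one (or vice versa) leaves the rest of the agent unchanged, which is what makes the analysis and extensibility the abstract mentions possible.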

How Cognitive Bias In AI Impacts Business Outcomes - AI Summary


For instance, specific data that a neural network might not be able to process, such as the reasoning behind the results of an insurance claim, might not have a straightforward representation in machine learning because of the range of possible interpretations. This issue of overfitting is a typical problem in AI, and a variety of use cases and data might bring up additional challenges that the human brain can handle and adapt to more easily and creatively. For example, if there are exceptions to the rules in fraud detection in the financial industry, both experts and customers alike would want to know all of the elements that led to the AI's decision and would require some transparency regarding the outcome. Few things are more frustrating for business owners than a missed target or a misplaced investment, yet cognitive biases can hinder intelligent decisions and prove costly every year. And if your business faces sudden uncertainty, a proclivity for deep thinking, over-analyzing, and compensating for lower performance through shortcuts doesn't help.

How Cognitive Bias in AI Impacts Business Outcomes


With billions of dollars at stake, decision-makers need to set boundaries and parameters for AI to avoid any downsides of technology usage. It is critical to know how to avoid common mistakes with neural networks to feel confident about your solution stack. AI processes information differently than the human brain does, and it's essential to understand how each works before applying it in business. For instance, specific data that a neural network might not be able to process, such as the reasoning behind the results of an insurance claim, might not have a straightforward representation in machine learning because of possible interpretations. In this situation, the output of a neural network might not be of quality.