Simulation of Human Behavior

Multi-scale Hyper-time Hardware Emulation of Human Motor Nervous System Based on Spiking Neurons using FPGA

Neural Information Processing Systems

Our central goal is to quantify the long-term progression of pediatric neurological diseases, such as the typical 10-15 year progression of childhood dystonia. For this purpose, quantitative models are convincing only if they can provide multi-scale detail ranging from neuron spikes to limb biomechanics. The models also need to run in hyper-time, i.e. significantly faster than real time, to produce useful predictions. We designed a platform with digital VLSI hardware for multi-scale hyper-time emulation of the human motor nervous system. The platform is constructed on a scalable, distributed array of Field Programmable Gate Array (FPGA) devices.
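The spiking-neuron models such a platform emulates can be illustrated in software. Below is a minimal leaky integrate-and-fire sketch in Python; the function name and all parameter values are illustrative assumptions, not the paper's actual hardware design:

```python
def lif_step(v, i_syn, dt=1e-4, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """One Euler step of a leaky integrate-and-fire neuron.

    v       : membrane potential (V)
    i_syn   : synaptic input current (A)
    Returns the updated potential and whether a spike fired.
    """
    dv = (-(v - v_rest) + r_m * i_syn) / tau
    v = v + dt * dv
    spiked = v >= v_thresh
    if spiked:
        v = v_reset  # hard reset after the spike
    return v, spiked

# Simulate 100 ms of a single neuron under constant input current.
v = -0.065
spikes = 0
for _ in range(1000):
    v, fired = lif_step(v, i_syn=2e-9)
    spikes += fired
```

An FPGA implementation would compute many such update equations in parallel fixed-point pipelines, which is what makes faster-than-real-time ("hyper-time") emulation feasible.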

Deep Learning for Predicting Human Strategic Behavior

Neural Information Processing Systems

Predicting the behavior of human participants in strategic settings is an important problem in many domains. Most existing work either assumes that participants are perfectly rational or attempts to directly model each participant's cognitive processes based on insights from cognitive psychology and experimental economics. In this work, we present an alternative: a deep learning approach that automatically performs cognitive modeling without relying on such expert knowledge. We introduce a novel architecture that allows a single network to generalize across different input and output dimensions by using matrix units rather than scalar units, and show that it significantly outperforms the previous state of the art, which relies on expert-constructed features.
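One way to read "matrix units": each hidden unit carries an n-by-m matrix the same shape as the game's payoff matrix, and layers combine the matrices with row-pooled, column-pooled, and globally pooled versions so the same scalar weights apply to games of any size. A toy NumPy sketch of this idea (the function name, pooling choices, and weight layout are our assumptions, not the paper's exact architecture):

```python
import numpy as np

def matrix_unit_layer(hidden, w, w_row, w_col, w_all):
    """Map a list of n-by-m hidden matrices to a new list of them.

    Each output unit is a weighted sum of the input matrices plus
    row-pooled, column-pooled, and globally pooled copies; because
    the weights are scalars, the layer works for any shape (n, m).
    """
    n, m = hidden[0].shape
    out = []
    for k in range(len(w)):
        acc = np.zeros((n, m))
        for j, h in enumerate(hidden):
            row_pool = h.max(axis=1, keepdims=True) * np.ones((1, m))
            col_pool = h.max(axis=0, keepdims=True) * np.ones((n, 1))
            all_pool = h.max() * np.ones((n, m))
            acc += (w[k][j] * h + w_row[k][j] * row_pool
                    + w_col[k][j] * col_pool + w_all[k][j] * all_pool)
        out.append(np.maximum(acc, 0.0))  # ReLU nonlinearity
    return out

rng = np.random.default_rng(0)
payoff = rng.standard_normal((3, 4))  # works unchanged for any n, m
w = w_row = w_col = w_all = rng.standard_normal((2, 1))
out = matrix_unit_layer([payoff], w, w_row, w_col, w_all)
```

The key design point is that no weight dimension depends on n or m, so one trained network can be evaluated on games with different numbers of actions per player.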

Has the Age of Virtual Humans Arrived?


Is your friendship circle ready for a virtual human? AI is improving by the day, and with each technological advance it grows more popular. Still, it's early days, and we've yet to see the full potential of this exciting technology. However, Samsung recently debuted artificial humans called Neons, who scarily resemble the real thing. Has the age of virtual humans arrived?

50 Cognitive Biases in the Modern World


Cognitive biases are widely accepted as something that makes us human. Every day, systematic errors in our thought process impact the way we live and work. But in a world where everything we do is changing rapidly--from the way we store information to the way we watch TV--what really classifies as rational thinking? It's a question with no right or wrong answer, but to help us decide for ourselves, today's infographic from TitleMax lists 50 cognitive biases that we may want to be aware of. In the name of self-awareness, here's a closer look at three recently discovered biases that we are most prone to exhibiting in the modern world.

Samsung's Neon 'Artificial Humans' Look Like Super-Realistic Video Chatbots


At CES 2020, Samsung's STAR Labs research group unveiled Neon, a simulated human assistant, an animated "chatbot" that appears on a screen and learns about people in order to provide intelligent and life-like responses. These "artificial humans" will be able to give responses to questions in milliseconds. Companies and people will be able to license or subscribe to Neons, with the goal of enhancing customer service interactions. Said Samsung, "Over time, Neons will work as TV anchors, spokespeople, or movie actors; or they can simply be companions and friends." Samsung indicated that Neon will be beta launched with selected partners later this year.

Couger, Connectome and new Virtual Human Agent (VHA) technology appear on NHK Educational TV…


What is Human? is an educational entertainment program that explores the definition of being human by looking at the latest applications of AI and discussing trending AI-related topics. The program, featuring Couger's Atsushi Ishii, examined the intersection of AI, work, and what it means to be human. One widely cited claim about work and AI comes from Professor Osborne of the University of Oxford, who has stated that within the next 10 to 20 years about 47 percent of US jobs risk being replaced by automation. Osborne is frequently cited by the media, leading to ubiquitous articles about how AI will steal our jobs, with titles like "human jobs will be snatched up by AI".

Effects of data ambiguity and cognitive biases on the interpretability of machine learning models in humanitarian decision making

The effectiveness of machine learning algorithms depends on the quality and amount of data and on their operationalization and interpretation by the human analyst. In humanitarian response, data is often lacking or overburdening, and thus ambiguous, and the time-scarce, volatile, insecure environments of humanitarian activities are likely to inflict cognitive biases. This paper proposes to research the effects of data ambiguity and cognitive biases on the interpretability of machine learning algorithms in humanitarian decision making.

Bringing Augmented Reality to life with 'virtual humans' using Artificial Intelligence – the mission of Scanta WRAL TechWire


Editor's note: This is the latest installment in an UpTech series of video interviews and accompanying transcripts about the emerging development and uses of Artificial Intelligence and Machine Learning, which WRAL TechWire is working to publish. Alexander Ferguson is the founder and CEO of YourLocalStudio. Artificial intelligence and machine learning: these emerging technologies are changing the way we live, work, and do business for the better. But how is AI actually being applied in business today? In this episode of UpTech Report, I interview Chaitanya Hiremath, who also goes by Chad.

Door and Doorway Etiquette for Virtual Humans


We introduce a framework for simulating a variety of nontrivial, socially motivated behaviors that underlie the orderly passage of pedestrians through doorways, especially the common courtesy of opening and holding doors open for others, an important etiquette that has been overlooked in the literature on autonomous multi-human animation. Emulating such social activity requires serious attention to the interplay of visual perception, navigation in constrained doorway environments, manipulation of a variety of door types, and high-level decision making based on social considerations. To tackle this complex human simulation problem, we take an artificial life approach to modeling autonomous pedestrians, proposing a layered architecture comprising mental, behavioral, and motor layers. The behavioral layer couples two stages: (1) a decentralized, agent-based strategy for dynamically determining the well-mannered ordering of pedestrians around doorways, and (2) a state-based model that directs and coordinates a pedestrian's interactions with the door. The mental layer is a Bayesian network decision model that dynamically selects appropriate door holding behaviors by considering both internal and external social factors pertinent to pedestrians interacting with one another in and around doorways.
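The second behavioral stage, the state-based model that coordinates a pedestrian's interaction with the door, can be pictured as a small finite-state controller. The following toy Python sketch uses states and transition triggers of our own choosing; it illustrates the shape of such a model, not the authors' actual implementation:

```python
from enum import Enum, auto

class DoorState(Enum):
    APPROACH = auto()      # walking toward the doorway
    OPEN = auto()          # opening a closed door
    PASS_THROUGH = auto()  # moving through the doorway
    HOLD = auto()          # holding the door for a follower
    RELEASE = auto()       # letting the door go

def next_state(state, door_is_open, follower_nearby, through):
    """One transition of the door-interaction controller.

    The boolean triggers stand in for the perceptual and social
    inputs that the framework's mental layer (a Bayesian network
    decision model in the paper) would supply.
    """
    if state is DoorState.APPROACH:
        return DoorState.PASS_THROUGH if door_is_open else DoorState.OPEN
    if state is DoorState.OPEN:
        return DoorState.PASS_THROUGH
    if state is DoorState.PASS_THROUGH:
        if not through:
            return DoorState.PASS_THROUGH
        # Courtesy decision: hold the door if someone is close behind.
        return DoorState.HOLD if follower_nearby else DoorState.RELEASE
    if state is DoorState.HOLD:
        return DoorState.HOLD if follower_nearby else DoorState.RELEASE
    return DoorState.RELEASE
```

In the paper's layered architecture, the courtesy branch above would instead be decided by the mental layer's Bayesian network, weighing internal and external social factors rather than a single boolean.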