"The construction of computer programs that simulate aspects of social behaviour can contribute to the understanding of social processes."
– Nigel Gilbert, Computational Social Science: Agent-based social simulation. Centre for Research on Social Simulation, University of Surrey, Guildford, UK. 6 November 2005; revised and updated 20 May 2007.
We introduce a framework for simulating a variety of nontrivial, socially motivated behaviors that underlie the orderly passage of pedestrians through doorways, especially the common courtesy of opening and holding doors open for others, an important etiquette that has been overlooked in the literature on autonomous multi-human animation. Emulating such social activity requires serious attention to the interplay of visual perception, navigation in constrained doorway environments, manipulation of a variety of door types, and high-level decision making based on social considerations. To tackle this complex human simulation problem, we take an artificial life approach to modeling autonomous pedestrians, proposing a layered architecture comprising mental, behavioral, and motor layers. The behavioral layer couples two stages: (1) a decentralized, agent-based strategy for dynamically determining the well-mannered ordering of pedestrians around doorways, and (2) a state-based model that directs and coordinates a pedestrian's interactions with the door. The mental layer is a Bayesian network decision model that dynamically selects appropriate door holding behaviors by considering both internal and external social factors pertinent to pedestrians interacting with one another in and around doorways.
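The abstract's behavioral layer couples a decentralized ordering strategy with a state-based model of door interaction. A minimal sketch of such a state machine is below; the state names, inputs, and transition logic are illustrative assumptions of ours, not the paper's actual model.

```python
from enum import Enum, auto

class DoorState(Enum):
    """Illustrative phases of a pedestrian's interaction with a door."""
    APPROACH = auto()
    WAIT = auto()          # yield to a pedestrian with right-of-way
    OPEN = auto()
    HOLD = auto()          # hold the door open for a close follower
    PASS_THROUGH = auto()
    DONE = auto()

def step(state, follower_close, has_right_of_way):
    """One tick of a door-interaction controller (hypothetical logic).

    follower_close:    another pedestrian is near enough to hold for.
    has_right_of_way:  the ordering strategy granted this agent its turn.
    """
    if state == DoorState.APPROACH:
        return DoorState.OPEN if has_right_of_way else DoorState.WAIT
    if state == DoorState.WAIT:
        return DoorState.APPROACH  # re-evaluate the ordering next tick
    if state == DoorState.OPEN:
        return DoorState.HOLD if follower_close else DoorState.PASS_THROUGH
    if state == DoorState.HOLD:
        return DoorState.PASS_THROUGH
    return DoorState.DONE
```

In the paper's architecture, the decision of whether to enter the HOLD state would come from the mental layer's Bayesian network rather than a single boolean flag as sketched here.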
We propose an alternative and unifying framework for decision-making that, by using quantum mechanics, provides more generalised cognitive and decision models with the ability to represent more information than classical models. This framework can accommodate and predict several cognitive biases reported in Lieder & Griffiths without heavy reliance on heuristics or on assumptions about the computational resources of the mind. Expected utility theory and classical probabilities tell us what people should do if employing traditionally rational thought, but not what people actually do (Machina, 2009). Under this principle, L&G propose an architecture for cognition that can serve as an intermediary layer between Neuroscience and Computation. While the model theoretically alludes to instances where large expenditures of cognitive resources occur, it primarily assumes a preference for fast, heuristic-based processing.
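The key formal difference between the two frameworks can be shown in a few lines: classically, path probabilities add (the law of total probability), whereas in a quantum model complex amplitudes add *before* squaring, producing an interference term. The amplitude values below are arbitrary, chosen only to make the effect visible.

```python
# Classical law of total probability: P(A) = P(A, B) + P(A, not-B).
# Quantum models sum amplitudes first, then square, so an extra
# interference term 2*Re(a * conj(b)) appears.
amp_via_B    = 0.5 + 0.3j   # amplitude for outcome A via path B (hypothetical)
amp_via_notB = 0.4 - 0.2j   # amplitude for outcome A via path not-B (hypothetical)

p_classical  = abs(amp_via_B)**2 + abs(amp_via_notB)**2   # 0.34 + 0.20 = 0.54
p_quantum    = abs(amp_via_B + amp_via_notB)**2           # |0.9 + 0.1j|^2 = 0.82
interference = p_quantum - p_classical                     # 0.28
```

It is this interference term that lets quantum cognitive models accommodate judgments (such as conjunction and order effects) that violate classical total probability.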
Training employees how to perform specific tasks isn't difficult, but building their soft skills -- their interactions with management, fellow employees, and customers -- can be more challenging, particularly if there aren't people around to practice with. Virtual reality training company Talespin announced today that it is leveraging AI to tackle that challenge, using a new "virtual human platform" to create realistic simulations for employee training purposes. Unlike traditional employee training, which might consist of passively watching a video or lightly interacting with collections of canned multiple choice questions, Talespin's system has a trainee interact with a virtual human powered by AI, speech recognition, and natural language processing. Because the interactions use VR headsets and controllers, the hardware can track a trainee's gaze, body movement, and facial expressions during the session. Talespin's virtual character is able to converse realistically, guiding trainees through branching narratives using natural mannerisms and believable speech.
There is an interesting move underway to establish a pan-European AI research federation - a sort of decentralised CERN for AI. From their website: "CLAIRE is an initiative by the European AI community that seeks to strengthen European excellence in AI research and innovation. To achieve this, CLAIRE proposes the establishment of a pan-European Confederation of Laboratories for Artificial Intelligence Research in Europe that achieves "brand recognition" similar to CERN." "The CLAIRE initiative aims to establish a pan-European network of Centres of Excellence in AI, strategically located throughout Europe, and a new, central facility with state-of-the-art, "Google-scale", CERN-like infrastructure – the CLAIRE Hub – that will promote new and existing talent and provide a focal point for exchange and interaction of researchers at all stages of their careers, across all areas of AI. The CLAIRE Hub will not be an elitist AI institute with permanent scientific staff, but an environment where Europe's brightest minds in AI meet and work for limited periods of time. This will increase the flow of knowledge among European researchers and back to their home institutions."
DETROIT - Every year at the Detroit auto show, good-looking women -- and men -- are deployed by the carmakers to present their new vehicles. But with the shock wave created by the #MeToo movement still reverberating across the U.S., there are fewer auto show models of the human variety -- and they are not just pretty faces. The "product specialists" still have picture-perfect smiles, but they also can tick off the features of each car and prices with such assurance that the iPads they carry for reference can seem merely decorative. Auto companies are also making sure their fleet of specialists are ethnically and physically diverse. Perched on stilettos, Priscilla Tejeda is working for Toyota.
Out of the 188 cognitive biases that exist, there is a much narrower group of biases that has a disproportionately large effect on the ways we do business. These are things that affect workplace culture, budget estimates, deal outcomes, and our perceived return on investments within the company. Mental mistakes such as these can add up quickly, and can keep an organization from reaching its full bottom-line potential. Today's infographic from Raconteur aptly highlights 18 different cognitive bias examples that can create particularly difficult challenges for company decision-making. Financial biases: These are imprecise mental shortcuts we make with numbers, such as hyperbolic discounting – the mistake of preferring a smaller, sooner payoff over a larger, later reward.
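Hyperbolic discounting is easy to make concrete with Mazur's standard formula V = A / (1 + kD), where A is the amount, D the delay, and k a discount rate. The value k = 0.1 below is purely illustrative; it shows the characteristic preference reversal.

```python
def hyperbolic_value(amount, delay_days, k=0.1):
    """Subjective value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# With k = 0.1, $50 now (value 50.0) beats $100 in 30 days (value 25.0)...
prefer_sooner = hyperbolic_value(50, 0) > hyperbolic_value(100, 30)

# ...but push both options 60 days further out and the preference reverses:
# $50 in 60 days is worth ~7.14, while $100 in 90 days is worth 10.0.
prefer_later = hyperbolic_value(50, 60) < hyperbolic_value(100, 90)
```

This time-inconsistency (choices flip as the whole decision shifts in time) is exactly why hyperbolic discounting distorts budget and investment decisions.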
Humans by their nature have many cognitive biases. This can become detrimental to real scientific progress. Research tends to be biased in favor of approaches in which many experts have invested countless years of study. As a consequence, we ignore many intrinsic characteristics of the very system under study. Thus researchers can unfortunately spend a lifetime pursuing a wrong and fruitless path. History is littered with research that, in hindsight, was discovered to be incorrect and therefore worthless.
Companies from a wide range of industries use machine learning to do everyday business. From consumer marketing and workforce management to healthcare treatment decisions and public safety and policing solutions, whether you realize it or not, your life is increasingly affected by the outcomes of machine learning algorithms. Machine learning algorithms make decisions like who gets a bonus or a job interview, and whether your credit card limit (or interest rate) is raised, and who gets into a clinical trial. Machine learning algorithms even help make decisions about who gets parole and who languishes in prison. The result is that people's lives and livelihoods are affected by the decisions made by machines.
Most people, at this point, believe that climate change is a real thing that will harm future generations of humans. And yet, a cognitive dissonance exists around that knowledge and our sense of responsibility: A much smaller percentage of people believe that climate change is impacting them personally, according to Yale's climate survey program. It is indeed impacting humans right now, with clear and compelling evidence that the global average temperature is much higher than anything modern society has experienced. And that has led us to a whole host of issues, some of which WIRED writer Adam Rogers discusses with the Gadget Lab team on this week's podcast. So what can we humans do to fix things – and how much of it can actually be fixed by personal actions, versus widespread policy?
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a ‘good’ explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people bring certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics.