"The construction of computer programs that simulate aspects of social behaviour can contribute to the understanding of social processes."
– Nigel Gilbert, Computational Social Science: Agent-based social simulation. Centre for Research on Social Simulation, University of Surrey, Guildford, UK. 6 November 2005; revised and updated 20 May 2007.
There is a lot of confusion out there about what cognitive bias really is and how it relates to artificial intelligence. One of the most important things to keep in mind is that human and machine cognitive biases are quite different things: both humans and machines can be biased, but not in the same ways. While applied AI is still in its early days, it is already changing many of the business processes around us and will continue to do so. At a time when the value of data has never been higher, many companies are investing in "artificial intelligence," in one form or another, to help them transform business processes and make decisions faster and more accurately.
With AGI, we really don't know how long the journey will be. Any step, although it appears to make progress, tells us little about how many more steps will be necessary to reach the goal of AGI. Moravec's paradox reveals one reason: we underestimate the complexity of the cognitive processes required for seemingly simple behavior, because we are largely unconscious of our own thought processes.
Daniel Fallmann is Founder and CEO of Mindbreeze, a leader in enterprise search, applied artificial intelligence and knowledge management. When faced with a challenge, human beings generally try to develop creative solutions quickly. We tend to pick the most logical explanation we can find, ignoring all contradictory or unprovable hypotheses in the process. However, this irrational pattern of thinking could eventually sabotage our efforts to create an actually intelligent machine. The cognitive bias known as rationalization is one such phenomenon that is tricky, or even dangerous, for AI.
You can see the faint stubble coming in on his upper lip, the wrinkles on his forehead, the blemishes on his skin. He isn't a real person, but he's meant to mimic one, as are the hundreds of thousands of others made by Datagen, a company that sells fake, simulated humans. These humans are not gaming avatars or animated characters for movies. They are synthetic data designed to feed the growing appetite of deep-learning algorithms. Datagen will make them for you: how you want them, when you want them, and relatively cheaply.
This paper explores character-driven story continuation, in which the story emerges through characters' first- and second-person narration as well as dialogue -- requiring models to select language that is consistent with a character's persona and their relationships with other characters while following and advancing the story. We hypothesize that a multi-task model that trains on character dialogue plus character relationship information improves transformer-based story continuation. To this end, we extend the Critical Role Dungeons and Dragons Dataset (Rameshkumar and Bailey, 2020) -- consisting of dialogue transcripts of people collaboratively telling a story while playing the role-playing game Dungeons and Dragons -- with automatically extracted relationships between each pair of interacting characters as well as their personas. A series of ablations lend evidence to our hypothesis, showing that our multi-task model using character relationships improves story continuation accuracy over strong baselines.
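The abstract describes a multi-task setup in which a single model conditions on character personas and pairwise relationships while continuing the story. As a minimal sketch of that idea (not the paper's actual code; the input formatting, separator token, and loss weighting here are illustrative assumptions), one common recipe is to serialize the persona and relationship into the model's input and to sum the primary and auxiliary losses:

```python
def build_input(persona: str, relation: str, context: str,
                sep: str = " <sep> ") -> str:
    """Serialize persona, pairwise relationship, and dialogue context into a
    single string, so one transformer conditions on all three. The field
    labels and separator are hypothetical, not the dataset's actual format."""
    return sep.join([f"persona: {persona}",
                     f"relation: {relation}",
                     f"context: {context}"])

def multi_task_loss(lm_loss: float, relation_loss: float,
                    alpha: float = 0.5) -> float:
    """Combine the primary story-continuation (language-modeling) loss with an
    auxiliary relationship-prediction loss; alpha is an assumed weight."""
    return lm_loss + alpha * relation_loss
```

In this framing, the auxiliary relationship objective nudges the shared encoder toward representations that keep each character's voice consistent with who they are talking to.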
One of the most striking features of human cognition is the capacity to plan. Two aspects of human planning stand out: its efficiency, even in complex environments, and its flexibility, even in changing environments. Efficiency is especially impressive because directly computing an optimal plan is intractable, even for modestly complex tasks, and yet people successfully solve myriad everyday problems despite limited cognitive resources. Standard accounts in psychology, economics, and artificial intelligence have suggested this is because people have a mental representation of a task and then use heuristics to plan in that representation. However, this approach generally assumes that mental representations are fixed. Here, we propose that mental representations can be controlled and that this provides opportunities to adaptively simplify problems so they can be more easily reasoned about -- a process we refer to as construal. We construct a formal model of this process and, in a series of large, pre-registered behavioral experiments, show both that construal is subject to online cognitive control and that people form value-guided construals that optimally balance the complexity of a representation and its utility for planning and acting. These results demonstrate how strategically perceiving and conceiving problems facilitates the effective use of limited cognitive resources.
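The core of the construal account is a trade-off: a richer mental representation supports better plans but costs more cognitive resources. A toy sketch of that value-guided trade-off (with an assumed linear complexity cost; the paper's formal model is more elaborate) might look like:

```python
def construal_value(utility: float, complexity: int,
                    cost_per_unit: float = 1.0) -> float:
    """Value of a construal: the utility of planning with it, minus a cost
    that grows with representational complexity (illustrative linear cost)."""
    return utility - cost_per_unit * complexity

def best_construal(candidates: list[dict]) -> dict:
    """Pick the construal that optimally balances utility and complexity."""
    return max(candidates,
               key=lambda c: construal_value(c["utility"], c["complexity"]))
```

Under this scoring, a slightly less useful but much simpler representation can win, which is exactly the adaptive simplification the behavioral experiments probe.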
Have you used your bank's customer service app in the past year, or received an unexpected email with an offer you were actually interested in? Maybe it was a well-timed mortgage refi or even some savings at a favorite store. There's an enduring and unfortunate misperception that AI serves only to replace human workers and irritate human clients. But what if we don't have to be locked in a zero-sum game with the growing legion of digital intelligences? An increasing number of banks and insurers are finding that it is almost impossible to meet the rising customer expectations and needs that digital services and apps have unleashed.
It is virtually impossible to tease apart human capabilities from human cultural and other background knowledge, so including that knowledge is necessary to provide an objective point of comparison against humans. Furthermore, a comprehensive understanding of human background knowledge, sufficient not only to recall but to apply that knowledge, tests the cognitive capabilities essential to the human kind of understanding. I have recommended that human respondents be drawn from broad populations to ensure that this cultural knowledge is least-common-denominator rather than esoteric. The graders might be able to tell that they are scoring a non-human subject system, but difficulties with the Turing Test have demonstrated that this is probably not an issue: it is relatively easy to fool humans into thinking they are interacting with a human, even without human-level cognitive capabilities. Mimicking human interaction styles, though again not necessarily a goal of the subject system, should not be difficult for a system with cognition comparable to that of humans. Nevertheless, the protocol attempts to disguise which respondents are human or non-human not because this contributes to the evaluation, but merely to avoid implicit bias in scoring. All the test questions are raster images; does this mean the system has to do handwriting recognition?
Round-the-clock monitoring of human behavior and emotions is required in many healthcare applications; it is very expensive, but it can be automated using machine learning (ML) and sensor technologies. Unfortunately, the lack of infrastructure for collecting and sharing such data is a bottleneck for ML research applied to healthcare. Our goal is to circumvent this bottleneck by simulating a human body in a virtual environment. This will allow generation of potentially infinite amounts of shareable data from an individual as a function of their actions, interactions and emotions in a care facility or at home, with no risk of confidentiality breach or privacy invasion. In this paper, we develop for the first time a system consisting of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals as a function of an open-ended set of actions and emotions. Our experimental evaluations, involving user studies, benchmark datasets and comparison to findings in the literature, show that our models can generate skeletal motion and physiological signals with high fidelity. The proposed framework is modular and allows the flexibility to experiment with different models. In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data, and may be used as a training tool for ML practitioners and healthcare professionals.
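The modular framework described above maps actions and emotions to synchronized physiological signals, with each signal produced by a swappable model. A minimal sketch of one such pluggable module (the lookup values and waveform below are toy assumptions for illustration, not the paper's calibrated models):

```python
import math

def heart_rate_bpm(action: str, emotion: str) -> float:
    """Toy mapping from (action, emotion) to a heart rate in beats per minute.
    The baseline and arousal offsets are made-up illustrative numbers."""
    base = {"resting": 65.0, "walking": 90.0, "running": 140.0}.get(action, 75.0)
    arousal = {"calm": 0.0, "excited": 10.0, "anxious": 15.0}.get(emotion, 0.0)
    return base + arousal

def ecg_like_signal(action: str, emotion: str,
                    seconds: int = 2, fs: int = 100) -> list[float]:
    """Generate a crude periodic waveform at the chosen heart rate, sampled at
    fs Hz: a stand-in for one module in a modular synthesis pipeline."""
    hz = heart_rate_bpm(action, emotion) / 60.0  # beats per second
    return [math.sin(2 * math.pi * hz * t / fs) for t in range(seconds * fs)]
```

Because each generator shares the same (action, emotion) interface, modules for respiration or skin conductance could be swapped in without touching the rest of the pipeline, which is the reusability the framework aims for.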