"The construction of computer programs that simulate aspects of social behaviour can contribute to the understanding of social processes."
– Nigel Gilbert, "Computational Social Science: Agent-based social simulation." Centre for Research on Social Simulation, University of Surrey, Guildford, UK. 6 November 2005; revised and updated 20 May 2007.
Daniel Fallmann is Founder and CEO of Mindbreeze, a leader in enterprise search, applied artificial intelligence and knowledge management. Over the years, AI has furnished solutions to many of our everyday challenges. Voice assistants like Alexa and Siri, for example, are now reasonably good at interpreting human speech correctly and already provide precise, targeted information in many instances. Implementing AI systems has become a real game-changer, not only for private use but also in the corporate environment.
The human brain is capable of incredible things, but it's also extremely flawed at times. Science has shown that we tend to make all sorts of mental mistakes, called "cognitive biases," that can affect both our thinking and our actions. These biases can lead us to extrapolate information from the wrong sources, to seek out confirmation of existing beliefs, or to fail to remember events the way they actually happened. To be sure, this is all part of being human, but such cognitive biases can also have a profound effect on our endeavors, our investments, and our lives in general. For this reason, today's infographic from DesignHacks.co is particularly handy.
Creating virtual humans with embodied, human-like perceptual and actuation constraints has the promise to provide an integrated simulation platform for many scientific and engineering applications. We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically-simulated cluttered environment solely using its own visual perception, proprioception, and touch, without requiring human motion data. By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules so that DASH is able to not only perform randomly arranged tasks with a high success rate, but also do so under anthropomorphic constraints and with fluid and diverse motions. The modular design also favors analysis and extensibility to more complex manipulation skills.
[Figure 1: Our system, dynamic and autonomous simulated human (DASH), is an embodied virtual human modeled off of a child. DASH is able to manipulate tabletop objects with a …]
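The factored design described in the abstract (a vision module, a language module, and per-skill manipulation modules, each of which can be analytical or learned) can be sketched as a simple pipeline. All class and method names below are hypothetical illustrations of the modular wiring, not the authors' actual API; the stub implementations only show how swappable modules would be routed.

```python
from typing import Protocol

class VisionModule(Protocol):
    def perceive(self, rgb_frame) -> dict: ...

class LanguageModule(Protocol):
    def parse(self, command: str) -> list: ...

class SkillModule(Protocol):
    def execute(self, subgoal: dict, percepts: dict) -> bool: ...

class Agent:
    """Routes a natural-language command through vision, language, and
    manipulation skills; each module can be analytical or learned."""
    def __init__(self, vision, language, skills):
        self.vision, self.language, self.skills = vision, language, skills

    def run(self, command: str, rgb_frame) -> bool:
        percepts = self.vision.perceive(rgb_frame)
        for subgoal in self.language.parse(command):
            skill = self.skills[subgoal["skill"]]  # e.g. "grasp" or "stack"
            if not skill.execute(subgoal, percepts):
                return False
        return True

# Hypothetical stubs to show the wiring (not the paper's models):
class StubVision:
    def perceive(self, rgb_frame):
        return {"objects": ["red_block", "blue_block"]}

class StubLanguage:
    def parse(self, command):
        return [{"skill": "grasp", "target": "red_block"},
                {"skill": "stack", "target": "blue_block"}]

class StubSkill:
    def execute(self, subgoal, percepts):
        return subgoal["target"] in percepts["objects"]

agent = Agent(StubVision(), StubLanguage(),
              {"grasp": StubSkill(), "stack": StubSkill()})
print(agent.run("put the red block on the blue block", rgb_frame=None))  # True
```

The point of the sketch is the seam between modules: because the `Agent` only depends on the three interfaces, an analytical grasp planner or a learned one can be dropped in without touching the rest of the pipeline.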
Overfitting is a typical problem in AI, and particular use cases and data can bring up additional challenges that the human brain handles and adapts to more easily and creatively. For example, if there are exceptions to the rules in fraud detection in the financial industry, experts and customers alike would want to know all of the elements that led to the AI's decision and would require some transparency regarding the outcome. Few things are more frustrating for business owners than a missed target or a misplaced investment, yet cognitive biases can hinder intelligent decisions and incur costs every year. And if your business faces sudden uncertainty, a proclivity for deep thinking, over-analyzing, and compensating for lower performance through shortcuts doesn't help.
With billions of dollars at stake, decision-makers need to set boundaries and parameters for AI to avoid any downsides of technology usage. It is critical to know how to avoid common mistakes with neural networks in order to feel confident about your solution stack. Humans and AI process information differently, and it's essential to understand how each works before applying AI in business. For instance, specific data that a neural network might not be able to process, such as the reasoning behind the results of an insurance claim, might not have a straightforward representation in machine learning because of the range of possible interpretations. In this situation, the output of a neural network might not be of sufficient quality.
There is a lot of confusion out there about what cognitive bias really is and how it relates to artificial intelligence. One of the most important things to keep in mind is that human and machine cognitive biases are quite different things: humans and machines can both have biases, but those biases are not the same. While applied AI is still in its early days, it is already changing tons of business processes around us and will continue to do so. At a time when the value of data has never been higher, many companies are investing in "artificial intelligence," in one form or another, to help them transform business processes and make decisions faster and more accurately.
State-of-the-art driver-assist systems have failed to effectively mitigate driver inattention and have had minimal impact on the ever-growing number of road mishaps (e.g., loss of life and physical injuries from accidents caused by the various factors that lead to driver inattention). This is because traditional human-machine interaction settings are modeled in classical and behavioral game-theoretic domains, which are technically appropriate for characterizing strategic interaction between either two utility-maximizing agents or human decision makers. Therefore, in an attempt to improve the persuasive effectiveness of driver-assist systems, we develop a novel strategic and personalized driver-assist system which adapts to the driver's mental state and choice behavior. First, we propose a novel equilibrium notion in human-system interaction games, where the system maximizes its expected utility and human decisions can be characterized using any general decision model. Then we use this novel equilibrium notion to investigate the strategic driver-vehicle interaction game, where the car presents a persuasive recommendation to steer the driver towards safer driving decisions. We assume that the driver employs an open-quantum-system cognition model, which captures complex aspects of human decision making such as violations of the classical law of total probability and the incompatibility of certain mental representations of information. We present closed-form expressions for players' final responses to each other's strategies so that we can numerically compute both pure and mixed equilibria. Numerical results are presented to illustrate both kinds of equilibria.
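The abstract's equilibrium notion involves an open-quantum cognition model, which is well beyond a short example, but the basic mechanics of numerically finding pure and fully mixed equilibria can be illustrated on an ordinary 2x2 bimatrix game. This is a minimal sketch under standard best-response and indifference conditions; the payoff matrices and function names are illustrative and not taken from the paper.

```python
def pure_equilibria(A, B):
    # A[i][j]: row player's payoff, B[i][j]: column player's payoff
    # when the row player picks row i and the column player picks column j.
    # A cell is a pure equilibrium if neither player can gain by deviating.
    eqs = []
    rows, cols = len(A), len(A[0])
    for i in range(rows):
        for j in range(cols):
            best_row = max(A[r][j] for r in range(rows))
            best_col = max(B[i][c] for c in range(cols))
            if A[i][j] >= best_row and B[i][j] >= best_col:
                eqs.append((i, j))
    return eqs

def mixed_equilibrium_2x2(A, B):
    # Fully mixed equilibrium of a 2x2 game: each player randomizes so
    # that the opponent is indifferent between both of their actions.
    dq = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    dp = B[0][0] - B[1][0] - B[0][1] + B[1][1]
    if dq == 0 or dp == 0:
        return None
    q = (A[1][1] - A[0][1]) / dq   # P(column player plays column 0)
    p = (B[1][1] - B[1][0]) / dp   # P(row player plays row 0)
    return (p, q) if 0 <= p <= 1 and 0 <= q <= 1 else None

# Matching pennies: no pure equilibrium, mixed equilibrium at (0.5, 0.5).
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(pure_equilibria(A, B))        # []
print(mixed_equilibrium_2x2(A, B))  # (0.5, 0.5)
```

The paper replaces the indifference condition on the human side with final responses derived from its cognition model, but the overall recipe is the same: compute each player's response to the other's strategy, then search for a fixed point.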
With AGI, we really don't know how long the journey will be. Any step, although it appears to make progress, tells us little about how many more steps will be necessary to reach the goal of AGI. Moravec's paradox reveals one reason: we underestimate the complexity of the cognitive processes required for seemingly simple behavior, because we are largely unconscious of our own thought processes.
When faced with a challenge, human beings are generally quick to try to develop creative solutions. We tend to pick the most logical explanation we can find, ignoring all contradictory or unprovable hypotheses in the process. However, this irrational pattern of thinking could eventually sabotage our efforts to create a truly intelligent machine. The cognitive bias known as rationalization is one such phenomenon that is tricky or even dangerous for AI.