Decomposed Inductive Procedure Learning

arXiv.org Artificial Intelligence

Recent advances in machine learning have made it possible to train artificially intelligent agents that perform with super-human accuracy on a great diversity of complex tasks. However, the process of training these capabilities often necessitates millions of annotated examples -- far more than humans typically need in order to achieve a passing level of mastery on similar tasks. Thus, while contemporary methods in machine learning can produce agents that exhibit super-human performance, their rate of learning per opportunity in many domains is decidedly lower than that of human learners. In this work we formalize a theory of Decomposed Inductive Procedure Learning (DIPL) that outlines how different forms of inductive symbolic learning can be used in combination to build agents that learn educationally relevant tasks, such as mathematical and scientific procedures, at a rate similar to human learners. We motivate the construction of this theory along Marr's concepts of the computational, algorithmic, and implementation levels of cognitive modeling, and outline at the computational level six learning capacities that must be achieved to accurately model human learning. We demonstrate that agents built along the DIPL theory are amenable to satisfying these capacities, and show, both empirically and theoretically, that DIPL enables the creation of agents that exhibit human-like learning performance.
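The abstract above describes combining separate inductive mechanisms so that a procedure can be learned from only a handful of demonstrations. The toy sketch below is not from the paper; it merely illustrates, with hypothetical mechanism names, what decomposing induction into a "how" learner (which operator reproduces the demonstrated output) and a "when" learner (under what precondition the step applies) might look like:

```python
# Hypothetical sketch of "decomposed" inductive learning for a one-step
# arithmetic procedure (the carry step in column addition): separate inductive
# learners for HOW a step transforms its inputs and WHEN the step applies.
# The decomposition and names are illustrative only, not the DIPL implementation.

# Candidate operators forming the HOW-learner's search space.
OPERATORS = {
    "add_mod10": lambda a, b: (a + b) % 10,
    "add": lambda a, b: a + b,
    "carry": lambda a, b: (a + b) // 10,
}

def induce_how(examples):
    """Return the first operator consistent with all (a, b) -> out examples."""
    for name, fn in OPERATORS.items():
        if all(fn(a, b) == out for (a, b), out in examples):
            return name
    return None

def induce_when(positive, negative):
    """Find the most general threshold test separating positive from negative
    situations (a stand-in for symbolic precondition learning)."""
    for threshold in range(0, 19):
        test = lambda a, b, t=threshold: a + b >= t
        if all(test(a, b) for a, b in positive) and not any(test(a, b) for a, b in negative):
            return f"a + b >= {threshold}"
    return None

# Demonstrations: the "write carry digit" step fires only when a + b >= 10.
how_examples = [((7, 5), 1), ((9, 9), 1), ((6, 8), 1)]
when_pos = [(7, 5), (9, 9), (6, 8)]
when_neg = [(2, 3), (5, 4)]

print("HOW :", induce_how(how_examples))          # -> carry
print("WHEN:", induce_when(when_pos, when_neg))   # -> a + b >= 10
```

Because each mechanism searches a small, structured hypothesis space, a few demonstrations suffice in this toy case, which is in the spirit of the learning-rate argument the abstract makes.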


Council Post: How To Help Tame Cognitive Bias In Your AI System

#artificialintelligence

Daniel Fallmann is Founder and CEO of Mindbreeze, a leader in enterprise search, applied artificial intelligence and knowledge management. Over the years, AI has been able to furnish a host of solutions to many of our everyday challenges. Voice assistants like Alexa and Siri, for example, are now reasonably good at interpreting human speech correctly. They're already providing precise, targeted information in many instances. That said, implementing AI systems has become a real game-changer not only for private use but also in the corporate environment.


DASH: Modularized Human Manipulation Simulation with Vision and Language for Embodied AI

arXiv.org Artificial Intelligence

Creating virtual humans with embodied, human-like perceptual and actuation constraints has the promise to provide an integrated simulation platform for many scientific and engineering applications. We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically-simulated cluttered environment solely using its own visual perception, proprioception, and touch, without requiring human motion data. By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules so that DASH is able to not only perform randomly arranged tasks with a high success rate, but also do so under anthropomorphic constraints and with fluid and diverse motions. The modular design also favors analysis and extensibility to more complex manipulation skills. (Figure 1: Our system, dynamic and autonomous simulated human (DASH), is an embodied virtual human modeled off of a child.)
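The factoring into a vision module, a language module, and manipulation modules suggests a pipeline in which each module can be swapped between analytical and learned implementations. The sketch below is purely illustrative -- the class names, interfaces, and stubbed perception and control are assumptions, not the actual DASH system:

```python
# A minimal, hypothetical sketch of the modular factoring described for DASH:
# a language module parses a command, a vision module localizes the named
# object, and a manipulation module executes the skill. All interfaces and
# stubs here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Command:
    skill: str      # e.g. "grasp" or "stack"
    target: str     # object referred to in the instruction

class LanguageModule:
    def parse(self, instruction: str) -> Command:
        # Stand-in for a learned language model: a trivial keyword parser.
        tokens = instruction.lower().split()
        skill = "stack" if "stack" in tokens else "grasp"
        return Command(skill=skill, target=tokens[-1])

class VisionModule:
    def __init__(self, scene):
        self.scene = scene            # {object_name: (x, y, z)}
    def locate(self, name):
        return self.scene.get(name)   # stand-in for visual perception

class ManipulationModule:
    def execute(self, skill, position):
        # Stand-in for an analytical or learned low-level controller.
        print(f"{skill} at {position}")

def run(instruction, scene):
    cmd = LanguageModule().parse(instruction)
    pos = VisionModule(scene).locate(cmd.target)
    if pos is not None:
        ManipulationModule().execute(cmd.skill, pos)

run("grasp the red cube", {"cube": (0.2, 0.1, 0.05)})
```

Because the modules communicate only through narrow interfaces (a parsed command, an object position), any one of them can be replaced with an analytical or learned component without touching the others, which is the extensibility benefit the abstract points to.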


How Cognitive Bias In AI Impacts Business Outcomes - AI Summary

#artificialintelligence

For instance, specific data that a neural network might not be able to process -- such as the reasoning behind the results of an insurance claim -- might not have a straightforward representation in machine learning because of its possible interpretations. This kind of overfitting is a typical problem in AI, and the variety of use cases and data can bring up additional challenges that the human brain can handle and adapt to more easily and creatively. For example, if there are exceptions to the rules in fraud detection in the financial industry, both experts and customers alike would want to know all of the elements that led to the AI's decision and would require some transparency regarding the outcome. Few things are more frustrating for business owners than a missed target or a misplaced investment, yet cognitive biases can hinder intelligent decisions and incur costs every year. And if your business faces sudden uncertainty, a proclivity for deep thinking, over-analyzing, and compensating for lower performance through shortcuts doesn't help.


How Cognitive Bias in AI Impacts Business Outcomes

#artificialintelligence

With billions of dollars at stake, decision-makers need to set boundaries and parameters for AI to avoid any downsides of technology usage. It is critical to know how to avoid common mistakes with neural networks to feel confident about your solution stack. AI systems process information differently, and it's essential to understand how each works before applying it in business. For instance, specific data that a neural network might not be able to process -- such as the reasoning behind the results of an insurance claim -- might not have a straightforward representation in machine learning because of its possible interpretations. In this situation, the output of a neural network might not yield quality results.


Council Post: Human Cognitive Bias And Its Role In AI

#artificialintelligence

Daniel Fallmann is Founder and CEO of Mindbreeze, a leader in enterprise search, applied artificial intelligence and knowledge management. When faced with a challenge, human beings are generally quick to try to develop creative solutions. We tend to pick the most logical explanation we can find, ignoring all contradictory or unprovable hypotheses in the process. However, this irrational pattern of thinking could eventually sabotage our efforts to create a truly intelligent machine. The cognitive bias known as rationalization is one such phenomenon that is tricky, or even dangerous, for AI.


Telling Stories through Multi-User Dialogue by Modeling Character Relations

arXiv.org Artificial Intelligence

This paper explores character-driven story continuation, in which the story emerges through characters' first- and second-person narration as well as dialogue -- requiring models to select language that is consistent with a character's persona and their relationships with other characters while following and advancing the story. We hypothesize that a multi-task model that trains on character dialogue plus character relationship information improves transformer-based story continuation. To this end, we extend the Critical Role Dungeons and Dragons Dataset (Rameshkumar and Bailey, 2020) -- consisting of dialogue transcripts of people collaboratively telling a story while playing the role-playing game Dungeons and Dragons -- with automatically extracted relationships between each pair of interacting characters as well as their personas. A series of ablations lend evidence to our hypothesis, showing that our multi-task model using character relationships improves story continuation accuracy over strong baselines.
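One way to read the multi-task setup described above is as a shared encoder with one head for story continuation and one for predicting the relationship between interacting characters, trained on a weighted sum of the two losses. The PyTorch sketch below is a hedged illustration under that assumption; the dimensions, loss weighting, and module structure are not taken from the paper:

```python
# A hedged sketch (not the paper's code) of the multi-task idea: share a
# transformer encoder between a story-continuation objective and an auxiliary
# character-relation prediction head, and train on a weighted sum of losses.
# All sizes, weights, and names are assumptions for illustration.

import torch
import torch.nn as nn

class MultiTaskStoryModel(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_relations=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)     # continuation task
        self.rel_head = nn.Linear(d_model, n_relations)   # relation task

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.lm_head(h), self.rel_head(h.mean(dim=1))

model = MultiTaskStoryModel()
tokens = torch.randint(0, 1000, (2, 16))        # toy batch of dialogue token ids
next_tokens = torch.randint(0, 1000, (2, 16))   # toy continuation targets
relation = torch.randint(0, 4, (2,))            # toy character-relation labels

lm_logits, rel_logits = model(tokens)
loss = nn.functional.cross_entropy(lm_logits.transpose(1, 2), next_tokens) \
     + 0.5 * nn.functional.cross_entropy(rel_logits, relation)  # assumed weight
loss.backward()
```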


Control of mental representations in human planning

arXiv.org Artificial Intelligence

One of the most striking features of human cognition is the capacity to plan. Two aspects of human planning stand out: its efficiency, even in complex environments, and its flexibility, even in changing environments. Efficiency is especially impressive because directly computing an optimal plan is intractable, even for modestly complex tasks, and yet people successfully solve myriad everyday problems despite limited cognitive resources. Standard accounts in psychology, economics, and artificial intelligence have suggested this is because people have a mental representation of a task and then use heuristics to plan in that representation. However, this approach generally assumes that mental representations are fixed. Here, we propose that mental representations can be controlled and that this provides opportunities to adaptively simplify problems so they can be more easily reasoned about -- a process we refer to as construal. We construct a formal model of this process and, in a series of large, pre-registered behavioral experiments, show both that construal is subject to online cognitive control and that people form value-guided construals that optimally balance the complexity of a representation and its utility for planning and acting. These results demonstrate how strategically perceiving and conceiving problems facilitates the effective use of limited cognitive resources.
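The notion of a value-guided construal -- a simplified task representation chosen to balance its usefulness for planning against its complexity -- can be illustrated with a toy search over candidate representations. The example below is a hypothetical sketch, not the authors' formal model; the task, utility numbers, and complexity cost are placeholders:

```python
# A toy, hedged sketch of the "value-guided construal" idea: among candidate
# simplified representations of a task, pick the one that best trades off
# usefulness for planning against representational complexity.

# Each construal is the subset of obstacles the planner bothers to represent.
obstacles = {"wall_A", "wall_B", "clutter_C"}
construals = [frozenset(s) for s in
              [set(), {"wall_A"}, {"wall_A", "wall_B"}, obstacles]]

def planning_utility(construal):
    # Stand-in for the expected return of the plan computed under this construal:
    # ignoring walls on the route is costly, ignoring irrelevant clutter is not.
    penalty = 0.0
    if "wall_A" not in construal:
        penalty += 5.0   # wall_A blocks the route; ignoring it causes failure
    if "wall_B" not in construal:
        penalty += 2.0   # wall_B forces a small detour
    return 10.0 - penalty

def complexity_cost(construal, per_element=1.0):
    return per_element * len(construal)   # more represented detail = harder planning

def construal_value(construal):
    return planning_utility(construal) - complexity_cost(construal)

best = max(construals, key=construal_value)
print(sorted(best), construal_value(best))   # keeps the walls, drops the clutter
```

In this toy setting the best-scoring construal represents both walls but omits the irrelevant clutter, mirroring the trade-off between a representation's utility for planning and its complexity that the abstract describes.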


An Objective Laboratory Protocol for Evaluating Cognition of Non-Human Systems Against Human Cognition

arXiv.org Artificial Intelligence

It is virtually impossible to tease apart human capabilities from human cultural and other background knowledge, so this is necessary to provide an objective point of comparison against humans. Furthermore, a comprehensive understanding of human background knowledge, sufficient to not only recall but apply that knowledge, tests the cognitive capabilities essential to the human kind of understanding. I have recommended that human respondents be drawn from broad populations to ensure that this cultural knowledge is least-common-denominator rather than esoteric. The graders might be able to tell that they are scoring a non-human subject system. Difficulties with the Turing Test have demonstrated that this is probably not an issue: it is relatively easy to fool humans into thinking they are interacting with a human, even without human-level cognitive capabilities. Mimicking human interaction styles, though again not necessarily a goal of the subject system, should not be difficult for a system with cognition comparable to that of humans. Nevertheless, the reason the protocol attempts to disguise which respondents are human or non-human is not because this contributes to the evaluation, but merely to avoid implicit bias in scoring. All the test questions are raster images -- does this mean the system has to do handwriting recognition?


Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions

arXiv.org Artificial Intelligence

Round-the-clock monitoring of human behavior and emotions is required in many healthcare applications; it is very expensive, but it can be automated using machine learning (ML) and sensor technologies. Unfortunately, the lack of infrastructure for collection and sharing of such data is a bottleneck for ML research applied to healthcare. Our goal is to circumvent this bottleneck by simulating a human body in a virtual environment. This will allow generation of potentially infinite amounts of shareable data from an individual as a function of their actions, interactions and emotions in a care facility or at home, with no risk of confidentiality breach or privacy invasion. In this paper, we develop for the first time a system consisting of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals as a function of an open-ended set of actions and emotions. Our experimental evaluations, involving user studies, benchmark datasets and comparison to findings in the literature, show that our models can generate skeletal motion and physiological signals with high fidelity. The proposed framework is modular and allows the flexibility to experiment with different models. In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data, and may be used as a training tool for ML practitioners and healthcare professionals.
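The modular framework described above synthesizes several signals in lockstep from a shared specification of what the virtual person is doing and feeling. The snippet below is a toy, hedged illustration of that idea only; the signal models, parameter values, and function names are placeholders rather than the paper's models:

```python
# A hedged, illustrative sketch of the modular idea: independent signal models
# share a common (action, emotion) input and a common clock, so their outputs
# stay synchronized. All values and waveforms are placeholders.

import math

def heart_rate_bpm(action, emotion):
    base = {"sitting": 70, "walking": 95, "running": 140}.get(action, 80)
    arousal = {"calm": 0, "stressed": 15, "excited": 10}.get(emotion, 0)
    return base + arousal

def ecg_sample(t, bpm):
    # Crude periodic stand-in for an ECG waveform at the given heart rate.
    return math.sin(2 * math.pi * (bpm / 60.0) * t)

def respiration_sample(t, action):
    rate_hz = {"sitting": 0.25, "walking": 0.35, "running": 0.6}.get(action, 0.3)
    return math.sin(2 * math.pi * rate_hz * t)

def synthesize(action, emotion, duration_s=2.0, fs=50):
    """Generate time-aligned samples of all signals for one (action, emotion)."""
    bpm = heart_rate_bpm(action, emotion)
    frames = []
    for i in range(int(duration_s * fs)):
        t = i / fs
        frames.append({
            "t": t,
            "ecg": ecg_sample(t, bpm),
            "respiration": respiration_sample(t, action),
        })
    return frames

frames = synthesize("walking", "stressed")
print(len(frames), frames[0])
```

Each signal model here is a separate function keyed on the same inputs and clock, so any one of them could be swapped for a more faithful model without affecting the others, which reflects the flexibility the abstract claims for the modular design.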