If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Fast mapping is a phenomenon by which children learn the meanings of novel adjectives after a very small number of exposures when the new word is contrasted with a known word. The present study was a preliminary test of whether machine learners could use such contrasts in unconstrained speech to learn adjective meanings and categories. We evaluated six decision-tree-based learning methods that use contrasting examples, as a step toward an adjective fast-mapping system for machine learners. Subjects tended to compare objects using adjectives of the same category, implying that such contrasts may be a useful source of data about adjective meaning, though none of the learning algorithms showed a strong advantage over the others.
Inconsistency in the behaviors of virtual agents and robots, such as mismatches among utterance content, utterance form, and posture, can influence human impressions, cognition, and memory, and may consequently inhibit dialogue between humans and these artifacts. To examine this possibility and its implications for dialogue design, this paper introduces several case studies conducted in Japan using simple animated characters and a small humanoid robot.
There are two common “bad habits” in the description, analysis, interpretation and teaching of fundamental aspects of quantum mechanics. One is the all too casual use of the term “information”, without making it explicitly clear which of the various types, Shannon, algorithmic or pragmatic, is meant. The other concerns the use of the term “knowledge”, without alluding to specific aspects of human brain function, for instance, when the observer selects the “system under study”, formulates simplified models, asks theoretical questions, plans an experiment, decides what to measure, prepares the system, chooses initial conditions, anticipates certain results and confirms a final state. I will show how an objective definition of information and recent results about information-processing in the brain help overcome the most common counter-intuitive aspects of quantum mechanics. In particular, I will discuss entanglement, teleportation, interaction-free measurements and decoherence in the light of the fact that the concept of pragmatic information, the one our brain handles, can only be defined in the macroscopic domain. Counter-intuitive aspects arise when we construct mental images of quantum systems in which the concept of pragmatic information is illegitimately forced into the quantum domain.
Narratives structure our understanding of the world and of ourselves. They exploit the shared cognitive structures of human motivations, goals, actions, events, and outcomes. We report on a computational model that is motivated by results in neural computation and captures fine-grained, context-sensitive information about human goals, processes, actions, policies, and outcomes. We describe the use of the model in the context of a pilot system that is able to interpret simple stories and narrative fragments in the domain of international politics and economics. We identify problems with the pilot system and outline extensions required to incorporate several crucial dimensions of narrative structure.
The “top-bottom” (MSP) technique for modelling Complex Adaptive Systems (CAS) (Pushnoi 2003, 2004a, 2004b; Pushnoi and Bonser 2008) is applied to explore the macroscopic properties of economic systems. We consider an MSP model of an economic CAS in which, at the most abstract level, two global feedbacks determine the system's dynamics: a positive feedback drives changes in the system's temporary equilibrium state, whereas a negative feedback stabilizes it. The interplay of these feedbacks engenders highly complex macroscopic dynamics, with catastrophic jumps and discontinuous cycles.
When general-purpose software agents fail, it is often because they are brittle and need more background commonsense knowledge. In this paper we present relation properties as a valuable type of commonsense knowledge that can be automatically inferred at scale by reading the Web. People base many commonsense inferences on their knowledge of relation properties such as functionality, transitivity, and others. For example, everyone knows that bornIn(Year) satisfies the functionality property, meaning that each person can be born in exactly one year. Thus inferences like "Obama was born in 1961, so he was not born in 2008" are obvious even to children, yet unknown to computers. We demonstrate scalable heuristics for learning relation functionality from noisy Web text that outperform existing approaches to detecting functionality. The heuristics we use address Web NLP challenges that are also common to learning other relation properties, and can be easily transferred. Each relation property we learn for a Web-scale set of relations will enable computers to solve real tasks, and the data from learning many such properties will be a useful addition to general commonsense knowledge bases.
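The kind of inference the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's system: the `is_functional` lookup stands in for a functionality judgment that the paper learns from noisy Web text, and the relation and entity names are purely illustrative.

```python
def is_functional(relation):
    # Hypothetical lookup of learned relation properties; in the paper,
    # functionality is learned from Web text rather than hard-coded.
    return relation in {"bornIn"}

class KnowledgeBase:
    """Toy fact store that uses functionality to reject contradictions."""

    def __init__(self):
        self.facts = {}  # (relation, subject) -> value

    def add(self, relation, subject, value):
        key = (relation, subject)
        if is_functional(relation) and key in self.facts and self.facts[key] != value:
            # Functionality: each subject has exactly one value for this
            # relation, so a conflicting assertion is contradictory.
            return False
        self.facts[key] = value
        return True

kb = KnowledgeBase()
kb.add("bornIn", "Obama", 1961)         # accepted
ok = kb.add("bornIn", "Obama", 2008)    # rejected: bornIn is functional
```

A non-functional relation (say, `visited`) would skip the check and happily accumulate multiple values per subject, which is why learning the property per relation, rather than assuming it globally, is the interesting problem.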
A seminal study conducted by Greene, Bolick, and Robertson (2010) showed that learners do not always engage in appropriate metacognitive and self-regulatory processes while learning about history. However, little research exists to guide the design of technology-rich learning environments (TRLEs) as metacognitive tools in social sciences education. To address this issue, we designed a metacognitive tool using a bottom-up approach (Poitras, 2010; Poitras, Lajoie, & Hong, in prep). Thirty-two undergraduate students read an historical narrative text either with or without the benefit of the metacognitive tool. Results from process and product data suggest that learners had better recall because the metacognitive tool helped them to (a) notice that particular events are unexplained in the circumstances described in an historical narrative text, and (b) generate hypothetical causes to explain the occurrence of such events. We discuss the implications of these findings for the development of the MetaHistoReasoning Tool, a TRLE that assists learners’ historical reasoning while they accomplish authentic tasks of historical inquiry.
In this article we describe a cognitive heuristic known as the unpacking effect using a mathematical model, based on the quantum formalism, previously introduced for the conjunction fallacy. We present the basic postulates of this quantum-like model and show that the presence of interference terms is responsible for the unpacking effect. In particular, the sign of the interference and its functional form can describe the experimental results on subadditivity, superadditivity and additivity. A comparison with previous models is presented, as well as new experimental predictions, allowing us to conclude that this new formalism and the basic concepts of quantum information processing provide a promising new way to describe and understand human judgement and categorization.
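The role of the interference term can be sketched in generic quantum-like form (this is the standard shape of such models, not necessarily the paper's exact equations): when a hypothesis $A$ is unpacked into exclusive components $A_1$ and $A_2$, the judged probability of the packed hypothesis acquires an interference correction,

```latex
% Generic quantum-like interference sketch; P(.) are judged probabilities
% and theta is the phase between the component amplitudes.
\[
  P(A) \;=\; P(A_1) + P(A_2) + 2\sqrt{P(A_1)\,P(A_2)}\,\cos\theta
\]
% cos(theta) < 0  ->  P(A) < P(A_1) + P(A_2)  (subadditivity)
% cos(theta) = 0  ->  P(A) = P(A_1) + P(A_2)  (additivity, classical case)
% cos(theta) > 0  ->  P(A) > P(A_1) + P(A_2)  (superadditivity)
```

so the sign of $\cos\theta$ selects among the three experimentally observed regimes, as the abstract indicates.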
This paper describes results from a large-scale, complex human study using non-facial and non-verbal affect for victim management in robot-assisted urban search and rescue applications. Statistically significant results are presented indicating that participants felt emotive robots were more calming, friendlier, and more attentive.
A large body of research describes the importance of adaptability for systems to be resilient in the face of disruptions. However, adaptive processes can be fallible, either because systems fail to adapt in situations requiring new ways of functioning, or because the adaptations themselves produce undesired consequences. A central question is then: how can systems better manage their capacity to adapt to perturbations, and constitute intelligent adaptive systems? Based on studies conducted in different high-risk domains (healthcare, mission control, military operations, urban firefighting), we have identified three basic patterns of adaptive failures or traps: (1) decompensation – when a system exhausts its capacity to adapt as disturbances and challenges cascade; (2) working at cross-purposes – when sub-systems or roles exhibit behaviors that are locally adaptive but globally maladaptive; (3) getting stuck in outdated behaviors – when a system over-relies on past successes although conditions of operation change. The identification of such basic patterns then suggests ways in which a work organization, as an example of a complex adaptive system, needs to behave in order to see and avoid or recognize and escape the corresponding failures. The paper will present how expert practitioners exhibit such resilient behaviors in high-risk situations, and how adverse events can occur when systems fail to do so. We will also explore how various efforts in research related to complex adaptive systems provide fruitful directions to advance both the necessary theoretical work and the development of concrete solutions for improving systems’ resilience.