If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
It has been suggested that early human word learning occurs across learning situations and is bootstrapped by syntactic regularities such as word order. Simulation results from ideal learners and models assuming prior access to structured syntactic and semantic representations suggest that it is possible to jointly acquire word order and word meanings, and that learning improves as each language capability bootstraps the other. We first present a probabilistic framework for early syntactic bootstrapping in the absence of advanced structured representations, then use this framework to study the utility of jointly acquiring word order and word referents, and the onset of that benefit, in a memory-limited incremental model. Comparing learning results with and without joint acquisition of word order across contexts of varying ambiguity, the improvement in word order showed an immediate onset, beginning in early trials while remaining sensitive to context ambiguity. Improvement in word learning, on the other hand, was hindered in early trials, where the acquired word order was still imperfect, but was facilitated by word order learning in later trials as the acquired word order improved. Furthermore, our results showed that joint acquisition of word order and word referents facilitates one-shot learning of new words as well as inferring the intentions of the speaker in ambiguous contexts.
Artificial agents will need to be aware of human moral and social norms, and able to use them in decision-making. In particular, artificial agents will need a principled approach to managing conflicting norms, which are common in human social interactions. Existing logic-based approaches suffer from normative explosion and are typically designed for deterministic environments; reward-based approaches lack principled ways of determining which normative alternatives exist in a given environment. We propose a hybrid approach, using Linear Temporal Logic (LTL) representations in Markov Decision Processes (MDPs), that manages norm conflicts in a systematic manner while accommodating domain stochasticity. We provide a proof-of-concept implementation in a simulated vacuum cleaning domain.
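The abstract above describes norms expressed as LTL formulas inside an MDP, with a systematic policy for resolving conflicts between them. As a minimal sketch of the conflict-management idea only, the snippet below approximates each norm by a prioritized state-action predicate (a stand-in for the safety fragment of an LTL formula; the actual paper operates on full temporal formulas and stochastic dynamics). The vacuum-cleaning domain, the `permitted` function, and the specific norms are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: norms as (priority, predicate) pairs; when no action
# satisfies every norm, lower-priority norms are relaxed first.

def permitted(actions, state, norms):
    """Return the actions satisfying a maximal set of norms,
    dropping the lowest-priority norm whenever none can be satisfied."""
    active = sorted(norms, key=lambda n: -n[0])   # highest priority first
    while active:
        ok = [a for a in actions
              if all(pred(state, a) for _, pred in active)]
        if ok:
            return ok
        active.pop()   # relax the lowest-priority remaining norm
    return list(actions)

# Toy vacuum-cleaning domain: an action names the room to clean next.
no_disturb  = (2, lambda s, a: a not in s["occupied"])  # don't vacuum around people
clean_dirty = (1, lambda s, a: a in s["dirty"])         # only clean dirty rooms

# Norm conflict: the only dirty room is occupied, so no action satisfies
# both norms; the lower-priority cleaning norm is relaxed and the robot
# is directed to the unoccupied room instead.
state = {"occupied": {"kitchen"}, "dirty": {"kitchen"}}
print(permitted(["kitchen", "hall"], state, [no_disturb, clean_dirty]))
```

The lexicographic relaxation order here is one simple choice; any systematic preference over normative alternatives could be substituted at the `active.pop()` step.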
The ability to refer to entities such as objects, locations, and people is an important capability for robots designed to interact with humans. For example, a referring expression (RE) such as “Do you mean the box on the left?” might be used by a robot seeking to disambiguate between objects. In this paper, we present and evaluate algorithms for Referring Expression Generation (REG) in small-scale situated contexts. We first present data regarding how humans generate small-scale spatial referring expressions (REs). We then use this data to define five categories of observed small-scale spatial REs, and use these categories to create an ensemble of REG algorithms. Next, we evaluate REs generated by those algorithms and by humans both subjectively (by having participants rank REs) and objectively (by assessing task performance when participants use REs) through a set of interrelated crowdsourced experiments. While our machine-generated REs were subjectively rated lower than those generated by humans, they significantly outperformed human REs on the objective measure. Finally, we discuss the main contributions of this work: (1) a dataset of images and REs, (2) a categorization of observed small-scale spatial REs, (3) an ensemble of REG algorithms, and (4) a crowdsourcing-based framework for subjectively and objectively evaluating REG.
Advances in machine learning have prompted new ideas about the feasibility and importance of having machine ethics keep pace, with increasing emphasis on safety, containment, and alignment. This paper addresses a recent suggestion that inverse reinforcement learning (IRL) could be a means to so-called "value alignment." We critically consider how such an approach can engage the social, norm-infused nature of ethical action and outline several features of ethical appraisal that go beyond simple models of behavior, including the unavoidably temporal dimensions of norms and counterfactuals. We propose that a hybrid approach to computational architectures still offers the most promising avenue for machines acting in an ethical fashion.
A major challenge for robots interacting with humans in realistic environments is handling robots' uncertainty with respect to the identities and properties of the people, places, and things found in their environments: a problem compounded when humans refer to these entities using underspecified language. In this paper, we present a framework for generating clarification requests in the face of both pragmatic and referential ambiguity, and show how we are able to handle several stages of this framework by integrating a Dempster-Shafer (DS)-theoretic pragmatic reasoning component with a probabilistic reference resolution component.
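The abstract above mentions a Dempster-Shafer (DS)-theoretic reasoning component. The core DS operation, Dempster's rule of combination, fuses mass functions from independent evidence sources while explicitly representing ignorance; the sketch below implements that rule on a toy reference-resolution frame. The `combine` function and the example masses (a gesture cue and a linguistic cue about which box is meant) are illustrative assumptions, not the paper's component.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    A mass function maps frozensets of hypotheses to belief mass.
    Mass assigned to conflicting (empty-intersection) pairs is
    discarded and the remainder renormalized.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("evidence sources are completely conflicting")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Two evidence sources about which object the speaker means; each
# leaves some mass on the full frame, representing ignorance.
frame = frozenset({"box1", "box2"})
m_gesture  = {frozenset({"box1"}): 0.6, frame: 0.4}
m_language = {frozenset({"box1"}): 0.5, frozenset({"box2"}): 0.2, frame: 0.3}
fused = combine(m_gesture, m_language)   # belief concentrates on box1
```

The explicit mass on the whole frame is what lets a DS-theoretic resolver distinguish "uncertain between box1 and box2" from "50/50 evidence for each," which is useful when deciding whether a clarification request is warranted.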
One of the hallmarks of humans as social agents is the ability to adjust their language to the norms of the particular situational context. When necessary, they can be terse, direct, and task-oriented, and in other situations they can be more indirect and polite. For future robots to truly earn the label “social,” it is necessary to develop mechanisms to enable robots with NL capabilities to adjust their language in similar ways. In this paper, we highlight the various dimensions involved in this challenge, and discuss how socially-sensitive natural-language generation can be implemented in a cognitive, robotic architecture.
We present a domain-independent approach to reference resolution that allows a robotic or virtual agent to resolve references to entities (e.g., objects and locations) found in open worlds when the information needed to resolve such references is distributed among multiple heterogeneous knowledge bases in its architecture. An agent using this approach can combine information from multiple sources without the computational bottleneck associated with centralized knowledge bases. The proposed approach also facilitates “lazy constraint evaluation”, i.e., verifying properties of the referent through different modalities only when the information is needed. After specifying the interfaces by which a reference resolution algorithm can request information from distributed knowledge bases, we present an algorithm for performing open-world reference resolution within that framework, analyze the algorithm’s performance, and demonstrate its behavior on a simulated robot.
The concept of "affordance" represents the relationship between human perceivers and their environment. Affordance perception, representation, and inference are central to commonsense reasoning, tool-use and creative problem-solving in artificial agents. Existing approaches fail to provide flexibility with which to reason about affordances in the open world, where they are influenced by changing context, social norms, historical precedence, and uncertainty. We develop a formal rules-based logical representational format coupled with an uncertainty-processing framework to reason about cognitive affordances in a more general manner than shown in the existing literature. Our framework allows agents to make deductive and abductive inferences about functional and social affordances, collectively and dynamically, thereby allowing the agent to adapt to changing conditions. We demonstrate our approach with an example, and show that an agent can successfully reason through situations that involve a tight interplay between various social and functional norms.
Much existing work examining the ethical behaviors of robots does not consider the impact and effects of long-term human-robot interactions. A robot teammate, collaborator, or helper is often expected to increase task performance, individually or of the team, but little discussion is usually devoted to how such a robot should balance the task requirements with building and maintaining a “working relationship” with a human partner, much less appropriate social relations outside that team. We propose the “Relational Enhancement” framework for the design and evaluation of long-term interactions, which is composed of the interrelated concepts of efficiency, solidarity, and prosocial concern. We discuss how this framework can be used to evaluate common existing approaches in cognitive architectures for robots and then examine how social norms and mental simulation may contribute to each of the components of the framework.