Sensor-to-Symbol Reasoning for Embedded Intelligence

AAAI Conferences

Sensor-to-symbol conversion lies at the heart of all embedded intelligent systems. The everyday world occupied by human stakeholders is dominated by objects that have symbolic labels. For an embedded intelligent system to operate in such a world, it must also be able to segment its sensory stream into objects and label those objects appropriately. It is our position that the development of a consistent and flexible sensor-to-symbol reasoning system (or architecture) is a key component of embedded intelligence.
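
The segment-then-label step the abstract alludes to can be made concrete with a minimal Python sketch (an illustration only; the segment and classify callables are hypothetical placeholders for whatever perception components a concrete system provides):

    from dataclasses import dataclass
    from typing import Any, Callable, List, Tuple

    @dataclass
    class SymbolicObject:
        symbol: str          # unique symbol, e.g. "cup-0"
        label: str           # category label, e.g. "cup"
        confidence: float    # classifier confidence in [0, 1]

    def sensor_to_symbols(frame: Any,
                          segment: Callable[[Any], List[Any]],
                          classify: Callable[[Any], Tuple[str, float]]) -> List[SymbolicObject]:
        """Segment one sensor frame into regions and attach a symbolic label to each."""
        objects = []
        for i, region in enumerate(segment(frame)):
            label, confidence = classify(region)
            objects.append(SymbolicObject(symbol=f"{label}-{i}", label=label,
                                          confidence=confidence))
        return objects

The point of the sketch is only that the output of perception is a list of discrete, symbolically labeled objects that downstream reasoning can refer to by name.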


On the Role of Learning in Anchoring (position paper)
Silvia Coradeschi and Alessandro Saffiotti

AAAI Conferences

Center for Applied Autonomous Sensor Systems, Örebro University, S-701 82 Örebro, Sweden

A situated agent may include a symbolic subsystem that uses symbols to denote objects in the physical world. Anchoring is the problem of connecting these symbols to the perceptual representations of the same objects. Learning is an important, and in some cases essential, means to acquire the basic ingredients needed to perform anchoring. In this note, we discuss some issues concerning the role of learning in anchoring. The focus of this note is the connection between symbol- and sensor-level representations of objects in autonomous robotic systems embedded in a physical environment.


Anchoring Symbols to Sensor Data: Preliminary Report

AAAI Conferences

Anchoring is the process of creating and maintaining the correspondence between symbols and percepts that refer to the same physical objects. Although this process must necessarily be present in any physically embedded system that includes a symbolic component (e.g., an autonomous robot), no systematic study of anchoring as a problem per se has been reported in the literature on intelligent systems. In this paper, we propose a domain-independent definition of the anchoring problem, and identify its three basic functionalities: find, reacquire, and track. We illustrate our definition on two systems operating in two different domains: an unmanned airborne vehicle for traffic surveillance; and a mobile robot for office navigation.
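
The three functionalities can be sketched in Python as follows (a hedged illustration under assumed percept and matcher formats, not the authors' definition of anchoring):

    from typing import Any, Callable, Dict, List, Optional

    Percept = Dict[str, Any]                        # e.g. {"color": "red", "position": (1.0, 2.0)}
    Matcher = Callable[[Dict[str, Any], Percept], bool]

    class Anchor:
        """Connects a symbol to the percept currently believed to denote the same object."""

        def __init__(self, symbol: str, description: Dict[str, Any]):
            self.symbol = symbol                    # e.g. "car-2"
            self.description = description          # symbolic properties, e.g. {"color": "red"}
            self.percept: Optional[Percept] = None

        def find(self, percepts: List[Percept], matches: Matcher) -> bool:
            """Establish the anchor: select a percept matching the symbolic description."""
            for p in percepts:
                if matches(self.description, p):
                    self.percept = p
                    return True
            return False

        def track(self, percepts: List[Percept], matches: Matcher) -> bool:
            """Maintain the anchor while the object stays observable; a real tracker
            would also exploit temporal continuity with the previous percept."""
            if self.percept is not None and self.find(percepts, matches):
                return True
            self.percept = None
            return False

        def reacquire(self, percepts: List[Percept], matches: Matcher) -> bool:
            """Re-establish the anchor after the object has been out of view."""
            return self.find(percepts, matches)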


Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations

arXiv.org Artificial Intelligence

Human-robot interaction often occurs in the form of instructions given from a human to a robot. For a robot to successfully follow instructions, a common representation of the world and the objects in it should be shared between the human and the robot so that the instructions can be grounded. This shared representation can be achieved via learning, where the world representation and the language grounding are learned simultaneously. In robotics, however, this is a difficult task due to the cost and scarcity of data. In this paper, we tackle the problem by learning the robot's world representation and the language grounding separately. While this approach addresses the challenge of obtaining sufficient data, it may give rise to inconsistencies between the two learned components. We therefore propose Bayesian learning to resolve such inconsistencies between the natural language grounding and the robot's world representation by exploiting spatio-relational information that is implicitly present in the instructions given by a human. Finally, we demonstrate the feasibility of our approach in a scenario involving a robotic arm in the physical world.
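
The Bayesian resolution step can be illustrated with a small sketch (my own simplification under assumed probability tables, not the paper's model): the language grounder supplies a prior over candidate referents, and the robot's world representation supplies the likelihood that each candidate satisfies a spatial relation implicit in the instruction (e.g. "the cup left of the plate").

    def bayesian_rerank(prior, relation_likelihood):
        """prior: {object_id: P(object | language)};
        relation_likelihood: {object_id: P(relation holds | object, world model)}."""
        unnormalized = {o: prior[o] * relation_likelihood.get(o, 0.0) for o in prior}
        z = sum(unnormalized.values()) or 1.0
        return {o: p / z for o, p in unnormalized.items()}

    # The grounder slightly prefers cup_2, but only cup_1 is left of the plate in the
    # robot's world model, so the posterior shifts to cup_1.
    posterior = bayesian_rerank(
        prior={"cup_1": 0.4, "cup_2": 0.6},
        relation_likelihood={"cup_1": 0.9, "cup_2": 0.1},
    )
    print(posterior)   # {'cup_1': 0.857..., 'cup_2': 0.142...}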


An Integrated Robotic System for Spatial Understanding and Situated Interaction in Indoor Environments

AAAI Conferences

A major challenge in robotics and artificial intelligence lies in creating robots that are able to cooperate with people in human-populated environments, e.g. for domestic assistance or elderly care. Such robots need skills that allow them to interact with the world and with the humans living and working therein. In this paper we investigate the question of spatial understanding of human-made environments. The functionalities of our system comprise perception of the world, natural language, learning, and reasoning. For this purpose we integrate state-of-the-art components from different disciplines in AI, robotics, and cognitive systems into a mobile robot system. The work focuses on the principles we used for the integration, including cross-modal integration, ontology-based mediation, and multiple levels of abstraction of perception. Finally, we present experiments with the integrated "CoSy Explorer" system.
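
As a toy illustration of the ontology-based mediation mentioned above (an assumption-laden sketch, not the CoSy Explorer implementation), perceptual labels can be mapped into ontology concepts so that dialogue and spatial reasoning refer to the same entity at different levels of abstraction:

    # Toy is-a hierarchy; the concept names are invented for illustration.
    IS_A = {
        "coffee_machine": "kitchen_appliance",
        "kitchen_appliance": "object",
        "kitchen": "room",
        "room": "area",
    }

    def ancestors(concept: str) -> list:
        """All concepts the given concept specializes, following is-a links upward."""
        chain = []
        while concept in IS_A:
            concept = IS_A[concept]
            chain.append(concept)
        return chain

    # A detected "coffee_machine" can then be matched against an instruction that only
    # mentions a "kitchen_appliance" or, more abstractly, an "object".
    print(ancestors("coffee_machine"))   # ['kitchen_appliance', 'object']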