Kuipers, Benjamin


Object Manipulation Learning by Imitation

arXiv.org Artificial Intelligence

We aim to enable a robot to learn object manipulation by imitation. Given external observations of demonstrations of object manipulation, we believe there are two underlying problems to address in learning by imitation: 1) segmenting a given demonstration into skills that can be individually learned and reused, and 2) formulating the correct RL (Reinforcement Learning) problem that considers only the relevant aspects of each skill, so that the policy for each skill can be learned effectively. Previous work has made progress in this direction, but none has taken private information into account. Public information is the information available in the external observations of a demonstration; private information is information available only to the agent that executes the actions, such as tactile sensations. Our contribution is a method for the robot to automatically segment a demonstration of object manipulation into multiple skills, formulate the correct RL problem for each skill, and automatically decide, based on interaction with the world, whether private information is an important aspect of each skill. Our experiment shows that our robot learns to pick up a block and stack it onto another block by imitating an observed demonstration. The evaluation is based on 1) whether the demonstration is reasonably segmented, 2) whether the correct RL problems are formulated, and 3) whether a good policy is learned.
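
As a concrete illustration of the segmentation and private-information steps, here is a minimal Python sketch (function names, contact labels, and the threshold are our assumptions, not the paper's algorithm): a demonstration is split into skills wherever a discrete contact state changes, and a private feature is judged relevant to a skill when policies trained with it earn noticeably higher returns than policies trained without it.

    # Hypothetical sketch, not the paper's method: segment at contact changes,
    # then test whether a private (e.g., tactile) feature improves returns.
    import numpy as np

    def segment_demo(contact_states):
        """Return (start, end) index pairs, one per skill segment."""
        boundaries = [0] + [t for t in range(1, len(contact_states))
                            if contact_states[t] != contact_states[t - 1]]
        boundaries.append(len(contact_states))
        return list(zip(boundaries[:-1], boundaries[1:]))

    def private_info_matters(returns_with, returns_without, margin=0.05):
        """Decide from interaction whether the private feature helps: compare
        mean returns of policies trained with vs. without the feature."""
        return np.mean(returns_with) - np.mean(returns_without) > margin

    # Hypothetical pick-and-stack demo with free / grasp / place contact phases.
    contacts = np.array([0] * 30 + [1] * 40 + [2] * 20)
    print(segment_demo(contacts))   # [(0, 30), (30, 70), (70, 90)]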



Remembering Marvin Minsky

AI Magazine

Marvin Minsky, one of the pioneers of artificial intelligence and a renowned mathematician and computer scientist, died on Sunday, 24 January 2016 of a cerebral hemorrhage. He was 88. In this article, AI scientists Kenneth D. Forbus (Northwestern University), Benjamin Kuipers (University of Michigan), and Henry Lieberman (Massachusetts Institute of Technology) recall their interactions with Minsky and briefly recount the impact he had on their lives and their research. A remembrance of Marvin Minsky was held at the AAAI Spring Symposium at Stanford University on March 22. Video remembrances of Minsky by Danny Bobrow, Benjamin Kuipers, Ray Kurzweil, Richard Waldinger, and others can be found on the sentient webpage or on youtube.com.


Human-Like Morality and Ethics for Robots

AAAI Conferences

Humans need morality and ethics to get along constructively as members of the same society. As we face the prospect of robots taking a larger role in society, we need to consider how they, too, should behave toward other members of society. To the extent that robots will be able to act as agents in their own right, as opposed to being simply tools controlled by humans, they will need to behave according to some moral and ethical principles. Inspired by recent research on the cognitive science of human morality, we propose the outlines of an architecture for morality and ethics in robots. As in humans, there is a rapid intuitive response to the current situation. Reasoned reflection takes place at a slower time-scale, and is focused more on constructing a justification than on revising the reaction. However, there is a yet slower process of social interaction, in which both the example of an action and its justification influence the moral intuitions of others. The signals an agent provides to others, and the signals received from others, help each agent determine which others are suitable cooperative partners, and which are likely to defect. This moral architecture is illustrated by several examples, and we identify research results that will be necessary for the architecture to be implemented.


Toward Morality and Ethics for Robots

AAAI Conferences

Humans need morality and ethics to get along constructively as members of the same society. As we face the prospect of robots taking a larger role in society, we need to consider how they, too, should behave toward other members of society. To the extent that robots will be able to act as agents in their own right, as opposed to being simply tools controlled by humans, they will need to behave according to some moral and ethical principles. Inspired by recent research on the cognitive science of human morality, we take steps toward an architecture for morality and ethics in robots. As in humans, there is a rapid intuitive response to the current situation. Reasoned reflection takes place at a slower time-scale, and is focused more on constructing a justification than on revising the reaction. However, there is a yet slower process of social interaction, in which examples of moral judgments and their justifications influence the moral development both of individuals and of the society as a whole. This moral architecture is illustrated by several examples, and we identify research results that will be necessary for the architecture to be implemented.
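
The three time-scales described in these two abstracts can be made concrete with a small sketch. The following Python is a hypothetical illustration (the class and method names are ours, not from the papers): a fast intuitive reaction drawn from cached responses, a slower reflective step that constructs a justification for the reaction already taken, and a slowest social channel through which others' examples reshape the agent's intuitions.

    # Hypothetical sketch of the three time-scales; names are ours.
    class MoralAgent:
        def __init__(self, intuitions):
            # intuitions: situation label -> cached intuitive reaction
            self.intuitions = intuitions

        def react(self, situation):
            """Fast path: the rapid intuitive response to the current situation."""
            return self.intuitions.get(situation, "pause and seek guidance")

        def reflect(self, situation, reaction):
            """Slower path: construct a justification for the reaction taken."""
            return f"In '{situation}', '{reaction}' supports cooperation."

        def observe_society(self, situation, example_reaction):
            """Slowest path: others' examples and justifications reshape intuitions."""
            self.intuitions[situation] = example_reaction

    agent = MoralAgent({"stranger drops wallet": "return it"})
    reaction = agent.react("stranger drops wallet")
    print(reaction, "--", agent.reflect("stranger drops wallet", reaction))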


Preface

AAAI Conferences

A human-level artificially intelligent agent must be able to represent and reason about the world, at some level, in terms of high-level concepts such as entities and relations. The problem of acquiring these rich high-level representations, known as the knowledge acquisition bottleneck, has long been an obstacle to achieving human-level AI. A popular approach to this problem is to handcraft these high-level representations, but this has had limited success. An alternative approach is for rich representations to be learned autonomously from low-level sensor data. The latter approach may yield more robust representations and should rely less on human knowledge engineering. The papers in this workshop present work and strategies in this latter approach.


An existing, ecologically-successful genus of collectively intelligent artificial creatures

arXiv.org Artificial Intelligence

People sometimes worry about the Singularity [Vinge, 1993; Kurzweil, 2005], or about the world being taken over by artificially intelligent robots. I believe the risks of these are very small. However, few people recognize that we already share our world with artificial creatures that participate as intelligent agents in our society: corporations. Our planet is inhabited by two distinct kinds of intelligent beings --- individual humans and corporate entities --- whose natures and interests are intimately linked. To co-exist well, we need to find ways to define the rights and responsibilities of both individual humans and corporate entities, and to find ways to ensure that corporate entities behave as responsible members of society.


Toward Bootstrap Learning of the Foundations of Commonsense Knowledge

AAAI Conferences

Our goal is for an autonomous learning agent to acquire the knowledge that serves as the foundations of common sense from its own experience without outside guidance. This requires the agent to (1) learn the structure of its own sensors and effectors; (2) learn a model of space around itself; (3) learn to move effectively in that space; (4) identify and describe objects, as distinct from the static environment; (5) learn and represent actions for affecting those objects, including preconditions and postconditions, and so on. We will provide examples of progress we have made, and the roadmap we envision for future research.


Sensor Map Discovery for Developing Robots

AAAI Conferences

Modern mobile robots navigate uncertain environments using complex compositions of camera, laser, and sonar sensor data. Manual calibration of these sensors is a tedious process that involves determining sensor behavior, geometry, and location through model specification and system identification. Instead, we seek to automate the construction of sensor model geometry by mining uninterpreted sensor streams for regularities. Manifold learning methods are powerful techniques for deriving sensor structure from streams of sensor data. In recent years, the proliferation of manifold learning algorithms has led to a variety of choices for autonomously generating models of sensor geometry. We present a series of comparisons between different manifold learning methods for discovering sensor geometry in the specific case of a mobile robot with a variety of sensors. We also explore the effect of control laws and sensor boundary size on the efficacy of manifold learning approaches. We find that "motor babbling" control laws generate better geometric sensor maps than mid-line or wall-following control laws, and we identify a novel method for distinguishing boundary sensor elements. We also present a new learning method, sensorimotor embedding, that takes advantage of the controllable nature of robots to build sensor maps.
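
As a rough illustration of the approach, the following Python sketch uses Isomap (standing in for the several manifold learning methods the paper compares) to recover the layout of a simulated ring of range sensors from their uninterpreted streams; the simulated data and the correlation-distance construction are our assumptions, not the paper's.

    # Rough sketch: recover sensor geometry from uninterpreted streams.
    # Under "motor babbling", nearby sensors see correlated readings, so the
    # correlation structure of the streams encodes the sensors' layout.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    n_sensors, n_steps = 36, 2000
    angles = np.linspace(0, 2 * np.pi, n_sensors, endpoint=False)

    latent = rng.normal(size=(n_steps, 2))            # random pose excitation
    readings = (np.cos(angles)[None, :] * latent[:, :1]
                + np.sin(angles)[None, :] * latent[:, 1:]
                + 0.1 * rng.normal(size=(n_steps, n_sensors)))

    # Treat each sensor as a point; embed sensors in 2-D from pairwise distances.
    corr = np.corrcoef(readings.T)
    dist = np.sqrt(np.maximum(0.0, 1.0 - corr))       # correlation distance
    coords = Isomap(n_components=2, metric="precomputed").fit_transform(dist)
    print(coords.shape)                               # (36, 2): the sensor map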