Kuipers, Benjamin
Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping
Juett, Jonathan, Kuipers, Benjamin
The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
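The abstract describes the PPS graph as a plain graph structure: nodes are arm configurations sensed through proprioception, edges are movements observed to be safe, and paths are safe trajectories between poses. A minimal sketch of that structure, using breadth-first search over joint-angle tuples (the class and method names here are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

class PPSGraph:
    """Sketch of a peripersonal-space graph for a simplified arm."""

    def __init__(self):
        self.edges = {}  # node (joint-angle tuple) -> set of neighbor nodes

    def add_node(self, config):
        self.edges.setdefault(config, set())

    def add_edge(self, a, b):
        # An edge records that moving directly between configurations
        # a and b was observed to be safe during exploration.
        self.add_node(a)
        self.add_node(b)
        self.edges[a].add(b)
        self.edges[b].add(a)

    def safe_path(self, start, goal):
        # Breadth-first search: any path through safe edges is itself
        # a safe trajectory from one arm pose to another.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # goal not reachable through known-safe edges

# Toy exploration record: three safe movements between four arm poses.
g = PPSGraph()
g.add_edge((0, 0), (10, 0))
g.add_edge((10, 0), (10, 20))
g.add_edge((0, 0), (0, 20))
print(g.safe_path((0, 0), (10, 20)))  # [(0, 0), (10, 0), (10, 20)]
```

Because edges are added only after a movement succeeds, the graph never proposes a trajectory through unexplored space, which matches the abstract's emphasis on safety during autonomous exploration.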
Object Manipulation Learning by Imitation
Zeng, Zhen, Kuipers, Benjamin
We aim to enable a robot to learn object manipulation by imitation. Given external observations of demonstrations of object manipulation, we believe there are two underlying problems to address in learning by imitation: 1) segmenting a given demonstration into skills that can be individually learned and reused, and 2) formulating the correct RL (Reinforcement Learning) problem, considering only the relevant aspects of each skill, so that the policy for each skill can be effectively learned. Previous work has made some progress in this direction, but none has taken private information into account. Public information is available in the external observations of a demonstration, while private information, such as tactile sensation, is available only to the agent executing the actions. Our contribution is a method for the robot to automatically segment a demonstration of object manipulation into multiple skills, formulate the correct RL problem for each skill, and automatically decide, based on interaction with the world, whether private information is an important aspect of each skill. Our experiment shows that our robot learns to pick up a block and stack it onto another block by imitating an observed demonstration. The evaluation is based on 1) whether the demonstration is reasonably segmented, 2) whether the correct RL problems are formulated, and 3) whether a good policy is learned.
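One natural way to segment a pick-and-stack demonstration into skills is at the time steps where a private signal such as gripper contact changes state (e.g., the moment the block is grasped or released). The following is a hypothetical sketch of that idea; the data and function names are illustrative assumptions, not the paper's actual pipeline:

```python
def segment_by_contact(contact_signal):
    """Return index ranges [(start, end), ...] over which the contact
    state is constant; each range is a candidate skill segment."""
    segments = []
    start = 0
    for t in range(1, len(contact_signal)):
        if contact_signal[t] != contact_signal[t - 1]:
            # Contact state changed: close the current segment here.
            segments.append((start, t))
            start = t
    segments.append((start, len(contact_signal)))
    return segments

# 0 = no contact (reach), 1 = holding the block (transport), 0 = released.
demo = [0, 0, 0, 1, 1, 1, 1, 0, 0]
print(segment_by_contact(demo))  # [(0, 3), (3, 7), (7, 9)]
```

In this toy trace the demonstration splits into three segments (reach, transport, release), each of which could then be posed as its own RL problem over only the state variables relevant to that skill.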
Ethical Considerations in Artificial Intelligence Courses
Burton, Emanuelle (University of Kentucky) | Goldsmith, Judy (University of Kentucky) | Koenig, Sven (University of Southern California) | Kuipers, Benjamin (University of Michigan) | Mattei, Nicholas (IBM Research) | Walsh, Toby (University of New South Wales and Data61)
The recent surge in interest in ethics in artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. As instructors we want to develop curriculum that not only prepares students to be artificial intelligence practitioners, but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course.
Shakey: From Conception to History
Kuipers, Benjamin (University of Michigan) | Feigenbaum, Edward A. (Stanford University) | Hart, Peter E. (Ricoh Innovations) | Nilsson, Nils J. (Stanford University)
Shakey the Robot, conceived fifty years ago, was a seminal contribution to AI. Shakey perceived its world, planned how to achieve a goal, and acted to carry out that plan. This was revolutionary. At the Twenty-Ninth AAAI Conference on Artificial Intelligence, attendees gathered to celebrate Shakey, and to gain insights into how the AI revolution moves ahead. The celebration included a panel, chaired by Benjamin Kuipers and featuring AI pioneers Ed Feigenbaum, Peter Hart, and Nils Nilsson. This article includes written versions of the contributions of those panelists.
Remembering Marvin Minsky
Forbus, Kenneth D. (Northwestern University) | Kuipers, Benjamin (University of Michigan) | Lieberman, Henry (Massachusetts Institute of Technology)
Marvin Minsky, one of the pioneers of artificial intelligence and a renowned mathematician and computer scientist, died on Sunday, 24 January 2016 of a cerebral hemorrhage. He was 88. In this article, AI scientists Kenneth D. Forbus (Northwestern University), Benjamin Kuipers (University of Michigan), and Henry Lieberman (Massachusetts Institute of Technology) recall their interactions with Minsky and briefly recount the impact he had on their lives and their research. A remembrance of Marvin Minsky was held at the AAAI Spring Symposium at Stanford University on March 22. Video remembrances of Minsky by Danny Bobrow, Benjamin Kuipers, Ray Kurzweil, Richard Waldinger, and others can be found on the sentient webpage or on youtube.com.
Human-Like Morality and Ethics for Robots
Kuipers, Benjamin (University of Michigan)
Humans need morality and ethics to get along constructively as members of the same society. As we face the prospect of robots taking a larger role in society, we need to consider how they, too, should behave toward other members of society. To the extent that robots will be able to act as agents in their own right, as opposed to being simply tools controlled by humans, they will need to behave according to some moral and ethical principles. Inspired by recent research on the cognitive science of human morality, we propose the outlines of an architecture for morality and ethics in robots. As in humans, there is a rapid intuitive response to the current situation. Reasoned reflection takes place at a slower time-scale, and is focused more on constructing a justification than on revising the reaction. However, there is a yet slower process of social interaction, in which both the example of action and its justification influence the moral intuitions of others. The signals an agent provides to others, and the signals received from others, help each agent determine which others are suitable cooperative partners, and which are likely to defect. This moral architecture is illustrated by several examples, including identifying research results that will be necessary for the architecture to be implemented.
Toward Morality and Ethics for Robots
Kuipers, Benjamin (University of Michigan)
Humans need morality and ethics to get along constructively as members of the same society. As we face the prospect of robots taking a larger role in society, we need to consider how they, too, should behave toward other members of society. To the extent that robots will be able to act as agents in their own right, as opposed to being simply tools controlled by humans, they will need to behave according to some moral and ethical principles. Inspired by recent research on the cognitive science of human morality, we take steps toward an architecture for morality and ethics in robots. As in humans, there is a rapid intuitive response to the current situation. Reasoned reflection takes place at a slower time-scale, and is focused more on constructing a justification than on revising the reaction. However, there is a yet slower process of social interaction, in which examples of moral judgments and their justifications influence the moral development both of individuals and of the society as a whole. This moral architecture is illustrated by several examples, including identifying research results that will be necessary for the architecture to be implemented.
An existing, ecologically-successful genus of collectively intelligent artificial creatures
Kuipers, Benjamin
People sometimes worry about the Singularity [Vinge, 1993; Kurzweil, 2005], or about the world being taken over by artificially intelligent robots. I believe the risks of these are very small. However, few people recognize that we already share our world with artificial creatures that participate as intelligent agents in our society: corporations. Our planet is inhabited by two distinct kinds of intelligent beings --- individual humans and corporate entities --- whose natures and interests are intimately linked. To co-exist well, we need to find ways to define the rights and responsibilities of both individual humans and corporate entities, and to find ways to ensure that corporate entities behave as responsible members of society.