A Question of Responsibility

Waldrop, M. Mitchell

AI Magazine 

In 1940, a 20-year-old science fiction fan from Brooklyn found that he was growing tired of stories that endlessly repeated the myths of Frankenstein and Faust: robots were created and destroyed their creator; robots were created and destroyed their creator; robots were created and destroyed their creator, ad nauseam. So he began writing robot stories of his own. "[They were] robot stories of a new variety," he recalls. "Never, never was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their 'brains' from the moment of construction."

In particular, he imagined that each robot's artificial brain would be imprinted with three engineering safeguards, three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The young writer's name, of course, was Isaac Asimov (1964), and the robot stories he began writing that year have become classics of science fiction, the standards by which others are judged. Indeed, because of Asimov one almost never reads about robots turning mindlessly on their masters anymore. But the legends of Frankenstein and Faust are subtle ones, and as the world knows too well, engineering rationality is not always the same thing as wisdom.

M. Mitchell Waldrop is a reporter for Science Magazine, 1333 H Street N.W., Washington, D.C. 20005. Reprinted by permission of the publisher.
