There is an indisputable link between Victor Frankenstein's creation (let's try to veer away from the term monster) and Artificial Intelligence. Mary Wollstonecraft Shelley's narrative of the modern Prometheus has travelled through time and space, transcending generations. For me, the classic tale of Frankenstein and his creation is timeless in the true sense of the word: it cannot be bolted down. Born of the growing scientific circles of the era and the mind of an intellectually advanced teenage girl, it boasts post-modern sensibilities and futuristic ideals.
The scientists and engineers spearheading the creation of artificial beings and bionic people are responding to the magnetism of the technological imperative, the pull of a scientific problem as challenging as any imaginable. Fascinating scientific puzzle though it is, the creation of artificial beings is also expected to meet important needs for society and individuals. Industrial robots are already widely used in factories and on assembly lines. Robots for hazardous duty, from dealing with terrorist threats to exploring hostile environments, including distant planets, are in place or on the drawing board. Such duty could include military postings: there is longstanding interest in self-guided battlefield mechanisms that reduce the exposure of human soldiers, and in artificially enhanced soldiers with increased combat effectiveness.
The singularity – or, to give it its proper title, the technological singularity – is an idea that has taken on a life of its own; more of a life, I suspect, than what it predicts ever will. It's a Thing for techno-utopians: wealthy middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some seemingly prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-artificial intelligence – a man-made god that grants transcendence. And then there are the Apocalypsarians, who are equally convinced that a super-intelligent AI will have no interest in curing cancer or old age, or ending poverty, but will – malevolently or maybe just accidentally – bring about the end of human civilisation as we know it.
In a book written in 1964, God and Golem, Inc., Norbert Wiener predicted that the quest to construct computer-modeled artificial intelligence (AI) would come to impinge directly upon some of our most widely and deeply held religious and ethical values. It is certainly true that the idea of mind as artifact, the idea of a humanly constructed artificial intelligence, forces us to confront our image of ourselves. In the theistic tradition of Judeo-Christian culture, a tradition that is, to a large extent, our "fate," we were created in the image of God. Such is the scenario envisaged by some of the classic science fiction of the past, Shelley's Frankenstein, or the Modern Prometheus and the Čapek brothers' R.U.R. (Rossum's Universal Robots) being notable examples. Both seminal works share the view that Pamela McCorduck (1979), in her work Machines Who Think, calls the "Hebraic" attitude toward the AI enterprise. In contrast to what she calls the "Hellenic" fascination with, and openness toward, AI, the Hebraic attitude has been one of fear and warning: "You shall not make for yourself a graven image..." I don't think that the basic outline of Frankenstein needs to be recapitulated here. The possibility of constructing a personal AI raises many ethical fears: perhaps it is the fear that we might succeed, perhaps the fear that we might create a Frankenstein, or perhaps the fear that we might become eclipsed, in a strange Oedipal drama, by our own creation.