"You are worse than a fool; you have no care for your species. For thousands of years men dreamed of pacts with demons. Only now are such things possible." When William Gibson wrote those words in his groundbreaking 1984, novel Neuromancer, artificial intelligence remained almost entirely within the realm of science fiction. Today, however, the convergence of complex algorithms, big data, and exponential increases in computational power has resulted in a world where AI raises significant ethical and human rights dilemmas, involving rights ranging from the right to privacy to due process.
Professor Stephen Hawking has warned that artificial intelligence could develop a will of its own that conflicts with that of humanity. It could herald dangers like powerful autonomous weapons and new ways for the few to oppress the many, he said, as he called for more research in the area. But if sufficient research is done to avoid the risks, it could help humanity 'finally eradicate disease and poverty', he added. He was speaking in Cambridge at the launch of the Leverhulme Centre for the Future of Intelligence, which will explore the implications of the rapid development of artificial intelligence. All great achievements of civilisation, from learning to master fire to learning to grow food to understanding the cosmos, were down to human intelligence, he said.
Producers had told him that if he could design a creature they wanted to feature in a script, they'd let him play the part--and now Prohaska asked series creator Gene Roddenberry, story editor Dorothy Fontana, and writer Gene L. Coon to come outside. A few days later, Fontana says, they had the script for "The Devil in the Dark," which introduced the beloved Horta, played by Prohaska in his rubbery suit. Gene L. Coon was telling the kind of stories that Gene Roddenberry wanted to see, but he was telling them with more heart. One early Star Trek script showed Kirk killing an evolving life form--something Coon strongly objected to, according to Andreea Kindryd, then his production secretary.
While there doesn't appear to be any hard data on the subject, security experts and law enforcement officials said they couldn't recall another time when police had deployed a robot with lethal intent. Meanwhile, militaries around the world have come to rely on robots to disable improvised explosive devices -- a need that only increased with the U.S. occupation of Iraq following its 2003 invasion. One robot developed by China's National Defense University, called AnBot, has been designed for "an important role in enhancing the country's anti-terrorism and anti-riot measures," according to the university's website. A 2014 report by Human Rights Watch and Harvard Law School's International Human Rights Clinic raised concerns about the use of fully autonomous weapons in law enforcement operations.