Super-strong robots are on the horizon and they could 'threaten our existence'. Speaking in a new documentary, experts say machines should be classed as an 'invasive species' due to their rapid evolution and increasing ability to consciously make decisions. And worryingly, the concerns are not just long-term, with the experts predicting that the robot takeover could begin 'in the next few years'. The experts' main concern lies in robots developing their own unique understanding of the world, and even developing a consciousness. Owen Holland, professor of cognitive robotics at the University of Sussex, added: 'The whole of our society, our law, our education, is based around consciousness, making conscious decisions, and if we show that actually that's quite trivial and we reproduce it in an afternoon in a lab, then it could make you think "Well, how important is human life?"'
Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, and Marina Jirotka, professor of human-centered computing at Oxford University, believe robots should be fitted with an "ethical black box." This would be the ethical equivalent of the aviation safety measure of the same name, which records a pilot's decisions and enables investigators to reconstruct those actions in the event of an accident. As robots leave the controlled settings of factories and laboratories to interact more with humans, safety measures of this nature will become increasingly important.
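The article does not specify how such a recorder would be built, but the aviation analogy suggests two properties: every decision is logged with its inputs and rationale, and the log is tamper-evident so investigators can trust it after an accident. A minimal sketch of that idea — hypothetical, not Winfield and Jirotka's actual design — might chain each record to the previous one with a hash, as below (all class and field names are illustrative assumptions):

```python
import hashlib
import json
import time


class EthicalBlackBox:
    """Hypothetical append-only decision log for a robot.

    Each record is chained to the previous one via a SHA-256 hash,
    so altering any earlier entry is detectable -- mirroring the
    tamper-evidence expected of an aviation flight recorder.
    """

    GENESIS = "0" * 64  # placeholder hash for the first record

    def __init__(self):
        self.records = []
        self._prev_hash = self.GENESIS

    def log_decision(self, sensor_input, action, rationale):
        """Append one decision record and return its hash."""
        record = {
            "timestamp": time.time(),
            "sensor_input": sensor_input,
            "action": action,
            "rationale": rationale,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._prev_hash = digest
        self.records.append(record)
        return digest

    def verify(self):
        """Re-walk the chain; return False if any record was altered."""
        prev = self.GENESIS
        for rec in self.records:
            if rec["prev_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

An investigator replaying the log could then see not just what the robot did, but what it sensed and why it chose that action; `verify()` confirms the record was not edited after the fact.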
Isaac Asimov gave us the basic rules of good robot behaviour: don't harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots. The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider. Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented "the first step towards embedding ethical values into robotics and AI".
Alas, it seems we've got a few more years before the robots take over. A security robot created by the company Knightscope was patrolling an office complex in Washington D.C. on Monday when it rolled into a fountain and met its untimely demise. The incident went viral on Twitter after Bilal Farooqui, an employee at the Washington Harbour complex, tweeted a photo of the 300-pound android, writing: 'We were promised flying cars, instead we got suicidal robots.' The K5 robot, which measures about five feet tall, is billed as an 'autonomous presence' that can 'guide [itself] through even the most complex environments'. It is rented out to malls, office buildings, and parking lots to enforce order, using a built-in video camera, license plate recognition, and thermal imaging, and it spins around and whistles as it patrols.
Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located at precisely the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches towards the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Then we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often-overlooked possibility of intentional misuse of AI systems and the possible dangers arising out of deliberately unethical design, implementation, and use of autonomous robots.