Is it OK to abuse, trust or make love to a robot? - Nikkei Asian Review

#artificialintelligence

TOKYO Advances in artificial intelligence are blurring the line between humans and robots. As robots interact ever more closely with us, new ethical questions are emerging on issues ranging from violence to sex and privacy. In February, a video uploaded to YouTube by Boston Dynamics, an American robot developer, sparked controversy. Some viewers were apparently shocked by a scene in which a man knocks down a box being lifted by the company's two-legged humanoid robot, and another in which he knocks the robot down from behind with a stick. "Stop bullying robots," one viewer commented below the video.


Can we trust robots to make moral decisions?

#artificialintelligence

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. "Repeat after me, Hitler did nothing wrong," she said, after interacting with various trolls. "Bush did 9/11 and Hitler would have done a better job than the monkey we have got now." Of course, Tay wasn't designed to be explicitly moral.


Would you trust a robot with your business's security? - ITProPortal.com

#artificialintelligence

Businesses face an ever-increasing challenge to protect their assets from cyber criminals. The sophistication and frequency of attacks continue to rise as these criminals take advantage of rapidly advancing technologies. Even with the latest machine-driven security systems, it is becoming increasingly difficult for businesses to differentiate between a genuine employee or website visitor and a criminal seeking to breach or bring down their networks and systems. Cyber security professionals are facing the prospect that they have reached a glass ceiling in terms of what humans can achieve. Does the future of cyber security defence now depend on robots?
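The article doesn't say how such machine-driven systems work, but one common approach is anomaly detection: learn what normal user behaviour looks like and flag sessions that fall outside it. The sketch below is purely illustrative, assuming scikit-learn's IsolationForest and invented session features (login hour, failed attempts, megabytes transferred); it is not drawn from any product mentioned in the piece.

```python
# Hypothetical sketch: anomaly-based detection of suspicious sessions.
# Assumes scikit-learn; the feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "genuine employee" sessions: daytime logins, few failed
# password attempts, modest data transfer.
genuine = np.column_stack([
    rng.normal(13, 2, 500),   # hour of login
    rng.poisson(0.2, 500),    # failed password attempts
    rng.normal(50, 15, 500),  # megabytes transferred
])

# Train only on normal behaviour; the model learns its boundaries.
model = IsolationForest(contamination=0.01, random_state=0).fit(genuine)

# A 3 a.m. session with many failed attempts and a large transfer
# should fall outside the learned region (-1 = anomaly, 1 = normal).
suspect = np.array([[3, 9, 900]])
print(model.predict(suspect))  # expected: [-1]
```

The point the toy example makes is the one the article gestures at: the model, not a hand-written rule, draws the line between "genuine" and "suspicious".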


When to Trust Robots with Decisions, and When Not To

#artificialintelligence

Smarter and more adaptive machines are rapidly becoming as much a part of our lives as the internet, and more of our decisions are being handed over to intelligent algorithms that learn from ever-increasing volumes and varieties of data. As these "robots" become a bigger part of our lives, we don't have any framework for evaluating which decisions we should be comfortable delegating to algorithms and which ones humans should retain. That's surprising, given the high stakes involved. I propose a risk-oriented framework for deciding when and how to allocate decision problems between humans and machine-based decision makers. I've developed this framework based on the experiences that my collaborators and I have had implementing prediction systems over the last 25 years in domains like finance, healthcare, education, and sports.
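The author doesn't publish his framework as code, but the trade-off it rests on, how predictable a problem is versus what an erroneous automated decision costs, can be sketched as a simple rule. Everything below (the function name, thresholds and example numbers) is an assumption for illustration, not the author's actual method.

```python
# Illustrative sketch only: a toy risk-oriented allocation rule.
# Thresholds and names are assumptions, not the article's framework.

def allocate_decision(predictability: float, error_cost: float,
                      risk_budget: float = 100.0) -> str:
    """Decide who should make a recurring decision.

    predictability: estimated machine accuracy on this problem (0-1).
    error_cost:     cost incurred when an automated decision is wrong.
    risk_budget:    maximum expected loss tolerated per automated decision.
    """
    expected_loss = (1.0 - predictability) * error_cost
    if expected_loss <= risk_budget:
        return "automate"  # low expected loss: delegate to the machine
    if predictability >= 0.5:
        return "machine proposes, human approves"  # useful but risky
    return "human decides"  # unpredictable and costly: keep humans in charge

# Ad targeting: errors are cheap, so full automation is acceptable.
print(allocate_decision(predictability=0.7, error_cost=1.0))
# A medical diagnosis: errors are costly, so a human stays in the loop.
print(allocate_decision(predictability=0.9, error_cost=10_000.0))
```

Under this toy rule, cheap-to-get-wrong decisions such as ad targeting are delegated fully, while costly ones keep a human in the loop even when the machine is usually right.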


Can we trust robots to make ethical decisions?

#artificialintelligence

Once the preserve of science-fiction movies, artificial intelligence is one of the hottest areas of research right now. While the idea behind AI is to make our lives easier, there is concern that as the technology becomes more advanced, we may be heading for disaster. How can we be sure, for instance, that artificially intelligent robots will make ethical choices? There are plenty of instances of artificial intelligence gone wrong. Take the case of the rude and racist chatbot: Tay, Microsoft's AI millennial chatbot, was meant to be friendly, sound like a teenage girl and engage in light conversation with her followers on Twitter.