Artificial Intelligence has come a long way from science fiction to delivering innovative, life-changing solutions in the real world. AI is intelligence exhibited by machines, in which machine learning methods teach machines to perform tasks that humans either cannot do or that machines do more efficiently and productively. AI never stands still, and it has become the subject of ethical debate: how can we exploit AI without harming humanity, and how should we treat an artificial intellect in terms of rights and freedoms? The machines we see today are already capable of performing full-time industrial and non-industrial jobs; they can speak, learn, and even have sexual relationships with humans. These developments raise the question of whether the time has come to institutionalize robots and grant them rights and freedoms, because at the moment neither law nor physics prevents the creation of a conscious entity.
This essay originally appeared on MIT Sloan Management Review as part of their Frontiers Essay Series. Each essay is a response to this question: "Within the next five years, how will technology change the practice of management in a way we have not yet witnessed?" Artificial Intelligence is about to transform management from an art into a combination of art and science. Not because we'll be taking commands from science fiction's robot overlords, but because specialized AI will allow us to apply data science to our human interactions at work in a way that earlier theorists like Peter Drucker could only imagine. We've already seen the power of specialized AI in the form of IBM's Watson, which trounced the best human players at Jeopardy!, and Google DeepMind's AlphaGo, which recently defeated one of the world's top Go players, Lee Sedol, four games to one.
A competition pitting artificial intelligence (AI) against human players in the classic video game Doom has demonstrated just how advanced AI learning techniques have become – but it has also caused considerable controversy. While several teams submitted AI agents for the deathmatch, two students in the US have caught most of the flak after publishing a paper online detailing how their AI bot learned to kill human players in deathmatch scenarios. The computer science students, Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University, used deep learning techniques to train their AI bot – nicknamed Arnold – to navigate the 3D environment of the first-person shooter Doom. By playing the game over and over again, Arnold became an expert at fragging its Doom opponents, whether they were other artificial combatants or avatars representing human players. While researchers have previously used deep learning to train AIs to master 2D video games and board games, the research shows that the techniques now also extend to 3D virtual environments.
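The training process described above – an agent improving by replaying the game and updating its action values from reward signals – follows the standard reinforcement-learning pattern. A minimal tabular Q-learning sketch on a toy corridor environment illustrates the idea; note that Arnold's actual agent learns a deep network over raw Doom pixels, and the environment, hyperparameters, and names below are illustrative assumptions only:

```python
import random

# Toy stand-in environment: a 5-cell corridor with a reward at the far end.
# (The real environment is Doom's 3D pixel input; this grid is illustrative.)
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0=left, 1=right

def step(state, action):
    """Move left/right, clamped to the corridor; reward 1 on reaching GOAL."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def pick(state):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < EPS or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 1 if Q[state][1] > Q[state][0] else 0

random.seed(0)
for episode in range(200):          # "playing the game over and over again"
    s = 0
    for _ in range(100):            # cap episode length
        a = pick(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q[s][a] toward reward + discounted best next value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) * (not done) - Q[s][a])
        s = s2
        if done:
            break

# The learned greedy policy heads right, toward the reward, from every cell.
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(GOAL)]
```

The same learn-from-repetition loop scales up when the table of Q-values is replaced by a neural network that maps screen pixels to action values, which is the essence of the deep learning approach the students applied.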
If you've read anything about technology in the news recently, you might be inclined to think that Artificial Intelligence (AI) is poised to take over the world, leaving us mere mortals to find new ways to earn a crust. Indeed, the plethora of new services, even turning AI into its own service (AIaaS – AI as a Service), will start to redefine the way we communicate, work, and experience the world. Google's CEO Sundar Pichai has been bold enough to say, "In the long run, we're evolving in computing from a 'mobile-first' to an 'AI-first' world."1 There's a lot that needs to happen before AI can automate entire professions, and research suggests that jobs requiring human interaction (thankfully, HR is one of these) will be the hardest to replace.2 As with any broad-scale changes that affect people's livelihoods and the economies that we rely on for stability, views about the future impacts are many and varied.
A UK parliamentary committee has urged the government to act proactively -- and to act now -- to tackle "a host of social, ethical and legal questions" arising from the rise of autonomous technologies such as artificial intelligence. "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now," says the committee. "Not only would this help to ensure that the UK remains focused on developing 'socially beneficial' AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time." The committee kicked off an enquiry into AI and robotics this March, going on to take 67 written submissions and hear from 12 witnesses in person, in addition to visiting Google DeepMind's London office. Publishing its report into robotics and AI today, the Science and Technology Committee flags up several issues that it says need "serious, ongoing consideration". "[W]itnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed," it notes in the report.