
Do We Need Protection From AI Or AI From Us?


Artificial Intelligence has come a long way from science fiction, now delivering innovative and life-changing solutions in the real world. AI is intelligence exhibited by machines, where machine learning methods teach machines to perform tasks that humans either can't do or that machines handle more efficiently and productively. AI never stands still, and it has become the subject of ethical debates: how do we benefit from AI without harming humanity, and how should we treat an artificial intellect in terms of rights and freedoms? The machines we see today are already capable of performing full-time industrial and non-industrial jobs; they can speak, learn, and even have sexual relationships with humans. These factors raise the question of whether the time has come to give robots legal standing, rights, and freedoms, because at the moment neither law nor physics prevents the creation of a conscious entity.

Google's robots teach themselves to do things and it's terrifying


When it comes to robots replacing humans, we might think we have the upper hand since we're the ones who build and program them, but that's not necessarily the case anymore. Google is taking a different approach to training its robots – it's letting them teach each other. Researchers at Google have released a report showing how they connected 14 robotic arms and used convolutional neural networks to let the arms teach themselves how to pick things up. The approach mimics how young children learn between the ages of one and four, and is essentially helping the robots develop reliable hand-eye coordination. Typically, a robot would be programmed to carry out specific tasks, but this method shows how robots can learn through trial and error in combination with a neural network – much as a child learns how to do something by watching other people.
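Google's actual system is a large-scale deep-learning setup; purely as a loose, hypothetical analogy for the trial-and-error idea, here is a minimal bandit-style sketch in Python. All names and numbers are invented for illustration: the learner discovers which of several grasp angles succeeds most often simply by trying, observing success or failure, and updating its estimates.

```python
import random

random.seed(0)

# Hidden success probability for each candidate grasp angle –
# the "world" the learner must discover by trial and error.
TRUE_SUCCESS = [0.2, 0.5, 0.9, 0.3]

def attempt_grasp(angle_index):
    """Simulate one grasp attempt; True means the grasp succeeded."""
    return random.random() < TRUE_SUCCESS[angle_index]

def learn(trials=2000, epsilon=0.1):
    """Epsilon-greedy trial-and-error: mostly pick the angle that has
    worked best so far, but occasionally explore a random one."""
    estimates = [0.0] * len(TRUE_SUCCESS)
    counts = [0] * len(TRUE_SUCCESS)
    for _ in range(trials):
        if random.random() < epsilon:
            a = random.randrange(len(TRUE_SUCCESS))   # explore
        else:
            a = max(range(len(TRUE_SUCCESS)),         # exploit
                    key=lambda i: estimates[i])
        reward = 1.0 if attempt_grasp(a) else 0.0
        counts[a] += 1
        # Incremental running mean of observed success for this angle.
        estimates[a] += (reward - estimates[a]) / counts[a]
    return estimates

if __name__ == "__main__":
    est = learn()
    best = max(range(len(est)), key=lambda i: est[i])
    print("learned best grasp angle index:", best)
```

After enough attempts, the learned estimates point to the angle with the highest hidden success rate – no one ever programmed that answer in, which is the essence of the trial-and-error learning the report describes (at a vastly smaller scale, and without the neural network or camera input the real robots use).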

Using Artificial Intelligence to Humanize Management and Set Information Free - Reid Hoffman


This essay originally appeared on MIT Sloan Management Review as part of their Frontiers Essay Series. Each essay is a response to this question: "Within the next five years, how will technology change the practice of management in a way we have not yet witnessed?" Artificial Intelligence is about to transform management from an art into a combination of art and science. Not because we'll be taking commands from science fiction's robot overlords, but because specialized AI will allow us to apply data science to our human interactions at work in a way that earlier theorists like Peter Drucker could only imagine. We've already seen the power of specialized AI in the form of IBM's Watson, which trounced the best human players at Jeopardy!, and Google DeepMind's AlphaGo, which recently defeated one of the world's top Go players, Lee Sedol, four games to one.

AI Partnership Launched by Facebook, Google, Amazon, Microsoft, and IBM


Five tech giants announced on Wednesday that they are launching a nonprofit to "advance public understanding" of artificial intelligence and to formulate "best practices on the challenges and opportunities within the field." The Partnership on Artificial Intelligence to Benefit People and Society is being formed by Amazon, Facebook, Google, IBM, and Microsoft, each of which will have a representative on the group's 10-member board. The partnership will conduct research and recommend best practices relating to "ethics, fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology," according to the announcement. "It does not intend to lobby government or other policymaking bodies." "We're in a golden age of machine learning and AI," said Ralf Herbrich, the director of machine learning at Amazon, in a prepared statement.

Controversial AI has been trained to kill humans in a Doom deathmatch


A competition pitting artificial intelligence (AI) against human players in the classic video game Doom has demonstrated just how advanced AI learning techniques have become – but it has also caused considerable controversy. While several teams submitted AI agents for the deathmatch, two students in the US have caught most of the flak after publishing a paper online detailing how their AI bot learned to kill human players in deathmatch scenarios. The computer science students, Devendra Chaplot and Guillaume Lample of Carnegie Mellon University, used deep learning techniques to train their AI bot – nicknamed Arnold – to navigate the 3D environment of the first-person shooter Doom. By playing the game over and over again, Arnold became an expert at fragging its Doom opponents, whether they were other artificial combatants or avatars representing human players. While researchers have previously used deep learning to train AIs to master 2D video games and board games, this research shows that the techniques now extend to 3D virtual environments as well.