DeepMind


Google's DeepMind creates an AI with 'imagination'

#artificialintelligence

Google's DeepMind is developing an AI capable of 'imagination', enabling machines to see the consequences of their actions before they take them. Its attempt to create algorithms that simulate the distinctly human ability to construct a plan could eventually help to produce software and hardware capable of solving complex tasks more efficiently. A video shows an AI agent playing Sokoban without knowing the rules of the game. "This is initial research, but as AI systems become more sophisticated and are required to operate in more complex environments, this ability to imagine could enable our systems to learn the rules governing their environment and thus solve tasks more efficiently," the researchers told WIRED.
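To make the summary concrete, here is a minimal sketch of "imagining before acting": the agent rolls each candidate action forward through an environment model and commits to the one whose simulated future scores best. The grid world, the hand-coded `imagine` model and the `plan` helper below are illustrative stand-ins, not DeepMind's learned architecture.

```python
# Toy lookahead planner: simulate candidate action sequences with a model
# of the environment, then act on the first step of the best imagined plan.
from typing import List, Tuple

State = Tuple[int, int]  # a position on a toy grid

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def imagine(state: State, action: str) -> State:
    """Toy environment model: predict the next state an action leads to."""
    dx, dy = MOVES[action]
    return (state[0] + dx, state[1] + dy)

def reward(state: State, goal: State) -> float:
    """Negative Manhattan distance: imagined states nearer the goal score higher."""
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def plan(state: State, goal: State, actions: List[str], depth: int) -> str:
    """Imagine every action sequence `depth` steps ahead; return the best first action."""
    def best_value(s: State, d: int) -> float:
        if d == 0:
            return reward(s, goal)
        return max(best_value(imagine(s, a), d - 1) for a in actions)
    return max(actions, key=lambda a: best_value(imagine(state, a), depth - 1))

print(plan((0, 0), goal=(3, 0), actions=list(MOVES), depth=3))  # -> "right"
```

The `depth` parameter bounds how far ahead the agent imagines; the hard part, and the point of DeepMind's research, is learning something like `imagine` from experience rather than hand-coding it.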


Creative Robots? Google's DeepMind Artificial Intelligence Is Getting Imagination

International Business Times

Artificial intelligence is gaining momentum and is being used increasingly across industries, but more importantly, the lines between artificial and human intelligence are getting blurred. Google is blurring them even further: its AI lab DeepMind is working on endowing artificial intelligence with imagination, which would open vast possibilities for the technology -- AI would be able to reason through decisions, make plans for the future, and even dream. It is developing a skill set to navigate complex situations and increase adaptability.


Vicarious gets another $50 million to expand its research team and build smarter robots

#artificialintelligence

D. Scott Phoenix and Dileep George co-founded Vicarious back in 2010. Phoenix previously founded Frogmetrics, a customer-feedback platform, while George founded Numenta, another R&D-heavy AI startup. Vicarious is searching for the holy grail of artificial intelligence -- generalized intelligence. Supposedly, by maintaining an abstract conceptual understanding of cause-and-effect relationships within the task at hand, Vicarious' Schema Networks can run effectively in different environments without retraining.


Artificial intelligence is not as smart as you (or Elon Musk) think

#artificialintelligence

It represented one of those defining technological moments, not unlike IBM's Deep Blue beating chess champion Garry Kasparov, or IBM Watson beating the world's greatest Jeopardy! champions in 2011. Former MIT robotics professor Rodney Brooks, who was one of the founders of iRobot and later Rethink Robotics, reminded us at the TechCrunch Robotics Session at MIT last week that training an algorithm to play a difficult strategy game isn't intelligence, at least as we think about it with humans. Gil Pratt, CEO of the Toyota Institute, a group inside Toyota working on artificial intelligence projects including household robots and autonomous cars, said in an interview at the TechCrunch Robotics Session that the fear we are hearing about from a wide range of people, including Elon Musk, who most recently called AI "an existential threat to humanity," could stem from science-fiction dystopian descriptions of artificial intelligence run amok. Physicist Stephen Hawking and philosopher Nick Bostrom have also expressed reservations about the potential impact of AI on humankind -- but chances are they are talking about the more generalized artificial intelligence being studied in labs at the likes of Facebook AI Research, DeepMind and Maluuba, rather than the narrower AI we are seeing today.


Google's DeepMind creates AI with an 'imagination'

Daily Mail

Both games require forward planning and reasoning, making them the perfect environment to test agents' abilities. DeepMind researchers trained a number of simulated bodies, including a headless 'walker,' a four-legged 'ant,' and a 3D humanoid, to learn more complex behaviours as they carry out different locomotion tasks. The results, while comical, show how these systems can learn to improve their own techniques as they interact with different environments, eventually allowing them to run, jump, crouch and turn as needed. The approach relies on a reinforcement learning algorithm, developed using components from several recent deep learning systems.
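As a rough illustration of the trial-and-error learning involved, the toy script below runs REINFORCE, one of the simplest reinforcement learning algorithms, on a tiny chain-walking task. The chain environment, reward shaping and learning rate are invented for this example; DeepMind's locomotion agents use far richer simulated bodies and a more sophisticated distributed deep-RL setup.

```python
# Minimal REINFORCE: roll out episodes, then nudge the policy toward
# whatever actions led to higher total reward.
import numpy as np

rng = np.random.default_rng(0)
n_states, goal = 5, 4            # positions 0..4 on a chain; reach the right end
theta = np.zeros((n_states, 2))  # policy logits: one row per state, two actions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for episode in range(2000):
    s, traj = 0, []
    for _ in range(20):                       # roll out one episode
        p = softmax(theta[s])
        a = rng.choice(2, p=p)                # 0 = step left, 1 = step right
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == goal else -0.01  # small step cost shapes behaviour
        traj.append((s, a, r))
        s = s_next
        if s == goal:
            break
    G = sum(r for _, _, r in traj)            # total episode return
    for s_t, a_t, _ in traj:                  # update: theta += lr * G * grad log pi
        grad = -softmax(theta[s_t])
        grad[a_t] += 1.0
        theta[s_t] += 0.05 * G * grad

print(np.argmax(theta, axis=1))  # 1 = "step right"; the goal state's row is never updated
```

The loop structure -- act, tally the reward, nudge the policy toward whatever paid off -- is the same basic mechanism that lets the simulated bodies gradually discover running, jumping and turning.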


Google's DeepMind made an AI that can imagine the future

#artificialintelligence

Recently, DeepMind's founder Demis Hassabis wrote a paper published in Neuron about how the development of general-purpose AI depends on understanding and encoding human abilities like imagination, curiosity, and memory into AI. The I2A 'agents' in the papers were tested in different situations to probe their predictive abilities, "including the puzzle game Sokoban and a spaceship navigation game." When the researchers added a 'manager' component that helped create a plan, the agent "learns to solve tasks even more efficiently with fewer steps." As Hassabis wrote in the Neuron paper, creating agents with an imagination that can rival what we can do "is perhaps the hardest challenge for AI research: to build an agent that can plan hierarchically, is truly creative, and can generate solutions to challenges that currently elude even the human mind."


DeepMind researchers create AI with an 'imagination'

Engadget

To construct and evaluate future plans, the I2As "imagine" actions and outcomes in sequence before deciding which plan to execute. A third option allows the I2As to create an "imagination tree," which lets the agent choose to continue imagining from any imagined situation created since the last action it took. For both tasks, the I2As performed better than agents without future-reasoning abilities, learned from less experience and were better able to handle imperfect environments. When it comes to planning ability and future reasoning, there's still a lot of work to be done, but this first look is a promising step towards imaginative AI.
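A toy version of that "imagination tree" fits in a few lines: keep a frontier of imagined states, repeatedly imagine onward from the most promising one, then take the real action that began the best imagined path. The number-line environment and hand-coded `imagine` and `score` functions below are hypothetical stand-ins for the learned neural environment model the I2As actually use.

```python
# Toy imagination tree: a priority queue of imagined states, expanded
# best-first, where each entry remembers the real action that started it.
import heapq

ACTIONS = ["left", "right"]

def imagine(state: int, action: str) -> int:
    """Stand-in environment model on a number line (I2A learns this part)."""
    return state + (1 if action == "right" else -1)

def score(state: int, goal: int) -> float:
    """How promising an imagined state looks: closer to the goal is better."""
    return -abs(state - goal)

def imagination_tree_act(state: int, goal: int, budget: int = 8) -> str:
    # Frontier entries: (negated score for the min-heap, imagined state,
    # first real action that started this imagined path).
    frontier = []
    for a in ACTIONS:
        s = imagine(state, a)
        heapq.heappush(frontier, (-score(s, goal), s, a))
    best = min(frontier)  # smallest negated score = highest score
    for _ in range(budget):
        _, s, first = heapq.heappop(frontier)  # most promising imagined state
        for a in ACTIONS:                      # imagine one step onward from it
            s2 = imagine(s, a)
            node = (-score(s2, goal), s2, first)
            heapq.heappush(frontier, node)
            best = min(best, node)
    return best[2]  # commit to the first action of the best imagined path

print(imagination_tree_act(state=0, goal=3))  # -> "right"
```

Spending the imagination budget on whichever imagined state currently looks best, rather than rolling every plan out to a fixed depth, is what the tree strategy buys.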


Should we be worried about AI?

#artificialintelligence

However, the development of an artificial general intelligence, or AGI, opens up potential risks. There are fundamental differences between today's AIs and AGIs, the primary difference lying in how computers and humans operate. Perhaps the world's most famous scientist, Stephen Hawking, expressed similar concerns, stating that artificial intelligence could "spell the end of the human race." One position is shared by many in the computer science field: AGI systems that could pose a threat to mankind are so far from being developed that they're not worth worrying about.


Career of the Future: Robot Psychologist

Wall Street Journal

One subset that has taken off is neural networks, systems that "learn" as humans do through training, turning experience into networks of simulated neurons. "A big problem is people treat AI or machine learning as being very neutral," said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. "And a lot of that is people not understanding that it's humans who design these models and humans who choose the data they are trained on." It is a difficult enough problem to crack that the Defense Advanced Research Projects Agency, better known as Darpa, is funding researchers working on "explainable artificial intelligence." Here's why we're in this pickle: A good way to solve problems in computer science is for engineers to code a neural network--essentially a primitive brain--and train it by feeding it enormous piles of data.
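At its smallest scale, the "code a neural network and feed it piles of data" recipe the article describes looks like the sketch below: a two-layer network trained by backpropagation on the four rows of the XOR truth table. The layer sizes, learning rate and task are illustrative choices, not any particular production system.

```python
# A minimal neural network: two layers of simulated "neurons" trained by
# backpropagation to reproduce the XOR truth table.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden "neuron" activations
    out = sigmoid(h @ W2 + b2)          # the network's predictions
    delta = out - y                     # output error (cross-entropy gradient)
    dh = (delta @ W2.T) * h * (1 - h)   # backpropagate the error to the hidden layer
    W2 -= 0.5 * (h.T @ delta)
    b2 -= 0.5 * delta.sum(axis=0)
    W1 -= 0.5 * (X.T @ dh)
    b1 -= 0.5 * dh.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
# should come out close to [0, 1, 1, 0]
```

The opacity that Darpa's "explainable artificial intelligence" program targets is visible even at this scale: after training, the weight matrices solve the task, but nothing in them explains how.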