A team of Australian researchers has designed a reliable strategy for testing the physical abilities of humanoid robots, robots that resemble the human body in their build and design. Using a blend of machine learning methods and algorithms, the research team succeeded in enabling test robots to react effectively to unknown changes in a simulated environment, improving their odds of functioning in the real world. The findings, published in July in the IEEE/CAA Journal of Automatica Sinica, a joint publication of the IEEE and the Chinese Association of Automation, have promising implications for the broad use of humanoid robots in fields such as healthcare, education, disaster response and entertainment. "Humanoid robots have the ability to move around in many ways and thereby imitate human motions to complete complex tasks. In order to be able to do that, their stability is essential, especially under dynamic and unpredictable conditions," said corresponding author Dacheng Tao, Professor and ARC Laureate Fellow in the School of Computer Science and the Faculty of Engineering at the University of Sydney.
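The article does not spell out the team's training method, but one standard way to make a simulated controller robust to unknown environmental changes is to randomize the simulation's physical parameters during training (often called domain randomization). The sketch below uses an invented one-dimensional balancing task and a simple random search; the task, the numbers, and the function names are illustrative assumptions, not the Sydney team's actual algorithm.

```python
import random

def simulate(gain, mass, steps=200):
    """Toy 1-D balancing task (illustrative, not the paper's model).

    The pole angle theta is unstable (gravity pushes it away from 0);
    the controller applies a restoring torque proportional to -gain * theta.
    Returns the number of steps survived before |theta| exceeds 0.5 rad.
    """
    theta, vel, dt = 0.05, 0.0, 0.02
    for t in range(steps):
        accel = (9.8 - gain) * theta / mass  # unstable when gain < 9.8
        vel += accel * dt
        theta += vel * dt
        if abs(theta) > 0.5:
            return t  # fell over at step t
    return steps

def train(randomize, trials=300, seed=0):
    """Random search over the controller gain.

    With randomize=True, each training rollout draws the pole mass from
    a range, so the best-scoring gain must work across conditions --
    the domain-randomization idea in miniature.
    """
    rng = random.Random(seed)
    best_gain, best_score = 0.0, -1
    for _ in range(trials):
        gain = rng.uniform(0.0, 50.0)
        mass = rng.uniform(0.5, 2.0) if randomize else 1.0
        score = simulate(gain, mass)
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain

robust_gain = train(randomize=True)
```

A controller found this way tends to keep its balance under masses it never saw during training; in a real pipeline the random search would be replaced by the paper's learning algorithm and the toy dynamics by a full humanoid simulator.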
Algorithms play a role in so much of what we see online and in our day-to-day lives, helping out with everything from setting bail to finding recipes. But while the algorithms of the past were painstakingly coded by humans, the algorithms of the future will be built by robots. They'll be better and more efficient, but also nearly impossible for humans to understand.
The "Curly" curling robots are capturing hearts around the world. A product of Korea University in Seoul and the Berlin Institute of Technology, the bots, powered by deep reinforcement learning, slide stones along ice in a winter sport that dates to the 16th century. As much as their technology and their accuracy, which betters that of human experts, impress, a big part of the Curly appeal is how we see the little machines in physical space: the determined manner in which the thrower advances in the arena, smartly raising its head-like cameras to survey the shiny white curling sheet, gently cradling and rotating a rock to begin delivery, releasing deftly at the hog line as a skip watches from the backline, with our hopes. Artificial intelligence (AI) today delivers everything from soup recipes to stock predictions, but most of the technology works out of sight. More visible are the physical robots of various shapes, sizes and functions that embody the latest AI technologies. These robots have generally been helpful, and now they are also becoming a more entertaining and enjoyable part of our lives.
It's fair to say that our world has reached a point where technology is so advanced that robots are almost expected to be lifelike. But what about robots that develop mental illnesses, hallucinations and depression, as human beings do? Is this just science fiction, or can we really expect artificial intelligence to grow even more similar to humans in the not-so-distant future? Back in March, New York University hosted a symposium in New York City called Canonical Computations in Brains and Machines, where a group of neuroscientists and experts in the field of artificial intelligence spoke about overlaps in the ways in which human beings and machines think and process information. According to one of these neuroscientists, Zachary Mainen of the Champalimaud Centre for the Unknown, we might expect advanced machines to soon be able to experience some of the same mental problems that people do. "I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning."
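The reinforcement learning Mainen refers to is, in computational psychiatry, usually modelled around a reward prediction error, the delta term of temporal-difference (TD) learning; distorted prediction-error signalling is one computational account offered for symptoms such as anhedonia or hallucination. The sketch below is a generic textbook TD(0) learner, not a model presented at the symposium, and the symptom mapping it gestures at is purely illustrative.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) step; delta is the reward prediction error."""
    delta = reward + gamma * next_value - value
    return value + alpha * delta, delta

# Two states visited in a loop; entering state 1 pays reward 1.
# The learned values converge toward the Bellman fixed point.
values = [0.0, 0.0]
for _ in range(500):
    values[0], _ = td_update(values[0], 0.0, values[1])
    values[1], delta = td_update(values[1], 1.0, values[0])
```

Computational-psychiatry models perturb pieces of this loop, for example scaling delta or the learning rate alpha, and compare the resulting learning curves with patients' behaviour; run in reverse, that comparison is what suggests an AI system could exhibit analogous failure modes.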