MOSCOW – Russia on Thursday launched an unmanned rocket carrying a life-size humanoid robot that will spend 10 days learning to assist astronauts on the International Space Station. Named Fedor, for Final Experimental Demonstration Object Research, and carrying the identification number Skybot F850, the robot is the first ever sent up by Russia. Fedor blasted off in a Soyuz MS-14 spacecraft at 6:38 a.m. The Soyuz is set to dock with the space station on Saturday and stay until Sept. 7. Soyuz ships are normally manned on such trips, but on Thursday no humans were aboard, in order to test a new emergency rescue system. Instead of cosmonauts, Fedor was strapped into a specially adapted pilot's seat, with a small Russian flag in his hand.
Sorry, science fiction fans, but the "replicants" of the Blade Runner saga and the "terminators" of the eponymous action movie franchise are not on the horizon. "Don't imagine human-like, humanoid robots when you think of the future of robots," said Kim Sang-bae, the world-renowned robot scientist who developed a four-legged walking robot called "Cheetah," which has gained widespread media exposure. Not only is it impossible to develop human-like robots now, it may remain impossible in the future, according to Kim, a mechanical engineering professor at the Massachusetts Institute of Technology. While the ultimate stage of robotics may, indeed, be creating machines that can think and work on their own, there is a yawning gap between where robot technologies stand now and that final-stage development. In an interview with Asia Times, Kim predicted that the robot industry would continue to expand by creating robots that can do very specific things better than humans.
When DataGrid, Inc. announced it had successfully developed an AI system capable of generating high-quality photorealistic Japanese faces, it was impressive. But now the company has gone even further. Its artificial intelligence (AI) system can now create not only faces and hair from a variety of ethnicities, but bodies that can move and wear any outfit. While these images are fictitious, they are incredibly photorealistic. AI Can Now Create Artificial People – What Does That Mean for Humans?
The Pentagon is set to award a $10bn "war cloud" contract to a technology company next month, with both Amazon and Microsoft competing for the chance to build a military-grade AI computing system. The Joint Enterprise Defence Infrastructure (Jedi) plan faces a number of obstacles before the US defence department makes its decision next month, not least from within the companies' own work forces. Microsoft employees published an open letter on Medium last year, pleading with the tech giant not to bid on the Jedi contract. "Many Microsoft employees don't believe that what we build should be used for waging war," they wrote.
Technological advancements in AI are leading to the development of a true human robot, but this may not be something we need, writes Paul Budde. RECENTLY, I WENT to a lecture organised by the University of Sydney titled 'Why should the perfect robot look and think just like a human?' I was intrigued and perhaps even a bit dismayed by this title, as I strongly believe that this is not the best direction for robotics. Furthermore, such a new human species will most likely never be developed, certainly not within the next few generations. Beyond that, humanity might perhaps arrive at a stage where we have gathered sufficient intellect and wisdom to develop robots within the restrictions of what we as a society see fit.
If watching the Terminator left you cowering behind the sofa, you are not alone. When it comes to cyborgs, robots and AI-powered digital helpers, the creepiest ones are those which look most like us. A study about artificial intelligence has found 'humanoid' robots like C-3PO in Star Wars are really quite likeable. But if they look too much like people, robots tend to be disliked and mistrusted - perhaps because of a fear they could replace us in a dystopian future. This phenomenon is known as the 'uncanny valley' - the unsettling feeling we get from robots and digital agents that are human-like but still somehow different.
Google CEO Sundar Pichai gave a surprising interview recently. Asked by CNN's Poppy Harlow about a Brookings report predicting that 80 million American jobs would be lost by 2030 because of artificial intelligence, Pichai said, "It's a bit tough to predict how all of this will play out." Pichai seemed to say the future is uncertain, so there's no sense in solving problems that may not occur. He added that Google could deal with any disruption caused by its technology development by "slowing down the pace" -- as if Google could manage disruption merely by pacing itself, and as if no disruption were imminent. The term "artificial intelligence" often prompts this kind of hand-waving.
Professor Mutlu discusses design thinking at a high level and how design relates to science, and he speaks about the main areas of his work: the design space, the evaluation space, and how features are used within a context. He also gives advice on how to apply a design-oriented mindset. Bilge Mutlu is an Associate Professor of Computer Science, Psychology, and Industrial Engineering at the University of Wisconsin–Madison. He directs the Wisconsin HCI Laboratory and organizes the WHCI D Group. He received his PhD degree from Carnegie Mellon University's Human-Computer Interaction Institute.
One of the biggest challenges in robotics is interacting under uncertainty. Unlike robots, humans learn, adapt and perceive their body as a unity when interacting with the world. We hypothesize that the nervous system counteracts sensor and motor uncertainties through unconscious processes that robustly fuse the available information to approximate the state of the body and the world. Uniting perception and action under a common principle has been sought for decades, and active inference is one of the candidate unifying theories. In this work, we present a humanoid robot that interacts with the world by means of a perception and control algorithm inspired by the human brain and based on the free-energy principle. Until now, active inference had been tested only in simulation; its application on a real robot demonstrates the advantages of such an algorithm for real-world use. The humanoid robot iCub performed robust reaching behaviors with both arms and active head tracking of objects in the visual field, despite visual noise, artificially introduced noise in the joint encoders (up to 40 degrees of deviation), discrepancies between the model and the real robot, and misdetections of the hand.
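The core idea of free-energy-based perception — fusing a noisy sensor reading with a prior belief, each weighted by its precision — can be sketched in one dimension. This is an illustrative toy, not the paper's iCub implementation; the function name, learning rate and all numbers below are assumptions for the example.

```python
# Minimal predictive-coding sketch of free-energy minimisation: a 1-D belief
# mu about a joint angle is updated by gradient descent on the free energy
#   F = (s - mu)^2 / (2*var_s) + (mu - mu_prior)^2 / (2*var_p),
# fusing a noisy encoder reading s with a prior expectation mu_prior.

def update_belief(mu, s, mu_prior, var_s, var_p, lr=0.1, steps=200):
    """Descend the gradient of F with respect to mu."""
    for _ in range(steps):
        eps_s = s - mu            # sensory prediction error
        eps_p = mu - mu_prior     # prior prediction error
        # dF/dmu = -eps_s/var_s + eps_p/var_p, so step against it:
        mu += lr * (eps_s / var_s - eps_p / var_p)
    return mu

# A noisy encoder reads 40 deg while the prior expects 10 deg. The sensor is
# four times more precise (var 1 vs 4), so the belief settles near the
# precision-weighted mean: (40*1.0 + 10*0.25) / 1.25 = 34 deg.
belief = update_belief(mu=10.0, s=40.0, mu_prior=10.0, var_s=1.0, var_p=4.0)
```

The fixed point of the update is exactly the precision-weighted average of sensor and prior, which is why such schemes degrade gracefully as encoder noise grows: a noisier channel simply receives less weight rather than breaking the estimate outright.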
OXFORD, ENGLAND - Wearing a white blouse, her dark hair hanging loose, Ai-Da looks like any artist at work as she studies her subject and puts pencil to paper. But the beeping from her bionic arm gives her away -- Ai-Da is a robot. Described as "the world's first ultra-realistic AI humanoid robot artist," Ai-Da opens her first solo exhibition of eight drawings, 20 paintings, four sculptures and two video works next week, bringing "a new voice" to the art world, her British inventor and gallery owner Aidan Meller said. "The technological voice is the important one to focus on because it affects everybody," he said at a preview. "We've got a very clear message we want to explore: the uses and abuses of AI today, because this next decade is coming in dramatically and we're concerned about that and we want to have ethical considerations in all of that."