A large and compelling body of evidence demonstrates that embodiment - the agent's physical setup, including its shape, materials, sensors, and actuators - is constitutive for any form of cognition, and that models of cognition consequently need to be embodied. Unlike the subjects studied by the empirical sciences of cognition, robots can be freely manipulated, and virtually all key variables of their embodiment and control programs can be systematically varied. As such, they provide an extremely powerful tool of investigation. We present a robotic bottom-up, or developmental, approach focusing on three stages: (a) low-level behaviors like walking and reflexes, (b) learning regularities in sensorimotor spaces, and (c) human-like cognition. We also show that robot-based research is not only a productive path to deepening our understanding of cognition, but that robots can strongly benefit from human-like cognition in order to become more autonomous, robust, resilient, and safe.
Robots can now learn how to make decisions and control themselves, generalizing learned behaviors to unseen scenarios. AI-powered robots show particular promise in rough, uncertain environments like the lunar surface. We address this critical generalization aspect of robot locomotion in rough terrain through a training algorithm we have created, the Path Planning and Motion Control (PPMC) Training Algorithm. The algorithm can be coupled with any generic reinforcement learning algorithm to teach robots, on a single neural network, how to respond to user commands and to travel to designated locations. In this paper, we show that the algorithm works independently of robot structure, demonstrating it on a wheeled rover in addition to past results on a quadruped walking robot. Further, we take several big steps towards real-world practicality by introducing rough, highly uneven terrain. Critically, we show through experiments that the robot learns to generalize to new rough terrain maps, retaining a 100% success rate. To the best of our knowledge, this is the first paper to introduce a generic training algorithm that teaches generalized PPMC in rough environments to any robot using only reinforcement learning.
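The core idea the abstract describes - a single policy that conditions on both the current state and a commanded target location, trained with a generic RL algorithm - can be illustrated with a toy sketch. The following is not the authors' PPMC algorithm; it is a minimal, hypothetical goal-conditioned tabular Q-learning example on a small grid, showing how one learned policy can generalize across arbitrary goal commands rather than being trained for a single destination.

```python
import random
import numpy as np

# Hypothetical sketch: goal-conditioned Q-learning on a 5x5 grid.
# The "command" is a goal cell; a single Q-table (standing in for a
# single neural network) covers every (state, goal) combination.
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(pos, a):
    """Apply an action, clipping at the grid boundary."""
    r = min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1)
    return (r, c)

# Q[pos_row, pos_col, goal_row, goal_col, action]
Q = np.zeros((SIZE, SIZE, SIZE, SIZE, 4))

random.seed(0)
for episode in range(3000):
    # Each episode samples a random start and a random goal command,
    # so the policy must generalize across all goals.
    goal = (random.randrange(SIZE), random.randrange(SIZE))
    pos = (random.randrange(SIZE), random.randrange(SIZE))
    for t in range(30):
        if random.random() < 0.2:          # epsilon-greedy exploration
            a = random.randrange(4)
        else:
            a = int(np.argmax(Q[pos[0], pos[1], goal[0], goal[1]]))
        nxt = step(pos, a)
        reward = 1.0 if nxt == goal else -0.01
        # No bootstrap term once the goal is reached (terminal state).
        target = reward + 0.9 * np.max(Q[nxt[0], nxt[1], goal[0], goal[1]]) * (nxt != goal)
        Q[pos[0], pos[1], goal[0], goal[1], a] += 0.5 * (target - Q[pos[0], pos[1], goal[0], goal[1], a])
        pos = nxt
        if pos == goal:
            break

def rollout(start, goal, max_steps=30):
    """Greedily follow the learned policy; return True if the goal is reached."""
    pos = start
    for _ in range(max_steps):
        if pos == goal:
            return True
        a = int(np.argmax(Q[pos[0], pos[1], goal[0], goal[1]]))
        pos = step(pos, a)
    return pos == goal
```

After training, the greedy policy reaches held-out (start, goal) pairs it never saw as exact episode initializations, which is the toy analogue of the generalization claim: one network, many commands. The real system would replace the table with a neural network over terrain observations and use a deep RL algorithm.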
It was before 10 a.m. on a gray summer Sunday, but already a small crowd had gathered outside Penguin Café at the end of a block in residential Tokyo. A woman named Kyoko, dressed in a white T-shirt and apron, unlocked the doors and motioned for everyone to come inside. Half a dozen or so people filed in, several with signature pink dog carriers slung over their shoulders. As more entered, the group clustered at the center of the café. Carefully, they unzipped the mesh panels of their carriers and removed the small white and silver dogs inside, setting them down on the wooden floor.
As much as we'd like to think that we're entering an era of autonomous robots, they're actually still pretty helpless. One way to keep them from falling down all the time could be to borrow a human's fast reflexes. But the human has to feel what the robot is feeling -- and that's just what these researchers are testing. Bipedal robots are excellent in theory for navigating human environments, but they are naturally more prone to falling than quadrupedal or wheeled robots. Although they often have sophisticated algorithms that help keep them upright, in some situations those just might not be enough.
If you've ever thought turning on your microwave or vacuum cleaner was too hard, the solution may be as easy as spending $2,900 on a robotic dog that will do it for you. That's the operating theory behind Aibo, a robotic pet canine created by Sony, which was released last year. Sony has been continually adding features to Aibo, and the latest update will allow the small robotic dog to communicate with a range of household smart appliances to help make life easier for its owners. According to a report from Gizmodo, Sony hosted a demonstration of the new features at CEATEC in Tokyo, Japan's largest IT and electronics trade show. One example showed Aibo communicating wirelessly with a smart microwave, telling it to start cooking a snack as soon as its owners come home from a long day.
You probably picture robots as clodhoppers: ponderous, clunky, even doddery droids that need caffeine, badly. But robots are on the brink of making giant strides. Just ask Columbia University engineering professor Hod Lipson, who writes in Nature that "young animals gallop across fields, climb trees, and immediately find their feet with grace after they fall"--and robots are set to follow suit. A new breed of speedy robots promises to eventually outdo the runners at the 2020 Tokyo Olympics. Notable cybernetic contenders include MIT's dominant Cheetah, Boston Dynamics' Petman and Handle, Michigan Robotics' MABEL, and--further afield in South Africa--the University of Cape Town's Baleka. Plus, that efficiency-geared, Florida-based powerhouse, the Institute for Human & Machine Cognition (IHMC), fields a smart, sensor-free biped plainly called the Planar Elliptical Runner (PER).
GITAI is a robotics startup with offices in Japan and the United States that's developing tech to put humanoid telepresence robots in space to take over for astronauts. Today, GITAI is announcing a joint research agreement with JAXA (the Japan Aerospace Exploration Agency) to see what it takes for robots to be useful in orbit, with the goal of substantially reducing the amount of money spent sending food and air up to those demanding humans on the International Space Station. It's also worth noting that GITAI has some new hires, including folks from the famous (and somewhat mysterious) Japanese bipedal robot company SCHAFT. A quick reminder about SCHAFT: The company was founded by members of the JSK Laboratory at the University of Tokyo in order to build a robot to compete in the DARPA Robotics Challenge Trials in 2013. SCHAFT won the DRC Trials by a substantial margin, scoring 27 points out of a possible 32, 7 more points than the second-place team (IHMC).
Sheer cliff faces present a traversal challenge for most wheeled robots on the market, but researchers at the University of Tokyo say they've developed a two-robot framework that works pretty reliably in their testing. "[We] propose a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether attachment device," the authors explain in a newly published paper on the preprint server arXiv.org. "[It enhances] the poor traversability of the UGV by not only providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrains with the winding of the tether." The UGV is permanently attached via mechanized winch and cable to the UAV, a custom-made quadcopter with an Nvidia Jetson TX2 chipset, a flight controller, and a raft of sensors including a modular fisheye camera, time-of-flight sensor, inertial measurement unit (IMU), and laser sensor.
Sony has made Aibo much harder to resist for people who love chocolate-colored dogs. The tech giant has launched a tricolor version of its robotic canine with two shades of brown, and it's now available for pre-order in Japan. Since it's pretty much just a recolored release with no differences in hardware and software, it also costs 198,000 JPY (US$1,800) like the original Aibo, not including taxes and subscription fees. Sony promises to roll out a new security feature now that it has teamed up with security firm Secom, though. According to Engadget Japanese, the company will offer a 1,480-yen-a-month security feature in Japan starting in June, which will take advantage of the robot's capabilities to recognize faces and to create indoor maps.