Ford is experimenting with four-legged robots to scout factories, aiming to save time and money. The Ford Media Center presented the procedure on 26 July 2020 as follows: "Ford is tapping four-legged robots at its Van Dyke Transmission Plant in early August to laser scan the plant, helping engineers update the original computer-aided design which is used when we are getting ready to retool our plants. These robots can be deployed into tough-to-reach areas within the plant to scan the area with laser scanners and high-definition cameras, collecting data used to retool plants, saving Ford engineers time and money. Ford is leasing two robots, nicknamed Fluffy and Spot, from Boston Dynamics – a company known for building sophisticated mobile robots."
Ford Motor Co. digital engineer Paula Wiebelhaus takes Fluffy outside for walks in the yard. Her cats hide from him. He has his own spot in a corner of the bedroom where, after a run, he plugs into his charger to reenergize. Fluffy is no typical dog. The bright-yellow creature is a nimble four-legged robot adopted by the Dearborn automaker to crawl around its facilities and take 3D laser images that engineers use to redesign and retool its plants. Using the robotic dog is less clunky than the traditional approach, could save time and money, and may help bring new products to market sooner, said Mark Goderis, digital engineering manager at Ford's Advanced Manufacturing Center.
Based in Hangzhou, China, Unitree Robotics was founded in 2017 by Xing Wang with the mission of making legged robots as popular and affordable as smartphones and drones are today. In a showcase of the heavily Boston Dynamics-inspired company's recent progress, it has released a video showing its four-legged robot A1 balancing in a yoga-like pose. "Marc Raibert … is my idol," Wang once told IEEE Spectrum about the president and founder of Boston Dynamics. While the famous robotics company serves as inspiration for Unitree Robotics, the Chinese company wants to "make quadruped robots simpler and smaller, so that they can help ordinary people with things like carrying objects or as companions," Wang told IEEE Spectrum. To build this accessibility into its A1 robot, Unitree Robotics made it weigh only 12 kg -- just under half the weight of Boston Dynamics' Spot robot, which weighs 25 kg.
Robots – some of us love them, some of us worry they are going to take our jobs, and some of us are worried they are going to take over the world. Because of increasingly sophisticated manufacturing processes and increasingly powerful AI, robots – humanoid or otherwise – are becoming more capable, useful, and – yes – sometimes scary. The first commercial, industrial robots were typically fixed in place and focused on a single task, often on a production line. Today, however, we're increasingly likely to find them walking on two legs just like us, moving around on caterpillar tracks, or even flying through the skies. Some of them are put to work carrying out boring, repetitive, or dangerous tasks to save us from having to do them ourselves.
While four-legged robots can achieve impressive feats, like pulling an airplane or climbing a fence, they still have a few limitations. In most cases, they need a fairly large surface to walk on. A team of Italian robotics researchers is looking to change that. They've created a robotic controller that allows a quadruped robot to walk across a thin beam. The four-legged robot can balance on just two legs and walk heel-toe.
Robotmaker Boston Dynamics has finally put its four-legged robot Spot on general sale. After years of development, the company began leasing the machine to businesses last year, and, as of today, is now letting any US firm buy their very own Spot for $74,500. It's a hefty price tag, equal to the base price for a luxury Tesla Model S. But Boston Dynamics says, for that money, you're getting the most advanced mobile robot in the world, able to go pretty much anywhere a human can (as long as there are no ladders involved). Although Spot is certainly nimble, its workload is mostly limited right now to surveying and data collection. Trial deployments have seen Spot create 3D maps of construction sites and hunt for machine faults in offshore oil rigs.
Whether it's a dog chasing after a ball, or a monkey swinging through the trees, animals can effortlessly perform an incredibly rich repertoire of agile locomotion skills. But designing controllers that enable legged robots to replicate these agile behaviors can be a very challenging task. The superior agility seen in animals, as compared to robots, might lead one to wonder: can we create more agile robotic controllers with less effort by directly imitating animals? In this work, we present a framework for learning robotic locomotion skills by imitating animals. Given a reference motion clip recorded from an animal (e.g. a dog), our framework uses reinforcement learning to train a control policy that enables a robot to imitate the motion in the real world.
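The core idea above – reward a control policy for staying close to a reference motion clip – can be sketched with a simple pose-tracking reward. The function below is an illustrative toy, not the paper's actual reward; the `scale` parameter and the plain squared-error form are assumptions chosen for clarity.

```python
import numpy as np

def pose_tracking_reward(joint_angles, ref_angles, scale=5.0):
    """Reward in (0, 1]: exponential of negative squared tracking error.

    A common shaping in motion-imitation RL: the closer the robot's joint
    angles are to the current frame of the reference clip, the higher the
    reward. `scale` (an illustrative value) sets how sharply it falls off.
    """
    err = np.sum((np.asarray(joint_angles) - np.asarray(ref_angles)) ** 2)
    return float(np.exp(-scale * err))

# Perfect tracking gives reward 1.0; any deviation reduces it smoothly.
ref = [0.1, -0.4, 0.8]
print(pose_tracking_reward(ref, ref))               # 1.0
print(pose_tracking_reward([0.0, -0.5, 0.9], ref))  # between 0 and 1
```

In a full training loop, this scalar would be summed with velocity- and end-effector-tracking terms and fed to an off-the-shelf RL algorithm as the per-step reward.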
Sony's robotic Aibo pup continues to learn new tricks. Thanks to a new software update, the robot companion will now predict when you come home and sit patiently at the front door. According to Sony's website, you'll first need to assign a meeting place -- the entrance to your home -- by saying a phrase like "this is where you should go." Aibo should then lower its head and 'sniff' the ground to indicate that it's storing the location. If the process is successful, a door icon should appear on the map located inside the companion app.
A numeric representation of uncertain and incomplete sensor knowledge called certainty grids was used successfully in several recent mobile robot control programs developed at the Carnegie-Mellon University Mobile Robot Laboratory (MRL). Certainty grids have proven to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. MRL had good early success with ad hoc formulas for updating grid cells with new information. A new Bayesian statistical foundation for the operations promises further improvement. MRL proposes to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves.
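The Bayesian update the abstract refers to is commonly implemented in log-odds form: each grid cell holds a probability of being occupied, and each new sensor reading shifts that probability by adding log-odds evidence. The snippet below is a minimal sketch of that fusion rule under an independent-evidence assumption, not MRL's actual formulas.

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def inv_logit(l):
    """Log-odds -> probability."""
    return 1.0 / (1.0 + math.exp(-l))

def update_cell(prior_p, sensor_p):
    """Fuse a new sensor reading into one grid cell.

    Adding log-odds is the Bayesian update for independent evidence:
    two mildly confident readings yield a more confident cell than either
    reading alone.
    """
    return inv_logit(logit(prior_p) + logit(sensor_p))

# Starting from an uninformed 0.5 prior, two readings that each suggest
# 0.7 occupancy push the cell well above 0.7.
p = 0.5
for reading in (0.7, 0.7):
    p = update_cell(p, reading)
print(round(p, 3))  # 0.845
```

A full certainty grid is just a 2D array of such cells, with each range measurement updating the cells its sensor beam passes through.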
This article informally examines the role of stored internal state (that is, memory) in the control of autonomous mobile robots. The difficulties associated with using stored internal state are reviewed. It is argued that the underlying cause of these problems is the implicit predictions contained within the state, and, therefore, many of the problems can be solved by taking care that the internal state contains information only about predictable aspects of the environment. One way of accomplishing this is to maintain internal state only at a high level of abstraction. The resulting information can be used to guide the actions of a robot but should not be used to control these actions directly; local sensor information is still necessary for immediate control.
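The design principle above – stored state guides, live sensors control – can be made concrete with a two-level sketch. Everything here (the room-adjacency graph, the sonar threshold, the function names) is a hypothetical illustration, not the article's system: the abstract map only suggests a direction, while the motor command depends on current sensor data, so stale memory can never drive the robot into an obstacle.

```python
from collections import deque

def choose_heading(room_graph, current, goal):
    """High level: stored abstract state (a room-adjacency graph) suggests
    which neighboring room to head for. The graph may be stale, so its
    output only guides the robot; it never commands motors directly."""
    queue, seen = deque([(current, None)]), {current}
    while queue:
        room, first_step = queue.popleft()
        if room == goal:
            return first_step or goal
        for nxt in room_graph.get(room, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, first_step or nxt))
    return None  # goal unreachable in the remembered map

def motor_command(suggested_room, front_range_m, safety_m=0.5):
    """Low level: the immediate action depends on live sensing. If the
    sonar sees something close, stop -- regardless of what memory says."""
    if front_range_m < safety_m:
        return "stop"
    return f"advance toward {suggested_room}"

rooms = {"lab": ["hall"], "hall": ["lab", "office"], "office": ["hall"]}
step = choose_heading(rooms, "lab", "office")
print(step)                      # hall
print(motor_command(step, 2.0))  # advance toward hall
print(motor_command(step, 0.2))  # stop
```

The split keeps the memorized map at a level of abstraction (room connectivity) that changes slowly and predictably, while volatile details like a closed door are handled entirely by the sensor-driven layer.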