Certainty grids, a numeric representation of uncertain and incomplete sensor knowledge, have been used successfully in several recent mobile robot control programs developed at the Carnegie-Mellon University Mobile Robot Laboratory (MRL). Certainty grids have proven to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems.
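The core idea can be illustrated with a small sketch: a grid of cells whose occupancy beliefs are fused from repeated noisy sensor readings. The class name, the log-odds representation, and the specific numbers below are illustrative assumptions, not the MRL implementation.

```python
import math

class CertaintyGrid:
    """Toy certainty grid: each cell holds a belief that it is occupied."""

    def __init__(self, width, height):
        # Store beliefs in log-odds form; 0.0 means "unknown" (p = 0.5).
        self.logodds = [[0.0] * width for _ in range(height)]

    def update(self, x, y, p_occupied):
        # Fuse one sensor reading into cell (x, y) by adding its log-odds,
        # which is the standard Bayesian update under an independence assumption.
        self.logodds[y][x] += math.log(p_occupied / (1.0 - p_occupied))

    def probability(self, x, y):
        # Convert log-odds back to a probability for planners to consume.
        return 1.0 / (1.0 + math.exp(-self.logodds[y][x]))

grid = CertaintyGrid(10, 10)
for _ in range(3):           # three sonar hits suggesting cell (2, 2) is occupied
    grid.update(2, 2, 0.7)
print(round(grid.probability(2, 2), 3))   # belief rises well above 0.5
```

Because every sensor reading is folded into the same grid, sensor fusion, motion planning, and landmark identification can all operate on one shared belief map, which is what makes the representation a unifying one.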
This article informally examines the role of stored internal state (that is, memory) in the control of autonomous mobile robots. The difficulties associated with using stored internal state are reviewed. It is argued that the underlying cause of these problems is the implicit predictions contained within the state, and, therefore, many of the problems can be solved by taking care that the internal state contains information only about predictable aspects of the environment. One way of accomplishing this is to maintain internal state only at a high level of abstraction. The resulting information can be used to guide the actions of a robot but should not be used to control these actions directly; local sensor information is still necessary for immediate control.
An insect-like machine with six individually powered legs, intended for the battlefield, received millions of dollars in US government funding during the 1980s. The project to create a fleet of real-life AT-AT walkers from Star Wars started in 1981 at Ohio State University, as the military searched for ways to traverse rough terrain that wheeled vehicles couldn't manage. Called the Adaptive Suspension Vehicle (ASV), the machine was part of a decade-long project that received a reported $1 million a year from DARPA between 1981 and 1990 before eventually being scrapped. The fate of the ASV itself is a mystery; nobody knows whether it is in storage somewhere or was scrapped decades ago. Professors Robert McGhee and Kenneth Waldron of Ohio State University described the project in a scientific paper in 1986.
Joe Wieciek, Software Ops Manager at Brain Corp, shares the five most important lessons learned from powering the world's largest fleet of autonomous mobile robots (AMRs) operating in commercial indoor public spaces. AI software technology is used today to build autonomous robots for the retail industry, malls, airports, hospitals and more. At Brain Corp, our groundbreaking work with our manufacturing partners has helped us build and sell autonomous mobile robots across several verticals and brands. Once the robots are deployed, our software operations team works diligently to ensure that every BrainOS-enabled robot performs well in the field, collecting data and insights via the cloud that we use to improve our software and systems, and ultimately create better user experiences. However, managing a handful of robots in the field is drastically different from managing a large global fleet. We learned that the hard way on our path to powering the world's largest fleet of AMRs operating in commercial indoor public spaces.
Scientists have created an army of tiny walking robots in a new breakthrough. The devices are the first microscopic robots made out of semiconductor components, which means they can be controlled and made to walk with standard electronic signals and integrated into more traditional circuits. The researchers behind the work now hope to build even more complex versions. Future robots could then be controlled by computer chips, produced en masse, and built in such a way that they could travel through human tissue and blood, acting like surgeons, the researchers say.
Tasks in the pharmaceutical, life sciences and biomedical industries have always been time-consuming and complex, and the Covid-19 pandemic has only added to that complexity. To ensure speed and accuracy, and to reduce infection risk for human workers, robots are being called upon to handle the ever-increasing range of workflows in today's research and development laboratories. Laboratory automation, drug discovery and pharmaceutical manufacturing are emerging fields where the services of robots are leveraged for research and development. Robotic lab assistants let researchers and scientists focus on high-level tasks, such as analyzing potential therapeutic compounds, rather than on the routine mixing of compounds to determine their curative characteristics.
Modern Reinforcement Learning (RL) algorithms promise to solve difficult motor control problems directly from raw sensory inputs. Part of their attraction is that they represent a general class of methods that can learn a solution from a suitably specified reward and minimal prior knowledge, even in situations where engineering a controller by hand would be difficult or expensive for a human expert. For RL to truly make good on this promise, however, we need algorithms and learning setups that can work across a broad range of problems with minimal problem-specific adjustments or engineering. In this paper, we study this idea of generality in the locomotion domain. We develop a learning framework that can learn sophisticated locomotion behavior for a wide spectrum of legged robots, such as bipeds, tripeds, quadrupeds and hexapods, including wheeled variants. Our learning framework relies on a data-efficient, off-policy multi-task RL algorithm and a small set of reward functions that are semantically identical across robots. To underline the general applicability of the method, we keep the hyperparameter settings and reward definitions constant across experiments and rely exclusively on on-board sensing. For nine different types of robots, including a real-world quadruped robot, we demonstrate that the same algorithm can rapidly learn diverse and reusable locomotion skills without any platform-specific adjustments or additional instrumentation of the learning setup.
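A reward that is "semantically identical across robots" can be sketched as follows: it is defined only over quantities that every legged platform can estimate from on-board sensing (body velocity, body orientation), so the same definition applies to a biped, quadruped, or hexapod without per-platform tuning. The function name, weights, and tolerance below are illustrative assumptions, not the paper's actual reward.

```python
def locomotion_reward(forward_velocity, target_velocity, upright_cos, tol=0.25):
    """Platform-independent locomotion reward (illustrative, not the paper's).

    forward_velocity: estimated body velocity along the commanded direction (m/s)
    target_velocity:  commanded velocity (m/s)
    upright_cos:      cosine between the body's "up" axis and gravity-up
    """
    # Velocity-tracking term: 1 when on target, decaying linearly with error.
    velocity_term = max(0.0, 1.0 - abs(forward_velocity - target_velocity) / tol)
    # Posture term: rewards keeping the body upright, clipped at zero.
    posture_term = max(0.0, upright_cos)
    # The same fixed weights are reused across every robot rather than
    # tuned per platform, which is what makes the setup general.
    return 0.8 * velocity_term + 0.2 * posture_term

print(locomotion_reward(0.5, 0.5, 1.0))    # on-target and upright: maximal reward
print(locomotion_reward(0.75, 0.5, 1.0))   # velocity off by the full tolerance
```

Because nothing in this definition references leg count, morphology, or external motion capture, one multi-task off-policy learner can optimize it on any of the platforms without changing the learning setup.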
Ford is experimenting with four-legged robots to scout its factories, with the aim of saving time and money. The Ford Media Center presented the procedure on 26 July 2020 as follows: "Ford is tapping four-legged robots at its Van Dyke Transmission Plant in early August to laser scan the plant, helping engineers update the original computer-aided design which is used when we are getting ready to retool our plants. These robots can be deployed into tough-to-reach areas within the plant to scan the area with laser scanners and high-definition cameras, collecting data used to retool plants, saving Ford engineers time and money. Ford is leasing two robots, nicknamed Fluffy and Spot, from Boston Dynamics – a company known for building sophisticated mobile robots."
Ford Motor Co. digital engineer Paula Wiebelhaus takes Fluffy outside for walks in the yard. Her cats hide from him. He has his own spot in a corner of the bedroom where, after a run, he plugs into his charger to reenergize. Fluffy is no typical dog. The bright-yellow creature is a nimble four-legged robot adopted by the Dearborn automaker to crawl around its facilities and take 3D laser images that engineers use to redesign and retool its plants. Using the robotic dog is less clunky than the traditional way, could save time and money, and may help bring new products to market sooner, said Mark Goderis, digital engineering manager at Ford's Advanced Manufacturing Center.
Based in Hangzhou, outside Shanghai, Unitree Robotics was founded in 2017 by Xing Wang with the mission of making legged robots as popular and affordable as smartphones and drones are today. In a showcase of the heavily Boston Dynamics-inspired company's recent progress, it has released a video showing its four-legged robot A1 balancing in a yoga-like pose. "Marc Raibert … is my idol," Wang once told IEEE Spectrum about the president and founder of Boston Dynamics. While the famous robotics company serves as inspiration for Unitree Robotics, the Chinese company wants to "make quadruped robots simpler and smaller, so that they can help ordinary people with things like carrying objects or as companions," Wang told IEEE Spectrum. To build this accessibility into the A1 robot, Unitree Robotics made it weigh only 12 kg -- just under half the weight of Boston Dynamics' Spot robot, which weighs 25 kg.