Robot Planning & Action


Sampling-Based Robot Motion Planning

Communications of the ACM

In recent years, robots have come to play an active role in everyday life: medical robots assist in complex surgeries; search-and-rescue robots are employed in mining accidents; and low-cost commercial robots clean houses. There is a growing need for sophisticated algorithmic tools enabling stronger capabilities for these robots. One fundamental problem that robotics researchers grapple with is motion planning, which deals with planning a collision-free path for a moving system in an environment cluttered with obstacles [13, 29]. To a layman, it may seem that the wide use of robots in modern life implies the motion-planning problem has already been solved. This is far from true.
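
To make the idea of sampling-based planning concrete, here is a minimal sketch of a rapidly-exploring random tree (RRT) in a toy 2-D world. The obstacles, bounds, and step size are illustrative assumptions and are not taken from the article.

```python
# Minimal RRT sketch: grow a tree of collision-free configurations by random
# sampling until the goal region is reached. All parameters are toy values.
import math
import random

OBSTACLES = [((5.0, 5.0), 1.5), ((2.0, 7.0), 1.0)]  # (center, radius)
BOUNDS = (0.0, 10.0)
STEP = 0.5

def collision_free(p):
    """A configuration is valid if it lies outside every circular obstacle."""
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def steer(q_near, q_rand):
    """Move from q_near toward q_rand by at most STEP."""
    d = math.dist(q_near, q_rand)
    if d <= STEP:
        return q_rand
    t = STEP / d
    return (q_near[0] + t * (q_rand[0] - q_near[0]),
            q_near[1] + t * (q_rand[1] - q_near[1]))

def rrt(start, goal, iters=5000, goal_tol=0.5):
    """Grow a tree from start; return a path to the goal region, or None."""
    parent = {start: None}
    for _ in range(iters):
        q_rand = (random.uniform(*BOUNDS), random.uniform(*BOUNDS))
        q_near = min(parent, key=lambda q: math.dist(q, q_rand))
        q_new = steer(q_near, q_rand)
        if collision_free(q_new):
            parent[q_new] = q_near
            if math.dist(q_new, goal) < goal_tol:
                path = [q_new]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
    return None

print(rrt((1.0, 1.0), (9.0, 9.0)))
```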


Motion Planning Explorer: Visualizing Local Minima using a Local-Minima Tree

arXiv.org Artificial Intelligence

Motion Planning Explorer: Visualizing Local Minima using a Local-Minima Tree. Andreas Orthey, Benjamin Frész, Marc Toussaint.

Abstract: We present an algorithm to visualize local minima in a motion planning problem, which we call the motion planning explorer. The input to the explorer is a planning problem, a sequence of lower-dimensional projections of the configuration space, a cost functional and an optimization method. The output is a local-minima tree, which is interactively grown based on user input. We show the motion planning explorer to faithfully capture the structure of four real-world scenarios.

Introduction: In motion planning, we develop algorithms to move robots from an initial configuration to a desired goal configuration. Such algorithms are essential for manufacturing, autonomous flight, computer animation and protein folding [14]. Most motion planning algorithms are black-box algorithms: a user inputs a goal configuration and the algorithm returns a motion. In real-world scenarios, however, black-box algorithms are problematic. There is no way to guide or prevent motions, and human users cannot visualize the internal mechanism of the algorithm.
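
To make the local-minima-tree idea more tangible, the following is a hypothetical sketch of how such a tree might be represented and expanded: each node stores a locally optimal path, and expanding a node re-runs a local optimizer from perturbed starts to find distinct sibling minima. The optimizer, perturbation, cost functional, and distinctness test are placeholders, not the authors' implementation.

```python
# Hypothetical local-minima-tree sketch (not the paper's implementation).
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

Path = List[Sequence[float]]  # a path is a list of configurations

@dataclass
class MinimumNode:
    path: Path                 # locally optimal path at this projection level
    level: int                 # index into the sequence of projections
    children: List["MinimumNode"] = field(default_factory=list)

def expand(node: MinimumNode,
           optimize: Callable[[Path], Path],        # local optimizer (assumed given)
           perturb: Callable[[Path], Path],         # random perturbation of a path
           cost: Callable[[Path], float],           # cost functional
           distinct: Callable[[Path, Path], bool],  # "is this a new local minimum?"
           n_samples: int = 10) -> None:
    """Grow the tree one step, as a user might do interactively."""
    for _ in range(n_samples):
        candidate = optimize(perturb(node.path))
        if all(distinct(candidate, c.path) for c in node.children):
            node.children.append(MinimumNode(candidate, node.level + 1))
    # Present the discovered minima sorted by cost for the user to pick from.
    node.children.sort(key=lambda c: cost(c.path))
```

In this reading, the interactive part of the explorer is simply the user choosing which node to expand next and which local minimum to descend into.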


'Robot Arm' assisted operations and AI techniques to dominate future: Apollo Doc

#artificialintelligence

Hyderabad, Sep 8: Dr Mithin Aachi, Sr Ortho & Joint Replacement Surgeon, Apollo Hospitals, Secunderabad, said on Sunday that 'Robot Arm' assisted navigated knee replacement and Artificial Intelligence and Machine Learning will definitely dominate patient care in the future, and that there will be an exponential increase in their usage in the years to come. Talking to the media on the occasion of the 5th edition of the Arthroplasty Arthroscopy Summit, organized by Apollo Hospitals under the aegis of the Telangana Orthopaedic Surgeon's Association (TOSA) and the Twin Cities Orthopaedic Society (TCOS), Dr Aachi, along with Dr N Somashekar Reddy, Sr Ortho & Joint Replacement Surgeon, Apollo Hospitals, said that a robot does not do surgery by itself but helps the surgeon plan, execute and achieve a better surgical result than the doctor would have by human effort alone. Hence robot-assisted surgery results in less pain, better outcomes and longer survival rates for joint replacement surgery, he said. Explaining further, Dr Aachi said that for a robot-assisted surgery the patient usually undergoes a CT scan of the knee, which correctly informs the surgeon about the deformity, the thickness of the bone cuts required and the implant sizes to be placed. This enhanced planning gives the surgeon an idea of the type of bone cuts required and the size of implants that will correctly match that patient on an individual level.


IRT Seminar : "The Challenges of Integrative Heterogeneous AI : Illustration on the Integration of Task and Motion Planning" by Malik Ghallab - 17 sept. 2019 - Toulouse

#artificialintelligence

Interesting scientific controversies have been going on for the last few years about recent trends in AI, sometimes referred to as the "swing from symbolic to numeric AI". Such a dichotomy might not be very clarifying, nor does it seem very relevant to the advancement of the field. If one views and practices AI research as a multidisciplinary investigation whose purpose is the computational modeling and mechanization of a diversity of cognitive tasks (which may require embodiment in sensory-motor capabilities), then one has to face the challenge of integrating a diversity of mathematically heterogeneous representations. A single class of models that is highly adequate for one purpose, for example data association, can be totally ineffective for other purposes, such as extracting and reasoning about the underlying ontology. The ambition of Integrative Heterogeneous AI is precisely to develop approaches and architectures capable of handling heterogeneous representations for complex tasks.


The challenges of integrative heterogeneous AI: illustration on the integration of task and motion planning by Malik Ghallab, emeritus researcher at CNRS - IRT Saint Exupéry

#artificialintelligence

The presentation will motivate the challenges of Integrative AI and illustrate them on a very active research issue in the AI & Robotics community: the integration of motion and task planning. The basic techniques and representations for the two separate problems of task planning and motion planning will be briefly introduced, as well as the practical need for their integration. The main integrative approaches currently being explored will be surveyed.
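
One common way the two planners are coupled, sketched below under assumed interfaces (the function names and the constraint-feedback scheme are illustrative and not taken from the talk), is to let a symbolic task planner propose action sequences while a motion planner checks each action for geometric feasibility, feeding failures back as constraints.

```python
# Hedged sketch of interleaved task and motion planning.
from typing import Callable, List, Optional, Set, Tuple

Action = str
Plan = List[Action]

def plan_task_and_motion(
    task_plan: Callable[[Set[Tuple[Action, int]]], Optional[Plan]],  # symbolic planner
    motion_plan: Callable[[Action], Optional[object]],               # returns a trajectory or None
    max_rounds: int = 20,
) -> Optional[List[Tuple[Action, object]]]:
    """Alternate symbolic and geometric planning until a fully feasible plan is found."""
    forbidden: Set[Tuple[Action, int]] = set()   # (action, step) pairs ruled out geometrically
    for _ in range(max_rounds):
        plan = task_plan(forbidden)
        if plan is None:
            return None                          # no symbolic plan satisfies the constraints
        trajectories = []
        for step, action in enumerate(plan):
            traj = motion_plan(action)
            if traj is None:
                forbidden.add((action, step))    # report the geometric failure upward
                break
            trajectories.append((action, traj))
        else:
            return trajectories                  # every action has a collision-free motion
    return None
```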


The AI of F.E.A.R. - Goal Oriented Action Planning AI and Games

#artificialintelligence

A short video to accompany my article that deconstructs the AI of the video game F.E.A.R. No copyright is claimed for the game footage, images and music; to the extent that material may appear to be infringed, I assert that such alleged infringement is permissible under fair-use principles in copyright law. If you believe material has been used in an unauthorized manner, please contact me.
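
The technique named in the title, Goal-Oriented Action Planning (GOAP), has non-player characters search over symbolic actions with preconditions and effects to reach a goal state. The toy sketch below shows that search idea with invented actions, costs and state symbols; it is not the game's implementation.

```python
# Toy GOAP-style planner: uniform-cost search over symbolic world states.
import heapq
import itertools
from typing import Dict, FrozenSet, List, Optional, Tuple

State = FrozenSet[str]

ACTIONS: Dict[str, Tuple[FrozenSet[str], FrozenSet[str], float]] = {
    # name: (preconditions, effects, cost) -- all made up for illustration
    "draw_weapon":  (frozenset(),          frozenset({"armed"}),      1.0),
    "take_cover":   (frozenset(),          frozenset({"in_cover"}),   1.0),
    "attack_enemy": (frozenset({"armed"}), frozenset({"enemy_down"}), 2.0),
}

def goap_plan(start: State, goal: FrozenSet[str]) -> Optional[List[str]]:
    """Search for a cheapest action sequence whose effects satisfy the goal."""
    tiebreak = itertools.count()
    frontier: List[Tuple[float, int, State, List[str]]] = [(0.0, next(tiebreak), start, [])]
    seen = set()
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)
        if goal <= state:
            return plan
        if state in seen:
            continue
        seen.add(state)
        for name, (pre, eff, c) in ACTIONS.items():
            if pre <= state:
                heapq.heappush(frontier, (cost + c, next(tiebreak), state | eff, plan + [name]))
    return None

print(goap_plan(frozenset(), frozenset({"enemy_down", "in_cover"})))
```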


Robot arm uses bacteria in its fingers to "taste" its environment

#artificialintelligence

By embedding engineered bacteria into the fingers of a robot arm, researchers have created a biohybrid bot that can "taste" -- and they think it could lead to a future in which robots are better equipped to respond to the world around them. For their study, which was published in the journal Science Robotics on Wednesday, a team from the University of California, Davis, and Carnegie Mellon University engineered E. coli bacteria to produce a fluorescent protein when they encountered the chemical IPTG. They then placed the engineered bacteria into wells built into a robot arm's flexible grippers. Finally, they covered the wells with a porous membrane that would keep the bacteria in place while letting liquids reach the cells. To test the system, the researchers had the arm reach into a water bath that sometimes contained IPTG.


Anyone can program this cheap robot arm in just 15 minutes

#artificialintelligence

Automata, a robotics firm in London, thinks it can fix this lag in uptake. Its robotic arm costs just $7,500 and is sold under the name Eva (yes, it is named after the robot in WALL-E). The company hopes to widen access to robots by focusing only on the more basic functions that small firms actually need. It is backed by $9.5 million from several investors, including robotics giant ABB.


Rapidly-Exploring Quotient-Space Trees: Motion Planning using Sequential Simplifications

arXiv.org Artificial Intelligence

Motion planning problems can be simplified by admissible projections of the configuration space to sequences of lower-dimensional quotient-spaces, called sequential simplifications. To exploit sequential simplifications, we present the Quotient-space Rapidly-exploring Random Trees (QRRT) algorithm. QRRT takes as input a start and a goal configuration, and a sequence of quotient-spaces. The algorithm grows trees on the quotient-spaces both sequentially and simultaneously to guarantee a dense coverage. QRRT is shown (1) to be probabilistically complete, and (2) to reduce the runtime by at least one order of magnitude. However, we show in experiments that the runtime varies substantially between different quotient-space sequences. To find out why, we perform an additional experiment, showing that the narrower an environment, the more a quotient-space sequence can reduce runtime.
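
The quotient-space intuition can be illustrated with a small, assumed example: sample the lower-dimensional space first (say, a planar base position) and only extend samples that are already valid there to the full configuration (base plus arm joints). The dimensions, bounds and validity checks below are stand-ins, not the paper's algorithm or benchmark setup.

```python
# Simplified sketch of sampling over a sequence of quotient-spaces.
import random
from typing import Callable, List, Optional, Tuple

Config = Tuple[float, ...]

def sample_on_quotient_sequence(
    dims: List[int],                        # cumulative dimensions, e.g. [2, 7] for base then base+arm
    valid: List[Callable[[Config], bool]],  # one validity check per quotient-space
    bounds: Tuple[float, float] = (-1.0, 1.0),
    max_tries: int = 100,
) -> Optional[Config]:
    """Build a full configuration by extending a valid sample level by level."""
    partial: Config = ()
    for level, dim in enumerate(dims):
        for _ in range(max_tries):
            extension = tuple(random.uniform(*bounds) for _ in range(dim - len(partial)))
            candidate = partial + extension
            if valid[level](candidate):     # admissibility: validity must also hold on the quotient
                partial = candidate
                break
        else:
            return None                     # this level could not be extended; a planner would retry
    return partial

# Toy usage: the 2-D base must avoid a disc; the full 3-D configuration is always valid.
base_ok = lambda q: q[0] ** 2 + q[1] ** 2 > 0.25
print(sample_on_quotient_sequence([2, 3], [base_ok, lambda q: True]))
```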


Explaining intuitive difficulty judgments by modeling physical effort and risk

arXiv.org Artificial Intelligence

The ability to estimate task difficulty is critical for many real-world decisions such as setting appropriate goals for ourselves or appreciating others' accomplishments. Here we give a computational account of how humans judge the difficulty of a range of physical construction tasks (e.g., moving 10 loose blocks from their initial configuration to their target configuration, such as a vertical tower) by quantifying two key factors that influence construction difficulty: physical effort and physical risk. Physical effort captures the minimal work needed to transport all objects to their final positions, and is computed using a hybrid task-and-motion planner. Physical risk corresponds to the stability of the structure, and is computed using noisy physics simulations to capture the costs for precision (e.g., attention, coordination, fine motor movements) required for success. We show that the full effort-risk model captures human estimates of difficulty and construction time better than either component alone.
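
A loose sketch of how the two factors might be combined into a single difficulty score is given below. The effort approximation (lifting work plus a nominal horizontal cost), the placement-noise model and the weights are assumptions for illustration; they do not reproduce the paper's planner or physics simulator.

```python
# Hypothetical effort-risk difficulty sketch for block-construction tasks.
import math
import random
from typing import List, Tuple

Block = Tuple[Tuple[float, float, float], Tuple[float, float, float], float]
# (initial position, target position, mass in kg)

G = 9.81

def physical_effort(blocks: List[Block]) -> float:
    """Approximate transport work: lifting against gravity plus a horizontal move cost."""
    effort = 0.0
    for (x0, y0, z0), (x1, y1, z1), mass in blocks:
        effort += mass * G * max(0.0, z1 - z0)               # lifting work
        effort += 0.1 * mass * math.hypot(x1 - x0, y1 - y0)  # nominal horizontal cost
    return effort

def physical_risk(blocks: List[Block], noise_sd: float = 0.01, n_sims: int = 200) -> float:
    """Fraction of noisy simulated placements in which some block misses its tolerance."""
    failures = 0
    for _ in range(n_sims):
        if any(math.hypot(random.gauss(0, noise_sd), random.gauss(0, noise_sd)) > 0.02
               for _ in blocks):
            failures += 1
    return failures / n_sims

def difficulty(blocks: List[Block], w_effort: float = 1.0, w_risk: float = 10.0) -> float:
    """Weighted combination of the two factors, mirroring the full effort-risk model."""
    return w_effort * physical_effort(blocks) + w_risk * physical_risk(blocks)

# Toy usage: a 10-block vertical tower built from blocks lying on the ground.
blocks = [((0.0, 0.0, 0.0), (0.5, 0.0, 0.1 * i), 0.2) for i in range(10)]
print(difficulty(blocks))
```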