In a building several stories tall with numerous rooms, hundreds of obstacles and thousands of places to inspect, the several dozen robots move as one cohesive unit. They spread out in a search pattern to thoroughly check the entire building while simultaneously splitting tasks so as not to waste time doubling back on their own paths or re-checking places other robots have already visited. Such cohesion would be difficult for human controllers to achieve, let alone for an artificial controller to compute in real-time. "If a control problem has three or four robots that live in a world with only a handful of rooms, and if the collaborative task is specified by simple logic rules, there are state-of-the-art tools that can compute an optimal solution that satisfies the task in a reasonable amount of time," said Michael M. Zavlanos, the Mary Milus Yoh and Harold L. Yoh, Jr. Associate Professor of Mechanical Engineering and Materials Science at Duke University. "And if you don't care about the best solution possible, you can solve for a few more rooms and more complex tasks in a matter of minutes, but still only a dozen robots tops," Zavlanos said.
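To see why exact planners top out at "a dozen robots tops," consider a back-of-the-envelope sketch (our illustration, not from the article): the joint state space an optimal planner must search grows exponentially with the number of robots.

```python
# Illustrative only: if each robot can occupy any room independently, the
# number of joint configurations is rooms ** robots, which is why exact
# multi-robot planners scale to a handful of robots but not several dozen.

def joint_states(num_rooms: int, num_robots: int) -> int:
    """Size of the joint configuration space for independent room occupancy."""
    return num_rooms ** num_robots

# Three or four robots in a handful of rooms: tractable.
print(joint_states(5, 3))      # 125 joint configurations
# Several dozen robots in a building with a thousand inspection sites:
print(joint_states(1000, 30))  # 10^90 joint configurations -- hopeless to enumerate
```

This counting argument ignores task logic and obstacles, both of which only make the search harder; it is the reason the quoted tools handle "three or four robots" optimally but degrade quickly beyond that.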
In this episode, we hear from Luca Colasanto, Senior Robotics Scientist at Realtime Robotics, about real-time robot motion planning in dynamic, complex environments involving human-robot collaboration. Realtime Robotics focuses on accelerating conventional motion planning by optimizing both algorithms and hardware, allowing robotic tools to be used safely in work areas shared with humans. Luca spoke to our interviewer Kate about Realtime Robotics' fast motion planning technology, including key aspects such as perception, algorithms, and custom hardware. Luca Colasanto is a Sr. Scientist at Realtime Robotics focusing on AI-based grasping and multi-robot optimization. He completed his PhD in Humanoid Robotics at the Italian Institute of Technology, focusing on control systems for bipedal walking machines and compliant actuators.
Researchers from Université de Sherbrooke in Canada have created a human-worn robot arm with the dexterity to pick fruit. The device is called a supernumerary robotic arm: rather than replacing an existing human limb, it adds an extra one to the body. The arm has three degrees of freedom, is powered by a hydraulic system connected to the wearer through a tether, and is controlled by a second human operator.
A team of researchers from Université de Sherbrooke in Canada has created a badass, waist-mounted hydraulic arm that's capable of smashing through walls, IEEE Spectrum reports. A video uploaded by the team shows the robotic arm in action: it can move heavy power tools, paint walls, pick vegetables -- and even Hulk-smash through drywall. The arm can be remote-controlled via a miniaturized replica operated by a second person standing nearby. The user doesn't have to carry the weight of the machinery on their back, thanks to a tether that connects the arm to the bulkier hardware nearby.
Identifying algorithms that flexibly and efficiently discover temporally-extended multi-phase plans is an essential step for the advancement of robotics and model-based reinforcement learning. The core problem of long-range planning is finding an efficient way to search through the tree of possible action sequences. Existing non-learned planning solutions from the Task and Motion Planning (TAMP) literature rely on the existence of logical descriptions for the effects and preconditions for actions. This constraint allows TAMP methods to efficiently reduce the tree search problem but limits their ability to generalize to unseen and complex physical environments. In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances. However, DRL methods struggle to handle the very sparse reward landscapes inherent to long-range multi-step planning situations. Here, we propose the Curious Sample Planner (CSP), which fuses elements of TAMP and DRL by combining a curiosity-guided sampling strategy with imitation learning to accelerate planning. We show that CSP can efficiently discover interesting and complex temporally-extended plans for solving a wide range of physically realistic 3D tasks. In contrast, standard planning and learning methods often fail to solve these tasks at all or do so only with a huge and highly variable number of training samples. We explore the use of a variety of curiosity metrics with CSP and analyze the types of solutions that CSP discovers. Finally, we show that CSP supports task transfer so that the exploration policies learned during experience with one task can help improve efficiency on related tasks.
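The core idea in the CSP abstract -- steering a tree search over action sequences toward novel states rather than expanding uniformly -- can be sketched in a few lines. This is our toy illustration of curiosity-guided sampling, not the paper's code: the state space, actions, and visit-count novelty measure here are all stand-ins (the paper uses learned curiosity metrics over physically realistic 3D tasks).

```python
# Toy sketch of curiosity-guided tree search: expand the least-visited
# frontier state first, so exploration is directed even when rewards are
# sparse. All names and the visit-count novelty proxy are our assumptions.
from collections import defaultdict

def curious_plan(start, actions, step, is_goal, max_expansions=10_000):
    visit_counts = defaultdict(int)   # novelty proxy: how often a state was expanded
    frontier = [(start, [])]          # (state, action sequence that reaches it)
    for _ in range(max_expansions):
        if not frontier:
            return None
        # Curiosity: prefer expanding the most novel (least-visited) state.
        frontier.sort(key=lambda node: visit_counts[node[0]])
        state, plan = frontier.pop(0)
        visit_counts[state] += 1
        for a in actions:
            nxt = step(state, a)
            if is_goal(nxt):
                return plan + [a]
            frontier.append((nxt, plan + [a]))
    return None

# Example: reach state 7 from 0 with +1/+3 moves (stand-ins for motion primitives).
plan = curious_plan(0, [1, 3], step=lambda s, a: s + a, is_goal=lambda s: s == 7)
```

With visit counts as the only curiosity signal this degenerates to a breadth-first-like search on small problems; the point of CSP is that a learned curiosity metric keeps the same mechanism tractable on long-horizon, multi-phase tasks where uniform expansion fails.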
Manipulation and assembly tasks require non-trivial planning of actions depending on the environment and the final goal. Previous work in this domain often assembles particular instances of objects from known sets of primitives. In contrast, here we aim to handle varying sets of primitives and to construct different objects of the same shape category. Given a single object instance of a category, e.g. an arch, and a binary shape classifier, we learn a visual policy to assemble other instances of the same category. In particular, we propose a disassembly procedure and learn a state policy that discovers new object instances and their assembly plans in state space. We then render simulated states in the observation space and learn a heatmap representation to predict alternative actions from a given input image. To validate our approach, we first demonstrate its efficiency for building object categories in state space. We then show the success of our visual policies for building arches from different primitives. Moreover, we demonstrate (i) the reactive ability of our method to re-assemble objects using additional primitives and (ii) the robust performance of our policy on unseen primitives resembling the building blocks used during training. Our visual assembly policies are trained with no real images and reach up to 95% success rate when evaluated on a real robot.
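The "heatmap representation" mentioned in the abstract is a common way to turn action prediction into dense per-pixel scoring. A minimal sketch of the readout step, assuming a network that emits one score per image pixel (the function and shapes here are our illustration, not the authors' architecture):

```python
# Minimal sketch of heatmap-based action selection: the policy network scores
# every pixel of the input image, and the chosen action is the pixel location
# with the highest score. The network itself is omitted; `heatmap` stands in
# for its output.
import numpy as np

def pick_action(heatmap: np.ndarray) -> tuple:
    """Return the (row, col) pixel with the highest action score."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

h = np.zeros((64, 64))
h[12, 40] = 1.0                      # pretend the network scored this pixel highest
assert pick_action(h) == (12, 40)    # action = act at image location (12, 40)
```

Framing actions as heatmaps keeps the output spatially aligned with the input image, which is one reason such policies transfer from rendered simulated states to real camera images, as the abstract's sim-to-real result suggests.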
ActiNav from Universal Robots (UR) is a new UR application kit for companies of all sizes that simplifies the integration of autonomous bin picking of parts and their accurate placement in machines using UR cobots. ActiNav synchronously handles vision processing, collision-free motion planning, and autonomous real-time robot control, eliminating the complexity and risk usually associated with bin picking applications.

The complexity of automated bin picking is well known throughout the industry, requiring huge efforts in both integration and programming. Today, most bin picking products focus solely on the vision aspect of bin picking and often require hundreds of lines of additional programming to bridge the gap from "pick" to "place" – especially if the "place" is not just dropping into a box or tote but accurately inserting the part into a fixture for further processing. ActiNav Autonomous Bin Picking changes all that, allowing manufacturers with limited or no bin picking deployment expertise to quickly achieve high machine uptime and accurate part placement with few operator interventions.

ActiNav combines real-time autonomous motion control, collaborative robotics, vision and sensor systems in one easy-to-use, fast-to-deploy and cost-effective kit.