Roborock Qrevo Curv 2 Flow Review: The Most Beautiful, Best Robot Vacuum

WIRED

Pros: dirt detection and customizable cleaning options. Cons: SmartPlan AI still doesn't identify smaller objects. Roborock's Curv is probably the most attractive robot vacuum I've ever tested: the domed white docking station is elegant, convenient, and compact. It doesn't hurt that Roborock's navigation and cleaning systems are consistently the best among the robot vacuums I've tested.




APEX-MR: Multi-Robot Asynchronous Planning and Execution for Cooperative Assembly

Huang, Philip, Liu, Ruixuan, Liu, Changliu, Li, Jiaoyang

arXiv.org Artificial Intelligence

Compared to a single-robot workstation, a multi-robot system offers several advantages: 1) it expands the system's workspace, 2) it improves task efficiency, and, more importantly, 3) it enables significantly more complex and dexterous tasks, such as cooperative assembly. However, coordinating the tasks and motions of multiple robots is challenging due to system uncertainty, task efficiency, algorithm scalability, and safety concerns. To address these challenges, this paper studies multi-robot coordination and proposes APEX-MR, an asynchronous planning and execution framework designed to safely and efficiently coordinate multiple robots for cooperative assembly, e.g. LEGO assembly. In particular, APEX-MR provides a systematic approach to post-processing multi-robot task and motion plans to enable robust asynchronous execution under uncertainty. Experimental results demonstrate that APEX-MR can significantly speed up the execution of long-horizon LEGO assembly tasks, by 48% on average compared to sequential planning and 36% compared to synchronous planning. To further demonstrate its performance, we deploy APEX-MR on a dual-arm system to perform physical LEGO assembly. To our knowledge, this is the first robotic system capable of performing customized LEGO assembly using commercial LEGO bricks. The results demonstrate that the dual-arm system, with APEX-MR, can safely coordinate robot motions, collaborate efficiently, and construct complex LEGO structures. Our project website is available at https://intelligent-control-lab.github.io/APEX-MR/
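The asynchrony gain the abstract reports can be illustrated with a toy scheduler (not the authors' implementation): under synchronous execution both arms wait at a barrier after every assembly step, while under asynchronous execution each arm starts its next task as soon as it is free and its cross-robot dependencies have finished. Task durations and dependencies below are made up for illustration.

```python
# Toy makespan comparison: asynchronous (dependency-driven) vs.
# synchronous (barrier-per-step) execution of per-robot task lists.

def async_makespan(tasks, deps):
    """tasks: {robot: [duration, ...]}; deps: {(robot, step): [(robot, step), ...]}.
    Dependencies must point to earlier steps (a valid topological order)."""
    finish = {}
    robot_free = {r: 0.0 for r in tasks}
    for step in range(max(len(t) for t in tasks.values())):
        for r, durs in tasks.items():
            if step >= len(durs):
                continue
            # start when this robot is free AND all dependencies are done
            ready = max([robot_free[r]] + [finish[d] for d in deps.get((r, step), [])])
            finish[(r, step)] = ready + durs[step]
            robot_free[r] = finish[(r, step)]
    return max(finish.values())

def sync_makespan(tasks):
    """Every robot waits at a global barrier after each step."""
    total = 0.0
    for step in range(max(len(t) for t in tasks.values())):
        total += max(durs[step] for durs in tasks.values() if step < len(durs))
    return total

tasks = {"arm1": [2, 1, 3], "arm2": [1, 3, 1]}
deps = {("arm2", 2): [("arm1", 1)]}   # arm2's 3rd task needs arm1's 2nd
```

With these numbers the asynchronous schedule finishes in 6 time units versus 8 for the barrier-synchronized one, since arm2 never idles waiting for arm1's slowest step.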


Eye-in-Finger: Smart Fingers for Delicate Assembly and Disassembly of LEGO

Tang, Zhenran, Liu, Ruixuan, Liu, Changliu

arXiv.org Artificial Intelligence

Manipulation and insertion of small, tight-toleranced objects in robotic assembly remain a critical challenge for vision-based robotic systems due to the required precision and the cluttered environment. Conventional global or wrist-mounted cameras often suffer from occlusions when either assembling onto or disassembling from an existing structure. To address the challenge, this paper introduces "Eye-in-Finger", a novel tool design approach that enhances robotic manipulation by embedding low-cost, high-resolution perception directly at the tool tip. We validate our approach using LEGO assembly and disassembly tasks, which require the robot to manipulate in a cluttered environment and achieve sub-millimeter accuracy and robust error correction due to the tight tolerances. Experimental results demonstrate that the proposed system enables real-time, fine corrections to alignment errors, increasing the tolerable calibration error from 0.4 mm to up to 2.0 mm for the LEGO manipulation robot. Humans rely on vision for overall spatial perception but depend on tactile sensing for fine-grained, high-precision interactions [1]. For example, when threading a needle, placing a microchip on a circuit board, or performing delicate sutures in surgery, visual guidance provides an initial estimate, while tactile feedback refines positioning and detailed operations.
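The real-time fine-correction idea can be sketched as a closed loop: read the alignment offset from the tip camera, command the opposite motion, and repeat until the error is within tolerance. `measure_offset` and `move_by` are hypothetical callbacks standing in for the perception and robot interfaces; they are not the paper's API.

```python
def correct_alignment(measure_offset, move_by, tol=0.05, max_iters=20):
    """Closed-loop fine correction of a planar alignment error (units: mm).

    measure_offset() -> (dx, dy): offset observed by the tip camera
    move_by(dx, dy):              relative motion command to the robot
    """
    for _ in range(max_iters):
        dx, dy = measure_offset()
        if max(abs(dx), abs(dy)) <= tol:
            return True              # aligned within tolerance
        move_by(-dx, -dy)            # move against the observed error
    return False                     # did not converge in time
```

In this idealized sketch one correction suffices; in practice the loop iterates because each measurement and motion carries noise, which is exactly why in-tip sensing widens the tolerable calibration error.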


Physics-Aware Combinatorial Assembly Planning using Deep Reinforcement Learning

Liu, Ruixuan, Chen, Alan, Zhao, Weiye, Liu, Changliu

arXiv.org Artificial Intelligence

Combinatorial assembly uses standardized unit primitives to build objects that satisfy user specifications. Lego is a widely used platform for combinatorial assembly, in which people use unit primitives (i.e. Lego bricks) to build highly customizable 3D objects. This paper studies sequence planning for physical combinatorial assembly using Lego. Given the shape of the desired object, we want to find a sequence of actions for placing Lego bricks to build the target object. In particular, we aim to ensure the planned assembly sequence is physically executable. However, assembly sequence planning (ASP) for combinatorial assembly is particularly challenging due to its combinatorial nature, i.e. the vast number of possible combinations and complex constraints. To address the challenges, we employ deep reinforcement learning to learn a construction policy for placing unit primitives sequentially to build the desired object. Specifically, we design an online physics-aware action mask that efficiently filters out invalid actions and guides policy learning. In the end, we demonstrate that the proposed method successfully plans physically valid assembly sequences for constructing different Lego structures. The generated construction plan can be executed in the real world.
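The action-masking mechanic is standard in masked-policy RL and can be sketched as follows (a generic illustration, not the paper's code): a physics check marks each candidate brick placement valid or not, and invalid actions get their logits set to negative infinity so they receive zero sampling probability.

```python
import numpy as np

def masked_policy_sample(logits, valid_mask, rng):
    """Sample a brick-placement action, excluding physically invalid ones.

    logits:     raw policy scores for every candidate placement
    valid_mask: boolean array, True where a physics check deems the
                placement feasible (recomputed online at every step)
    """
    masked = np.where(valid_mask, logits, -np.inf)  # invalid -> probability 0
    probs = np.exp(masked - masked.max())           # stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)
```

Masking both speeds up learning (the policy never wastes probability mass on unbuildable placements) and guarantees that every sampled action passes the physics check.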


Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling

Zheng, Huangjie, Wang, Zhendong, Yuan, Jianbo, Ning, Guanghan, He, Pengcheng, You, Quanzeng, Yang, Hongxia, Zhou, Mingyuan

arXiv.org Machine Learning

Diffusion models excel at generating photo-realistic images but come with significant computational costs in both training and sampling. While various techniques address these computational challenges, a less-explored issue is designing an efficient and adaptable network backbone for iterative refinement. Current options like U-Net and Vision Transformer often rely on resource-intensive deep networks and lack the flexibility needed for generating images at variable resolutions or with a smaller network than used in training. This study introduces LEGO bricks, which seamlessly integrate Local-feature Enrichment and Global-content Orchestration. These bricks can be stacked to create a test-time reconfigurable diffusion backbone, allowing selective skipping of bricks to reduce sampling costs and generate higher-resolution images than the training data. LEGO bricks enrich local regions with an MLP and transform them using a Transformer block while maintaining a consistent full-resolution image across all bricks. Experimental results demonstrate that LEGO bricks enhance training efficiency, expedite convergence, and facilitate variable-resolution image generation while maintaining strong generative performance. Moreover, LEGO significantly reduces sampling time compared to other methods, establishing it as a valuable enhancement for diffusion models.
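Structurally, the stack-and-skip idea resembles a chain of residual stages that can be dropped individually at sampling time. The sketch below shows only that mechanic; in the paper each brick pairs an MLP (local-feature enrichment) with a Transformer block (global-content orchestration), while here the bricks are arbitrary callables.

```python
# Sketch of a test-time reconfigurable backbone: bricks are residual
# stages at full resolution, so skipping a brick still yields a valid
# (if weaker) forward pass -- no retraining required.
class BrickStack:
    def __init__(self, bricks):
        self.bricks = bricks  # callables mapping x -> residual update

    def __call__(self, x, skip=()):
        for i, brick in enumerate(self.bricks):
            if i in skip:        # a skipped brick contributes nothing
                continue
            x = x + brick(x)     # residual connection keeps x full-resolution
        return x
```

Because each brick only adds a residual to a full-resolution signal, removing bricks trades quality for sampling cost without changing the input/output contract.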


Visual AI and Linguistic Intelligence Through Steerability and Composability

Noever, David, Noever, Samantha Elizabeth Miller

arXiv.org Artificial Intelligence

This study explores the capabilities of multimodal large language models (LLMs) in handling challenging multistep tasks that integrate language and vision, focusing on model steerability, composability, and the application of long-term memory and context understanding. The problem addressed is the ability of an LLM (the Nov 2023 GPT-4 Vision Preview) to manage tasks that require synthesizing visual and textual information, especially where stepwise instructions and sequential logic are paramount. The research presents a series of 14 creatively and constructively diverse tasks, ranging from AI Lego Designing to AI Satellite Image Analysis, designed to test the limits of current LLMs in contexts that previously proved difficult without extensive memory and contextual understanding. Key findings from evaluating 800 guided dialogs include notable disparities in task completion difficulty. For instance, 'Image to Ingredient AI Bartender' (Low difficulty) contrasted sharply with 'AI Game Self-Player' (High difficulty), highlighting the LLM's varying proficiency in processing complex visual data and generating coherent instructions. Tasks such as 'AI Genetic Programmer' and 'AI Negotiator' showed high completion difficulty, emphasizing challenges in maintaining context over multiple steps. The results underscore the importance of developing LLMs that combine long-term memory and contextual awareness to mimic human-like thought processes in complex problem-solving scenarios.


A Lightweight and Transferable Design for Robust LEGO Manipulation

Liu, Ruixuan, Sun, Yifan, Liu, Changliu

arXiv.org Artificial Intelligence

LEGO is a well-known platform for prototyping pixelized objects. However, robotic LEGO prototyping (i.e. manipulating LEGO bricks) is challenging due to the tight connections and accuracy requirements. This paper investigates safe and efficient robotic LEGO manipulation. In particular, it reduces the complexity of the manipulation through hardware-software co-design. An end-of-arm tool (EOAT) is designed that reduces the problem dimension and allows large industrial robots to easily manipulate LEGO bricks. In addition, this paper uses an evolution strategy to safely optimize the robot motion for LEGO manipulation. Experiments demonstrate that the EOAT performs reliably in manipulating LEGO bricks and that the learning framework can effectively and safely improve the manipulation performance to a 100% success rate. The co-design is deployed to multiple robots (i.e. FANUC LR-mate 200id/7L and Yaskawa GP4) to demonstrate its generalizability and transferability. In the end, we show that the proposed solution enables sustainable robotic LEGO prototyping, in which the robot can repeatedly assemble and disassemble different prototypes.
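A minimal evolution-strategy loop of the kind used for such motion optimization can be sketched as follows. This is a generic (1+λ) ES, not the paper's algorithm: in the real system the objective would score safe physical (or simulated) manipulation trials; here a toy quadratic stands in.

```python
import random

def evolution_strategy(objective, theta, sigma=0.3, pop=8, iters=100, seed=0):
    """Minimal (1+lambda) ES: perturb the motion parameters with Gaussian
    noise, evaluate each candidate, and keep the best parameters found."""
    rng = random.Random(seed)
    best = objective(theta)
    for _ in range(iters):
        for _ in range(pop):
            cand = [t + rng.gauss(0.0, sigma) for t in theta]
            score = objective(cand)
            if score > best:          # maximize, e.g. a success-rate proxy
                best, theta = score, cand
    return theta, best
```

Because the strategy only ever evaluates small perturbations of a known-good motion, each trial stays close to safe behavior, which is the property that makes ES attractive for optimizing motions on physical hardware.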


Simulation-aided Learning from Demonstration for Robotic LEGO Construction

Liu, Ruixuan, Chen, Alan, Luo, Xusheng, Liu, Changliu

arXiv.org Artificial Intelligence

Recent advancements in manufacturing have created a growing demand for fast, automatic prototyping (i.e. assembly and disassembly) capabilities to meet users' needs. This paper studies automatic rapid LEGO prototyping, which is devoted to constructing target LEGO objects that satisfy individual customization needs and allow users to freely construct their novel designs. A construction plan is needed in order to automatically construct a user-specified LEGO design. However, a freely designed LEGO object might not have an existing construction plan, and generating one requires non-trivial effort since it must account for numerous constraints (e.g. object shape, colors, stability, etc.). In addition, programming the prototyping skill for the robot requires expert programming skills, which puts the task beyond the reach of the general public. To address these challenges, this paper presents a simulation-aided learning from demonstration (SaLfD) framework for easily deploying LEGO prototyping capability to robots. In particular, the user demonstrates constructing the customized novel LEGO object. The robot extracts the task information by observing the human operation and generates the construction plan. A simulation is developed to verify the correctness of the learned construction plan and the resulting LEGO prototype. The proposed system is deployed to a FANUC LR-mate 200id/7L robot. Experiments demonstrate that the proposed SaLfD framework can effectively correct and learn the prototyping (i.e. assembly and disassembly) tasks from human demonstrations, and the learned tasks are realized by the FANUC robot.