How Invisalign Became the World's Biggest User of 3D Printers

WIRED

Joe Hogan, Align Technology's plastics-nerd CEO, says you shouldn't eat with your aligners and that you don't need to wear your retainers every night. Joe Hogan sees a lot of smiles. When people ask him where he works, he responds with "Align Technology," which inevitably prompts the follow-up, "What's that?" After months, sometimes years, the discreet rival to braces promises to give people smiles they will want to show off. Hogan gets a look at them all. And he's eager to see more. Align is embarking on its biggest manufacturing overhaul since it was founded by two Stanford Graduate School of Business classmates 29 years ago. The company is preparing to begin directly 3D printing the aligners at the core of its business, ditching what Hogan describes as a longer, more wasteful process that involves making molds. A successful transition could lower costs and make treatment more affordable in the long run, bringing Invisalign to more customers and boosting Align's profits. It would also, according to Hogan, entrench Align as the world's biggest user of 3D printers.


Synthetic skin reveals hidden 'Mona Lisa' when exposed to heat

Popular Science

The octopus-inspired material could lead to better camouflage technology for the military and beyond. Octopuses and their cephalopod cousins have long fascinated biologists with their seemingly supernatural shapeshifting. The cephalopods rapidly change color and texture, blending into their surroundings and evading predators. This natural camouflage is a remarkable bit of biology that engineers have tried to replicate, albeit with limited success.



Image2Gcode: Image-to-G-code Generation for Additive Manufacturing Using Diffusion-Transformer Model

Wang, Ziyue, Jadhav, Yayati, Pak, Peter, Farimani, Amir Barati

arXiv.org Artificial Intelligence

Mechanical design and manufacturing workflows conventionally begin with conceptual design, followed by the creation of a computer-aided design (CAD) model and fabrication through material-extrusion (MEX) printing. This process requires converting CAD geometry into machine-readable G-code through slicing and path planning. While each step is well established, dependence on CAD modeling remains a major bottleneck: constructing object-specific 3D geometry is slow and poorly suited to rapid prototyping. Even minor design variations typically necessitate manual updates in CAD software, making iteration time-consuming and difficult to scale. To address this limitation, we introduce Image2Gcode, an end-to-end data-driven framework that bypasses the CAD stage and generates printer-ready G-code directly from images and part drawings. Instead of relying on an explicit 3D model, a hand-drawn or captured 2D image serves as the sole input. The framework first extracts slice-wise structural cues from the image and then employs a denoising diffusion probabilistic model (DDPM) over G-code sequences. Through iterative denoising, the model transforms Gaussian noise into executable print-move trajectories with corresponding extrusion parameters, establishing a direct mapping from visual input to native toolpaths. By producing structured G-code directly from 2D imagery, Image2Gcode eliminates the need for CAD or STL intermediates, lowering the entry barrier for additive manufacturing and accelerating the design-to-fabrication cycle. This approach supports on-demand prototyping from simple sketches or visual references and integrates with upstream 2D-to-3D reconstruction modules to enable an automated pipeline from concept to physical artifact. The result is a flexible, computationally efficient framework that advances accessibility in design iteration, repair workflows, and distributed manufacturing.
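The reverse-diffusion step at the heart of this pipeline can be sketched in a few lines. The snippet below is a toy illustration of a DDPM sampling loop over a sequence of print moves, with a placeholder noise-prediction function standing in for the paper's image-conditioned diffusion-transformer; the serialization into G1 moves at the end shows the "native toolpath" output format, not the authors' exact encoding.

```python
import numpy as np

def fake_eps_model(x_t, t, image_features):
    # Hypothetical stand-in: the real model is a diffusion transformer
    # conditioned on slice-wise image features.
    return 0.1 * x_t

def ddpm_sample(image_features, seq_len=64, steps=50, seed=0):
    """Denoise Gaussian noise into a (seq_len, 3) array of print moves."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal((seq_len, 3))          # start from pure noise
    for t in reversed(range(steps)):
        eps = fake_eps_model(x, t, image_features)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])  # DDPM posterior mean
        if t > 0:                                  # add noise except at t=0
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x  # columns: x position, y position, extrusion amount

moves = ddpm_sample(image_features=None)
# Each denoised row can be serialized directly as a G1 linear move:
gcode = [f"G1 X{x:.2f} Y{y:.2f} E{e:.3f}" for x, y, e in moves]
```

The point of generating in toolpath space rather than geometry space is visible here: the sampler's output needs only string formatting, not slicing, to become machine-executable.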


Optimal Safety-Aware Scheduling for Multi-Agent Aerial 3D Printing with Utility Maximization under Dependency Constraints

Stamatopoulos, Marios-Nektarios, Velhal, Shridhar, Banerjee, Avijit, Nikolakopoulos, George

arXiv.org Artificial Intelligence

Abstract--This article presents a novel coordination and task-planning framework to enable the simultaneous, conflict-free collaboration of multiple unmanned aerial vehicles (UAVs) for aerial 3D printing. The proposed framework formulates an optimization problem that takes a construction mission divided into sub-tasks and a team of autonomous UAVs with limited material volume and battery capacity. It generates an optimal mission plan comprising task assignments and scheduling, while accounting for task dependencies arising from the geometric and structural requirements of the 3D design, inter-UAV safety constraints, material usage, and the total flight time of each UAV. Potential conflicts during the simultaneous operation of the UAVs are addressed at the segment level by dynamically selecting the starting time and location of each task to guarantee collision-free parallel execution. An importance prioritization is proposed to accelerate the computation by guiding the solution towards more important tasks. Additionally, a utility-maximization formulation is proposed to dynamically determine the optimal number of UAVs required for a given mission, balancing the trade-off between minimizing makespan and deploying excess agents. The proposed framework's effectiveness is evaluated through a Gazebo-based simulation setup, where agents are coordinated by a mission control module allocating the printing tasks based on the generated optimal scheduling plan while remaining within the material and battery constraints of each UAV. A video of the whole mission is available at the following link: https://youtu.be/b4jwhkNPT
Note to Practitioners--This framework addresses the critical need for efficiency and safety in planning and scheduling multiple aerial robots for parallel aerial 3D printing. Existing approaches lack safety guarantees for UAVs during parallel construction. This work tackles these challenges by ensuring safety during parallel operations and effectively managing task dependencies.
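The core scheduling problem above, assigning dependency-constrained sub-tasks to a fixed fleet so each task starts only after its prerequisites finish, can be sketched with simple list scheduling. The tasks, durations, and dependency graph below are illustrative; the paper solves a richer optimization that also encodes collision avoidance, material, and battery constraints.

```python
def schedule(tasks, deps, durations, n_uavs):
    """Greedy list scheduling under dependency constraints.

    Returns a plan {task: (uav_index, start_time)} and the makespan.
    """
    finish = {}                    # task -> finish time
    uav_free = [0.0] * n_uavs      # next free time per UAV
    plan = {}
    remaining = set(tasks)
    while remaining:
        # tasks whose prerequisites have all been scheduled
        ready = [t for t in remaining
                 if all(p in finish for p in deps.get(t, []))]
        t = min(ready)             # deterministic pick; the paper instead
                                   # uses an importance prioritization
        u = min(range(n_uavs), key=lambda i: uav_free[i])
        start = max(uav_free[u],
                    max((finish[p] for p in deps.get(t, [])), default=0.0))
        finish[t] = start + durations[t]
        uav_free[u] = finish[t]
        plan[t] = (u, start)
        remaining.remove(t)
    return plan, max(finish.values())

plan, makespan = schedule(
    tasks=["A", "B", "C", "D"],
    deps={"C": ["A"], "D": ["A", "B"]},      # C waits on A; D waits on A and B
    durations={"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.0},
    n_uavs=2,
)
```

With two UAVs, A and B run in parallel, C starts once A finishes, and D must wait for both A and B, giving a makespan of 5.0. The utility-maximization formulation in the paper can be viewed as sweeping `n_uavs` and trading the resulting makespan against the cost of deploying extra agents.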


How does 3D printing work?

Popular Science

Rapid prototyping is a relatively simple process that can be scaled up or down. Since 3D printers debuted in the 1980s, the devices have been used to build meat, chocolate, human organs, clothing, cars, and houses. 3D printing is more mainstream than ever, and you can buy a machine for less than $200.


Implicit Neural Field-Based Process Planning for Multi-Axis Manufacturing: Direct Control over Collision Avoidance and Toolpath Geometry

Dutta, Neelotpal, Zhang, Tianyu, Liu, Tao, Chen, Yongxue, Wang, Charlie C. L.

arXiv.org Artificial Intelligence

Existing curved-layer-based process planning methods for multi-axis manufacturing address collisions only indirectly and generate toolpaths in a post-processing step, leaving toolpath geometry uncontrolled during optimization. We present an implicit neural field-based framework for multi-axis process planning that overcomes these limitations by embedding both layer generation and toolpath design within a single differentiable pipeline. Using sinusoidally activated neural networks to represent layers and toolpaths as implicit fields, our method enables direct evaluation of field values and derivatives at any spatial point, thereby allowing explicit collision avoidance and joint optimization of manufacturing layers and toolpaths. We further investigate how network hyperparameters and objective definitions influence singularity behavior and topology transitions, offering built-in mechanisms for regularization and stability control. The proposed approach is demonstrated on examples in both additive and subtractive manufacturing, validating its generality and effectiveness.
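The claim that sinusoidally activated networks "enable direct evaluation of field values and derivatives at any spatial point" is easy to make concrete. The sketch below builds a one-layer sine field with random (untrained) weights, evaluates it and its exact chain-rule gradient at an arbitrary 3D point, and cross-checks the gradient with finite differences; the real method trains deeper networks representing layer and toolpath fields.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 3))  # input: a 3D point
b1 = rng.standard_normal(16)
w2 = rng.standard_normal(16)
omega = 30.0                       # SIREN-style frequency scale

def field(p):
    # scalar field value at point p (e.g. a layer or toolpath potential)
    return w2 @ np.sin(omega * (W1 @ p + b1))

def grad(p):
    # exact gradient by the chain rule: d/dp of sin(omega*(W1 p + b1))
    return (w2 * np.cos(omega * (W1 @ p + b1))) @ (omega * W1)

p = np.array([0.1, -0.2, 0.3])
v, g = field(p), grad(p)

# central-difference check of the analytic gradient
eps = 1e-6
fd = np.array([(field(p + eps * e) - field(p - eps * e)) / (2 * eps)
               for e in np.eye(3)])
```

Because both the value and the gradient are available in closed form at any point, objectives such as collision clearance or toolpath spacing can be penalized directly during optimization instead of being checked after the fact.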


From Prompts to Printable Models: Support-Effective 3D Generation via Offset Direct Preference Optimization

Wu, Chenming, Li, Xiaofan, Dai, Chengkai

arXiv.org Artificial Intelligence

The transition from digital 3D models to physical objects via 3D printing often requires support structures to prevent overhanging features from collapsing during the fabrication process. While current slicing technologies offer advanced support strategies, they focus on post-processing optimizations rather than addressing the underlying need for support-efficient design during the model generation phase. This paper introduces SEG (Support-Effective Generation), a novel framework that integrates Direct Preference Optimization with an Offset (ODPO) into the 3D generation pipeline to directly optimize models for minimal support material usage. By incorporating support structure simulation into the training process, SEG encourages the generation of geometries that inherently require fewer supports, thus reducing material waste and production time. We demonstrate SEG's effectiveness through extensive experiments on two benchmark datasets, Thingi10k-Val and GPT-3DP-Val, showing that SEG significantly outperforms baseline models such as TRELLIS, DPO, and DRO in terms of support volume reduction and printability. Qualitative results further reveal that SEG maintains high fidelity to input prompts while minimizing the need for support structures. Our findings highlight the potential of SEG to transform 3D printing by directly optimizing models during the generative process, paving the way for more sustainable and efficient digital fabrication practices.
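An ODPO-style objective of the kind the abstract describes can be sketched in a few lines: the standard DPO reward margin between a preferred and a dispreferred sample is shifted by an offset that grows with how strongly the pair is separated. All numbers below are illustrative, and the log1p-scaled offset on support-volume difference is an assumed form, not the paper's exact choice; in SEG the preferred sample would be the generated shape that simulation shows needs less support material.

```python
import math

def odpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
              support_vol_w, support_vol_l, beta=0.1, alpha=1.0):
    # implicit rewards of preferred (w) and dispreferred (l) samples,
    # measured relative to a frozen reference model
    r_w = beta * (logp_w - ref_logp_w)
    r_l = beta * (logp_l - ref_logp_l)
    # offset grows with how much less support the preferred sample needs,
    # demanding a larger reward margin for strongly separated pairs
    offset = alpha * math.log1p(max(support_vol_l - support_vol_w, 0.0))
    margin = r_w - r_l - offset
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

loss = odpo_loss(logp_w=-10.0, logp_l=-12.0,
                 ref_logp_w=-11.0, ref_logp_l=-11.0,
                 support_vol_w=2.0, support_vol_l=6.0)
```

With a zero offset this reduces to plain DPO; the offset is what pushes the generator to separate low-support from high-support geometries by more than a token margin.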


Whole-Body Proprioceptive Morphing: A Modular Soft Gripper for Robust Cross-Scale Grasping

Han, Dong Heon, Xu, Xiaohao, Chen, Yuxi, Zhou, Yusheng, Zhang, Xinqi, Wang, Jiaqi, Bruder, Daniel, Huang, Xiaonan

arXiv.org Artificial Intelligence

Abstract--Biological systems, such as the octopus, exhibit masterful cross-scale manipulation by adaptively reconfiguring their entire form, a capability that remains elusive in robotics. Conventional soft grippers, while compliant, are mostly constrained by a fixed global morphology, and prior shape-morphing efforts have been largely confined to localized deformations, failing to replicate this biological dexterity. Inspired by this natural exemplar, we introduce the paradigm of collaborative, whole-body proprioceptive morphing, realized in a modular soft gripper architecture. Our design is a distributed network of modular self-sensing pneumatic actuators that enables the gripper to intelligently reconfigure its entire topology, achieving multiple morphing states that are controllable to form diverse polygonal shapes. By integrating rich proprioceptive feedback from embedded sensors, our system can seamlessly transition from a precise pinch to a large envelope grasp. We experimentally demonstrate that this approach expands the grasping envelope and enhances generalization across diverse object geometries (standard and irregular) and scales (up to 10), while also unlocking novel manipulation modalities such as multi-object and internal hook grasping. This work presents a low-cost, easy-to-fabricate, and scalable framework that fuses distributed actuation with integrated sensing, offering a new pathway toward achieving biological levels of dexterity in robotic manipulation. This remarkable adaptability stems from the ability to perform whole-body proprioceptive morphing, a capability fundamentally absent in conventional robotics [1]-[4].


M3D-skin: Multi-material 3D-printed Tactile Sensor with Hierarchical Infill Structures for Pressure Sensing

Yoshimura, Shunnosuke, Kawaharazuka, Kento, Okada, Kei

arXiv.org Artificial Intelligence

Tactile sensors have a wide range of applications, from utilization in robotic grippers to human motion measurement. If tactile sensors could be fabricated and integrated more easily, their applicability would further expand. In this study, we propose a tactile sensor, M3D-skin, that can be easily fabricated with high versatility by leveraging the infill patterns of a multi-material fused deposition modeling (FDM) 3D printer as the sensing principle. This method employs conductive and non-conductive flexible filaments to create a hierarchical structure with a specific infill pattern. The flexible hierarchical structure deforms under pressure, leading to a change in electrical resistance, enabling the acquisition of tactile information. We measure the changes in characteristics of the proposed tactile sensor caused by modifications to the hierarchical structure. Additionally, we demonstrate the fabrication and use of a multi-tile sensor. Furthermore, as applications, we implement motion pattern measurement on the sole of a foot, integration with a robotic hand, and tactile-based robotic operations. Through these experiments, we validate the effectiveness of the proposed tactile sensor.
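The sensing principle, pressure deforms the conductive infill and changes its resistance, implies a simple readout circuit. The sketch below models a generic piezoresistive sensor in a voltage divider; the component values and the exponential resistance-pressure relation are hypothetical placeholders, since the actual characteristic depends on the infill structure and must be measured, as the paper does for each hierarchical variant.

```python
import math

V_SUPPLY = 3.3        # volts across the divider (assumed)
R_FIXED = 10_000.0    # ohms, fixed divider resistor (assumed)

def sensor_resistance(pressure_kpa, r0=50_000.0, k=0.05):
    # Hypothetical monotone model: resistance decays as pressure
    # compresses the conductive hierarchical infill.
    return r0 * math.exp(-k * pressure_kpa)

def divider_voltage(r_sensor):
    # Sensor on top, fixed resistor to ground:
    # V_out = V_supply * R_fixed / (R_sensor + R_fixed)
    return V_SUPPLY * R_FIXED / (r_sensor + R_FIXED)

# Higher pressure -> lower resistance -> higher ADC-readable voltage
readings = {p: divider_voltage(sensor_resistance(p))
            for p in (0.0, 20.0, 50.0)}
```

A microcontroller ADC sampling `V_out` per tile is all the multi-tile sensor would need on the electronics side, which is consistent with the low-cost, easy-to-fabricate goal of the design.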