fabrication
Image2Gcode: Image-to-G-code Generation for Additive Manufacturing Using Diffusion-Transformer Model
Wang, Ziyue, Jadhav, Yayati, Pak, Peter, Farimani, Amir Barati
Mechanical design and manufacturing workflows conventionally begin with conceptual design, followed by the creation of a computer-aided design (CAD) model and fabrication through material-extrusion (MEX) printing. This process requires converting CAD geometry into machine-readable G-code through slicing and path planning. While each step is well established, dependence on CAD modeling remains a major bottleneck: constructing object-specific 3D geometry is slow and poorly suited to rapid prototyping. Even minor design variations typically necessitate manual updates in CAD software, making iteration time-consuming and difficult to scale. To address this limitation, we introduce Image2Gcode, an end-to-end data-driven framework that bypasses the CAD stage and generates printer-ready G-code directly from images and part drawings. Instead of relying on an explicit 3D model, a hand-drawn or captured 2D image serves as the sole input. The framework first extracts slice-wise structural cues from the image and then employs a denoising diffusion probabilistic model (DDPM) over G-code sequences. Through iterative denoising, the model transforms Gaussian noise into executable print-move trajectories with corresponding extrusion parameters, establishing a direct mapping from visual input to native toolpaths. By producing structured G-code directly from 2D imagery, Image2Gcode eliminates the need for CAD or STL intermediates, lowering the entry barrier for additive manufacturing and accelerating the design-to-fabrication cycle. This approach supports on-demand prototyping from simple sketches or visual references and integrates with upstream 2D-to-3D reconstruction modules to enable an automated pipeline from concept to physical artifact. The result is a flexible, computationally efficient framework that advances accessibility in design iteration, repair workflows, and distributed manufacturing.
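Image2Gcode's sampler is not published with the abstract; the sketch below runs the standard DDPM reverse process the abstract describes, applied to a flat vector standing in for a short sequence of print moves. The linear beta schedule, the `toy_eps` stand-in noise predictor, and the six-value output are illustrative assumptions, not the authors' trained model.

```python
import math
import random

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule with cumulative alpha products, as in DDPM."""
    betas = [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]
    alphas = [1.0 - b for b in betas]
    alpha_bars, prod = [], 1.0
    for a in alphas:
        prod *= a
        alpha_bars.append(prod)
    return betas, alphas, alpha_bars

def ddpm_sample(eps_model, dim, T=50, seed=0):
    """Reverse diffusion: start from Gaussian noise, iteratively denoise."""
    rng = random.Random(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    for t in reversed(range(T)):
        eps = eps_model(x, t)
        coef = (1.0 - alphas[t]) / math.sqrt(1.0 - alpha_bars[t])
        mean = [(xi - coef * ei) / math.sqrt(alphas[t]) for xi, ei in zip(x, eps)]
        if t > 0:
            sigma = math.sqrt(betas[t])
            x = [m + sigma * rng.gauss(0.0, 1.0) for m in mean]
        else:
            x = mean
    return x

# Placeholder noise predictor standing in for the trained diffusion
# transformer: it simply nudges samples toward zero.
toy_eps = lambda x, t: [0.5 * xi for xi in x]
trajectory = ddpm_sample(toy_eps, dim=6)  # e.g. three (x, y) print moves
```

In the real system, `eps_model` would be the trained transformer conditioned on image features, and the sampled vector would decode to G1 moves with extrusion parameters.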
Monolithic Units: Actuation, Sensing, and Simulation for Integrated Soft Robot Design
Exley, Trevor, Nardin, Anderson Brazil, Trunin, Petr, Cafiso, Diana, Beccai, Lucia
This work introduces the Monolithic Unit (MU), an actuator-lattice-sensor building block for soft robotics. The MU integrates pneumatic actuation, a compliant lattice envelope, and candidate sites for optical waveguide sensing into a single printed body. In order to study reproducibility and scalability, a parametric design framework establishes deterministic rules linking actuator chamber dimensions to lattice unit cell size. Experimental homogenization of lattice specimens provides effective material properties for finite element simulation. Within this simulation environment, sensor placement is treated as a discrete optimization problem, where a finite set of candidate waveguide paths derived from lattice nodes is evaluated by introducing local stiffening, and the configuration minimizing deviation from baseline mechanical response is selected. Optimized models are fabricated and experimentally characterized, validating the preservation of mechanical performance while enabling embedded sensing. The workflow is further extended to scaled units and a two-finger gripper, demonstrating generality of the MU concept. This approach advances monolithic soft robotic design by combining reproducible co-design rules with simulation-informed sensor integration.
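The sensor-placement search described above can be illustrated as a brute-force scan over a finite candidate set. This is a minimal sketch under assumed names: the real objective is evaluated with finite element simulation, not the closed-form stiffness surrogate used here.

```python
# Each candidate waveguide path stiffens a set of lattice nodes; pick the
# candidate whose simulated response deviates least from the baseline.
def deflection(stiffened_nodes, load=1.0):
    """Toy surrogate for the FE model: deflection drops as stiffness is added."""
    penalty = 0.15 * len(stiffened_nodes) + 0.05 * sum(stiffened_nodes)
    return load / (1.0 + penalty)

def best_placement(candidates):
    """Discrete optimization: minimize deviation from the baseline response."""
    baseline = deflection(frozenset())
    return min(candidates, key=lambda c: abs(deflection(c) - baseline))

# Hypothetical candidate paths, each named by the lattice nodes it stiffens.
candidates = [frozenset({0, 1}), frozenset({2}), frozenset({1, 3, 5})]
choice = best_placement(candidates)
```

The single-node path wins here because it perturbs the baseline mechanics least, which mirrors the selection criterion stated in the abstract.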
Human-AI Co-Embodied Intelligence for Scientific Experimentation and Manufacturing
Lin, Xinyi, Zhang, Yuyang, Gan, Yuanhang, Chen, Juntao, Shen, Hao, He, Yichun, Li, Lijun, Yuan, Ze, Wang, Shuang, Wang, Chaohao, Zhang, Rui, Li, Na, Liu, Jia
Scientific experimentation and manufacturing rely on complex, multi-step procedures that demand continuous human expertise for precise execution and decision-making. Despite advances in machine learning and automation, conventional models remain confined to virtual domains, while real-world experimentation and manufacturing still rely on human supervision and expertise. This gap between machine intelligence and physical execution limits reproducibility, scalability, and accessibility across scientific and manufacturing workflows. Here, we introduce human-AI co-embodied intelligence, a new form of physical AI that unites human users, agentic AI, and wearable hardware into an integrated system for real-world experimentation and intelligent manufacturing. In this paradigm, humans provide precise execution and control, while agentic AI contributes memory, contextual reasoning, adaptive planning, and real-time feedback. The wearable interface continuously captures the experimental and manufacturing processes and facilitates seamless communication between humans and AI for corrective guidance and interpretable collaboration. As a demonstration, we present the Agentic-Physical Experimentation (APEX) system, coupling agentic reasoning with physical execution through mixed reality. APEX observes and interprets human actions, aligns them with standard operating procedures, provides 3D visual guidance, and analyzes every step. Implemented in a cleanroom for flexible-electronics fabrication, the APEX system achieves context-aware reasoning with accuracy exceeding that of general multimodal large language models, corrects errors in real time, and transfers expertise to beginners. These results establish a new class of agentic-physical-human intelligence that extends agentic reasoning beyond computation into the physical domain, transforming scientific research and manufacturing into autonomous, traceable, interpretable, and scalable processes.
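One generic way to align observed actions with a standard operating procedure, as the abstract describes, is a longest-common-subsequence alignment that reports SOP steps with no matching observation. The cleanroom step names and the LCS choice below are hypothetical illustrations, not the APEX implementation.

```python
def missed_steps(sop, observed):
    """Align observed actions to SOP steps via longest common subsequence,
    then report SOP steps that no observation was aligned to."""
    n, m = len(sop), len(observed)
    lcs = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            lcs[i + 1][j + 1] = (lcs[i][j] + 1 if sop[i] == observed[j]
                                 else max(lcs[i][j + 1], lcs[i + 1][j]))
    # Trace back through the DP table to recover which steps matched.
    matched, i, j = set(), n, m
    while i > 0 and j > 0:
        if sop[i - 1] == observed[j - 1]:
            matched.add(i - 1)
            i -= 1
            j -= 1
        elif lcs[i - 1][j] >= lcs[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return [s for k, s in enumerate(sop) if k not in matched]

# Hypothetical lithography SOP and an observed run that skips one step.
sop = ["clean wafer", "spin coat", "soft bake", "expose", "develop"]
seen = ["clean wafer", "spin coat", "expose", "develop"]
skipped = missed_steps(sop, seen)  # the "soft bake" step was skipped
```

A real system would match noisy action recognitions to steps with a similarity threshold rather than exact string equality.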
Hallucination Benchmark for Speech Foundation Models
Koudounas, Alkis, La Quatra, Moreno, Giollo, Manuel, Siniscalchi, Sabato Marco, Baralis, Elena
Hallucinations in automatic speech recognition (ASR) systems refer to fluent and coherent transcriptions produced by neural ASR models that are completely unrelated to the underlying acoustic input (i.e., the speech signal). While similar to conventional decoding errors in potentially compromising the usability of transcriptions for downstream applications, hallucinations can be more detrimental due to their preservation of syntactically and semantically plausible structure. This apparent coherence can mislead subsequent processing stages and introduce serious risks, particularly in critical domains such as healthcare and law. Conventional evaluation metrics are primarily centered on error-based metrics and fail to distinguish between phonetic inaccuracies and hallucinations. Consequently, there is a critical need for new evaluation frameworks that can effectively identify and assess models with a heightened propensity for generating hallucinated content. To this end, we introduce SHALLOW, the first benchmark framework that systematically categorizes and quantifies hallucination phenomena in ASR along four complementary axes: lexical, phonetic, morphological, and semantic. We define targeted metrics within each category to produce interpretable profiles of model behavior. Through evaluation across various architectures and speech domains, we have found that SHALLOW metrics correlate strongly with word error rate (WER) when recognition quality is high (i.e., low WER). Still, this correlation weakens substantially as WER increases. SHALLOW, therefore, captures fine-grained error patterns that WER fails to distinguish under degraded and challenging conditions. Our framework supports specific diagnosis of model weaknesses and provides feedback for model improvement beyond what aggregate error rates can offer.
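To see why error-based metrics alone cannot separate hallucinations from ordinary phonetic slips, compare word error rate with a simple lexical-overlap score on two transcripts. This toy pair of metrics only illustrates the motivation and is far coarser than SHALLOW's four-axis profiles; the example sentences are invented.

```python
def wer(ref, hyp):
    """Word error rate via Levenshtein distance over whitespace tokens."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def lexical_overlap(ref, hyp):
    """Jaccard overlap of word sets; near zero for hallucinated content."""
    r, h = set(ref.split()), set(hyp.split())
    return len(r & h) / max(len(r | h), 1)

ref = "turn left at the next junction"
phonetic_err = "turn left at the next function"          # one near-miss word
hallucinated = "please confirm your account details now"  # fluent but unrelated
```

Both error types raise WER, but only the overlap score exposes that the hallucinated transcript shares nothing with the audio, which is the distinction SHALLOW formalizes across lexical, phonetic, morphological, and semantic axes.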
Hierarchical Discrete Lattice Assembly: An Approach for the Digital Fabrication of Scalable Macroscale Structures
Smith, Miana, Richard, Paul Arthur, Kyaw, Alexander Htet, Gershenfeld, Neil
Although digital fabrication processes at the desktop scale have become proficient and prolific, systems aimed at producing larger-scale structures are still typically complex, expensive, and unreliable. In this work, we present an approach for the fabrication of scalable macroscale structures using simple robots and interlocking lattice building blocks. A target structure is first voxelized so that it can be populated with an architected lattice. These voxels are then grouped into larger interconnected blocks, which are produced using standard digital fabrication processes, leveraging their capability to produce highly complex geometries at a small scale. These blocks, on the size scale of tens of centimeters, are then fed to mobile relative robots that are able to traverse over the structure and place new blocks to form structures on the meter scale. To facilitate the assembly of large structures, we introduce a live digital twin simulation tool for controlling and coordinating assembly robots that enables both global planning for a target structure and live user design, interaction, or intervention. To improve assembly throughput, we introduce a new modular assembly robot, designed for hierarchical voxel handling. We validate this system by demonstrating the voxelization, hierarchical blocking, path planning, and robotic fabrication of a set of meter-scale objects.
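The voxelization and hierarchical blocking steps can be sketched as follows. The spherical target, voxel pitch, and 2x2x2 grouping are illustrative assumptions, not the paper's parameters.

```python
from itertools import product

def voxelize(inside, n, pitch):
    """Occupancy grid over an n^3 box: each cell center is tested
    against the target geometry's inside() predicate."""
    occ = set()
    for i, j, k in product(range(n), repeat=3):
        p = ((i + 0.5) * pitch, (j + 0.5) * pitch, (k + 0.5) * pitch)
        if inside(p):
            occ.add((i, j, k))
    return occ

def group_blocks(occ, b):
    """Group voxels into b^3 super-blocks; a block is kept (and would be
    fabricated as one interconnected part) if any of its voxels is set."""
    blocks = {}
    for i, j, k in occ:
        blocks.setdefault((i // b, j // b, k // b), set()).add((i, j, k))
    return blocks

# Toy target: a unit-radius sphere centered in a 2-unit cube, 0.25 pitch.
sphere = lambda p: sum((c - 1.0) ** 2 for c in p) <= 1.0
occ = voxelize(sphere, n=8, pitch=0.25)
blocks = group_blocks(occ, b=2)  # super-blocks fed to the assembly robots
```

In the paper's pipeline these blocks, tens of centimeters across, are then printed and handed to the mobile robots, whose placement order comes from the digital twin's path planner.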
Planning Jerk-Optimized Trajectory with Discrete-Time Constraints for Redundant Robots
Dai, Chengkai, Lefebvre, Sylvain, Yu, Kai-Ming, Geraedts, Jo M. P., Wang, Charlie C. L.
We present a method for effectively planning the motion trajectory of robots in manufacturing tasks, whose tool-paths are usually complex and carry a large number of discrete-time constraints as waypoints. Kinematic redundancy also exists in these robotic systems. Our trajectory planning method optimizes the jerk of motion during the fabrication process to improve fabrication quality. The method is based on a sampling strategy and consists of two major parts. After determining an initial path by graph search, a greedy algorithm optimizes the path by locally applying adaptive filters in regions with large jerk; the filtered result is then refined by numerical optimization. To achieve efficient computation, an adaptive sampling method is developed for learning a collision-indication function represented as a support-vector machine. Applications in robot-assisted 3D printing are given to demonstrate the functionality of our approach.
In robot-assisted manufacturing applications, robotic arms realize the motion of workpieces (or machining tools) specified as a sequence of waypoints that constrain the tool-tip positions and tool orientations. The required degrees of freedom (DOF) are often fewer than those of the robotic hardware (e.g., a robotic arm has 6 DOF); in particular, rotation of the workpiece around the tool axis can be arbitrary (see Figure 1 for an example). By exploiting this redundancy (many poses of the robotic arm can realize a given waypoint), the trajectory can be optimized for velocity, acceleration, and jerk performance in joint space. In addition, when fabricating complex models, each tool-path can contain a large number of waypoints, so it is crucial for a motion planning algorithm to compute a smooth, collision-free trajectory to improve fabrication quality. The time taken by planning should not significantly lengthen the total manufacturing time; ideally it remains hidden, since motions for one layer can be computed while the previous layer is printing. The method presented in this paper provides an efficient framework to tackle this problem. The framework has been thoroughly tested on our robot-assisted additive manufacturing system to demonstrate its effectiveness and can be generally applied to other robot-assisted manufacturing systems.
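The greedy local-filtering idea can be sketched with third finite differences for jerk and a moving average applied only where jerk exceeds a limit. The stencil width and filter are simplified assumptions; the paper's method additionally uses graph search, numerical optimization, and the SVM collision indicator.

```python
def discrete_jerk(q, dt):
    """Third finite difference of a sampled joint trajectory."""
    return [(q[i + 3] - 3 * q[i + 2] + 3 * q[i + 1] - q[i]) / dt ** 3
            for i in range(len(q) - 3)]

def smooth_high_jerk(q, dt, limit):
    """Greedy local filter: moving-average only the interior samples
    spanned by jerk stencils that exceed `limit`, leaving the rest of
    the path untouched."""
    jerks = discrete_jerk(q, dt)
    out = list(q)
    for i, j in enumerate(jerks):
        if abs(j) > limit:
            for k in range(max(1, i), min(len(q) - 1, i + 4)):
                out[k] = (q[k - 1] + q[k] + q[k + 1]) / 3.0
    return out

# A joint path with a single spike produces a large jerk that the
# local filter attenuates without touching the flat regions.
q = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
smoothed = smooth_high_jerk(q, dt=1.0, limit=0.5)
```

Filtering only flagged regions preserves waypoint accuracy elsewhere, which is why the paper pairs this local step with a subsequent numerical optimization pass.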
MELEGROS: Monolithic Elephant-inspired Gripper with Optical Sensors
Trunin, Petr, Cafiso, Diana, Nardin, Anderson Brazil, Exley, Trevor, Beccai, Lucia
The elephant trunk exemplifies a natural gripper where structure, actuation, and sensing are seamlessly integrated. Inspired by the distal morphology of the African elephant trunk, we present MELEGROS, a Monolithic ELEphant-inspired GRipper with Optical Sensors, emphasizing sensing as an intrinsic, co-fabricated capability. Unlike multi-material or tendon-based approaches, MELEGROS directly integrates six optical waveguide sensors and five pneumatic chambers into a pneumatically actuated lattice structure (12.5 mm cell size) using a single soft resin and one continuous 3D print. This eliminates mechanical mismatches between sensors, actuators, and body, reducing model uncertainty and enabling simulation-guided sensor design and placement. Only four iterations were required to achieve the final prototype, which features a continuous structure capable of elongation, compression, and bending while decoupling tactile and proprioceptive signals. MELEGROS (132 g) lifts more than twice its weight, performs bioinspired actions such as pinching, scooping, and reaching, and delicately grasps fragile items like grapes. The integrated optical sensors provide distinct responses to touch, bending, and chamber deformation, enabling multifunctional perception. MELEGROS demonstrates a new paradigm for soft robotics where fully embedded sensing and continuous structures inherently support versatile, bioinspired manipulation.
A Software-Only Post-Processor for Indexed Rotary Machining on GRBL-Based CNCs
Portugal, Pedro, Venghaus, Damian D., Lopez, Diego
Affordable desktop CNC routers are common in education, prototyping, and makerspaces, but most lack a rotary axis, limiting fabrication of rotationally symmetric or multi-sided parts. Existing solutions often require hardware retrofits, alternative controllers, or commercial CAM software, raising cost and complexity. This work presents a software-only framework for indexed rotary machining on GRBL-based CNCs. A custom post-processor converts planar toolpaths into discrete rotary steps, executed through a browser-based interface. While not equivalent to continuous 4-axis machining, the method enables practical rotary-axis fabrication using only standard, off-the-shelf mechanics, without firmware modification. By reducing technical and financial barriers, the framework expands access to multi-axis machining in classrooms, makerspaces, and small workshops, supporting hands-on learning and rapid prototyping.
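A post-processor of this kind can be sketched as a coordinate remap: unwrap the flat-stock Y coordinate onto a rotation angle for a cylinder of known radius. The `A` axis word, the regex rewrite, and the radius mapping are illustrative assumptions; the abstract does not specify how the actual post-processor encodes its indexed rotary steps.

```python
import math
import re

def wrap_y_to_a(gcode_lines, radius):
    """Remap flat-stock Y coordinates (mm) onto a rotary angle A (degrees),
    so a planar toolpath is executed around a cylinder of the given radius:
    arc length y = radius * angle, hence angle = y / radius."""
    out = []
    for line in gcode_lines:
        m = re.search(r"Y(-?\d+\.?\d*)", line)
        if m:
            y = float(m.group(1))
            a = math.degrees(y / radius)
            line = line[:m.start()] + f"A{a:.3f}" + line[m.end():]
        out.append(line)
    return out

# A quarter-turn on a 10 mm cylinder: y = 2*pi*10/4 = 15.708 mm -> 90 deg.
flat = ["G1 X10.0 Y15.708 F600", "G1 X20.0 Y0.0"]
wrapped = wrap_y_to_a(flat, radius=10.0)
```

In an indexed workflow, the rotary moves would additionally be quantized to the fixed set of step angles the browser interface exposes between planar cutting passes.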
Rapid Manufacturing of Lightweight Drone Frames Using Single-Tow Architected Composites
Khan, Md Habib Ullah, Deng, Kaiyue, Khan, Ismail Mujtaba, Fu, Kelvin
The demand for lightweight and high-strength composite structures is rapidly growing in aerospace and robotics, particularly for optimized drone frames. However, conventional composite manufacturing methods struggle to achieve complex 3D architectures for weight savings and rely on assembling separate components, which introduces weak points at the joints. Additionally, maintaining continuous fiber reinforcement remains challenging, limiting structural efficiency. In this study, we demonstrate lightweight Face-Centered Cubic (FCC) lattice-structured drone frames that achieve weight reduction and complex-topology fabrication through 3D Fiber Tethering (3DFiT) with a continuous single fiber tow, ensuring precise fiber alignment and eliminating the weak points associated with traditional composite assembly. Mechanical testing demonstrates that the fabricated drone frame exhibits a specific strength roughly four to eight times that of metal and thermoplastic counterparts, outperforming other conventional 3D printing methods. The drone frame weighs only 260 g, making it 10% lighter than the commercial DJI F450 frame, enhancing structural integrity and contributing to an extended flight time of three minutes, while flight testing confirms its stability and durability under operational conditions. The findings demonstrate the potential of single-tow lattice truss-based drone frames, with 3DFiT serving as a scalable and efficient manufacturing method.
Programming tension in 3D printed networks inspired by spiderwebs
Masmeijer, Thijs, Swain, Caleb, Hill, Jeff, Habtour, Ed
Each element in tensioned structural networks -- such as tensegrity, architectural fabrics, or medical braces/meshes -- requires a specific tension level to achieve and maintain the desired shape, stability, and compliance. These structures are challenging to manufacture, 3D print, or assemble because flattening the network during fabrication introduces multiplicative inaccuracies in the network's final tension gradients. This study overcomes this challenge by offering a fabrication algorithm for direct 3D printing of such networks with programmed tension gradients, an approach analogous to the spinning of spiderwebs. The algorithm: (i) defines the desired network and prescribes its tension gradients using the force density method; (ii) converts the network into an unstretched counterpart by numerically optimizing vertex locations toward target element lengths and converting straight elements into arcs to resolve any remaining error; and (iii) decomposes the network into printable toolpaths. Optional additional steps are: (iv) flattening curved 2D networks or 3D networks to ensure 3D printing compatibility; and (v) automatically resolving any unwanted crossings introduced by the flattening process. The proposed method is experimentally validated using 2D unit cells of viscoelastic filaments, where accurate tension gradients are achieved with an average element strain error of less than 1.0%. The method remains effective for networks with a minimum element length of 5.8 mm and a maximum element stress of 7.3 MPa. The method is used to demonstrate the fabrication of three complex cases: a flat spiderweb, a curved mesh, and a tensegrity system. The programmable tension gradient algorithm can be utilized to produce compact, integrated cable networks, enabling novel applications such as moment-exerting structures in medical braces and splints.
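Step (i) relies on the force density method, where each element carries a force density q = tension / length and node equilibrium becomes linear in the coordinates. For a single free node, equilibrium reduces to a q-weighted average of the neighboring anchor positions plus the external load term. The three-anchor example below is an illustrative sketch, not one of the paper's networks.

```python
def force_density_free_node(anchors, q, load=(0.0, 0.0)):
    """Force density method for one free node: with q_i = t_i / L_i on each
    element, the equilibrium equations sum(q_i * (a_i - x)) + f = 0 solve to
    x = (sum(q_i * a_i) + f) / sum(q_i), per coordinate."""
    Q = sum(q)
    x = (sum(qi * ax for qi, (ax, ay) in zip(q, anchors)) + load[0]) / Q
    y = (sum(qi * ay for qi, (ax, ay) in zip(q, anchors)) + load[1]) / Q
    return x, y

# Three fixed anchors; doubling q on the third element pulls the free
# node toward that anchor, which is how tension gradients are prescribed.
anchors = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
pos = force_density_free_node(anchors, q=[1.0, 1.0, 2.0])
```

For a full network, the same relation becomes a sparse linear system over all free nodes, and the resulting element lengths feed step (ii)'s unstretched-geometry optimization.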