
Computer-Aided Design as Language

Neural Information Processing Systems

Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. A component of all CAD models that is particularly difficult to make is the highly structured 2D sketch that lies at the heart of every 3D construction. In this work, we propose a machine learning model capable of automatically generating such sketches. In doing so, we pave the way for intelligent tools that help engineers create better designs with less effort. The core of our method combines a general-purpose language modeling technique with an off-the-shelf data serialization protocol. Additionally, we explore several extensions that give us finer control over the generation process. We show that our approach is flexible enough to accommodate the complexity of the domain and performs well for both unconditional synthesis and image-to-sketch translation.
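The "language modeling plus serialization" idea can be illustrated with a toy tokenizer that flattens sketch primitives into a flat token sequence suitable for a sequence model. The entity types, field order, and marker tokens below are assumptions for illustration only, not the paper's actual serialization protocol.

```python
# Hypothetical illustration: serialize 2D sketch primitives into a token
# sequence that a general-purpose language model could be trained on.
# The vocabulary ("<sketch>", "line", "<end_entity>", ...) is made up here.

def serialize_sketch(entities):
    """Flatten a list of sketch entities into a flat token sequence."""
    tokens = ["<sketch>"]
    for ent in entities:
        tokens.append(ent["type"])        # e.g. "line", "arc", "circle"
        for coord in ent["params"]:       # quantized coordinates
            tokens.append(str(int(round(coord))))
        tokens.append("<end_entity>")
    tokens.append("</sketch>")
    return tokens

# Two edges of a square, as line entities with (x0, y0, x1, y1) parameters.
square = [
    {"type": "line", "params": (0, 0, 10, 0)},
    {"type": "line", "params": (10, 0, 10, 10)},
]
print(serialize_sketch(square)[:6])  # ['<sketch>', 'line', '0', '0', '10', '0']
```

A trained model would then generate such token sequences autoregressively, and a matching deserializer would turn them back into sketch entities.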


GANGR: GAN-Assisted Scalable and Efficient Global Routing Parallelization

Jooshin, Hadi Khodaei, Partin-Vaisband, Inna

arXiv.org Artificial Intelligence

Abstract--Global routing is a critical stage in electronic design automation (EDA) that enables early estimation and optimization of the routability of modern integrated circuits with respect to congestion, power dissipation, and design complexity. Batching is a primary concern in top-performing global routers: it groups nets into manageable sets to enable parallel processing and efficient resource usage. This process improves memory usage, scalable parallelization on modern hardware, and routing congestion by controlling net interactions within each batch. However, conventional batching methods typically depend on heuristics that are computationally expensive and can lead to suboptimal results (oversized batches with conflicting nets, excessive batch counts degrading parallelization, and longer batch generation times), ultimately limiting scalability and efficiency. To address these limitations, a novel batching algorithm enhanced with Wasserstein generative adversarial networks (WGANs) is introduced in this paper, enabling more effective parallelization by generating fewer, higher-quality batches in less time. The proposed algorithm is tested on the latest ISPD'24 contest benchmarks, demonstrating up to 40% runtime reduction with only 0.002% degradation in routing quality compared to a state-of-the-art router.
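The batching objective the abstract describes can be sketched as a greedy grouping of nets so that no two nets in a batch conflict (here approximated by overlapping bounding boxes). This is only a minimal baseline to make the problem concrete; the WGAN component of GANGR, which learns to produce fewer and better batches, is not reproduced here.

```python
# Minimal sketch: group nets into conflict-free batches so each batch can be
# routed in parallel. Conflict is approximated as bounding-box overlap; real
# routers use richer congestion/interaction criteria.

def boxes_overlap(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return not (ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0)

def greedy_batches(nets):
    """nets: list of (x0, y0, x1, y1) bounding boxes -> list of batches."""
    batches = []
    for net in nets:
        for batch in batches:
            # Place the net in the first batch it does not conflict with.
            if all(not boxes_overlap(net, other) for other in batch):
                batch.append(net)
                break
        else:
            batches.append([net])  # no compatible batch: open a new one
    return batches

nets = [(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)]
print(len(greedy_batches(nets)))  # 2: the two overlapping nets are separated
```

Fewer batches mean fewer sequential parallelization rounds, which is exactly the quantity the learned batcher tries to minimize without introducing conflicting nets.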



LMM-IR: Large-Scale Netlist-Aware Multimodal Framework for Static IR-Drop Prediction

Ma, Kai, Wang, Zhen, He, Hongquan, Xu, Qi, Chen, Tinghuan, Geng, Hao

arXiv.org Artificial Intelligence

Abstract--Static IR drop analysis is a fundamental and critical task in chip design. Nevertheless, this process can be quite time-consuming, potentially requiring several hours. Moreover, fixing IR drop violations frequently demands iterative analysis, compounding the computational burden. Fast and accurate IR drop prediction is therefore vital for reducing the overall time invested in chip design. In this paper, we propose a novel multimodal approach that efficiently processes SPICE files through a large-scale netlist transformer (LNT). Our key innovation is representing and processing netlist topology as a 3D point cloud, enabling efficient handling of netlists with hundreds of thousands to millions of nodes. All types of data, including netlist files and image data, are encoded into a latent space as features and fed into the model for static voltage drop prediction. This enables the integration of data from multiple modalities for complementary predictions. Experimental results demonstrate that our proposed algorithm achieves the best F1 score and the lowest MAE among the winning teams of the ICCAD 2023 contest and state-of-the-art algorithms.
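The "netlist topology as a 3D point cloud" representation can be made concrete with a toy conversion: each netlist node becomes one point whose coordinates are its placement location plus a metal-layer index, yielding an unordered point set a point-cloud transformer can consume. The node format and coordinate scheme below are assumptions, not the LNT's actual SPICE handling.

```python
# Hypothetical illustration: map netlist nodes to a 3D point cloud.
# Each node is assumed to carry (x, y, layer); a real pipeline would parse
# these from SPICE files and likely attach per-point features (R, current, ...).

def netlist_to_point_cloud(nodes):
    """nodes: dict of node name -> (x, y, layer). Returns a list of 3D points."""
    return [[float(x), float(y), float(layer)]
            for (x, y, layer) in nodes.values()]

nodes = {
    "n1": (0.0, 0.0, 1),   # power-grid node on metal layer 1
    "n2": (1.5, 0.2, 1),
    "n3": (0.7, 2.0, 3),   # via connection up to layer 3
}
cloud = netlist_to_point_cloud(nodes)
print(len(cloud), len(cloud[0]))  # 3 points, 3 coordinates each
```

Because a point cloud is permutation-invariant and has no fixed grid resolution, this representation sidesteps the size limits of image-only encodings and scales to very large node counts.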


Unitho: A Unified Multi-Task Framework for Computational Lithography

Jin, Qian, Liu, Yumeng, Jiang, Yuqi, Sun, Qi, Zhuo, Cheng

arXiv.org Artificial Intelligence

Abstract--Reliable, generalizable data foundations are critical for enabling large-scale models in computational lithography. However, essential tasks--mask generation, rule violation detection, and layout optimization--are often handled in isolation, hindered by scarce datasets and limited modeling approaches. To address these challenges, we introduce Unitho, a unified multi-task large vision model built upon the Transformer architecture. Trained on a large-scale industrial lithography simulation dataset with hundreds of thousands of cases, Unitho supports end-to-end mask generation, lithography simulation, and rule violation detection. As process nodes continue to shrink, geometric distortions induced by photolithography, such as optical proximity effects (OPE), pose a growing challenge to device performance and manufacturing yield. To ensure that design layouts are transferred to the wafer with high fidelity, optical proximity correction (OPC) and subsequent lithography verification have become indispensable steps in the chip design workflow [1]. However, the industry-standard physics-based simulation, while accurate, is computationally intensive and time-consuming, as shown in Figure 1. This bottleneck is severely exacerbated during process window (PW) analysis, which requires validating design robustness under variations in focus and exposure dose. Since simulations must be repeated across the entire process matrix, the resulting computational overhead significantly prolongs design iteration cycles and severely impedes early-stage Design-Technology Co-Optimization (DTCO), as shown in Figure 1.
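The process-window cost blow-up the abstract describes is simple multiplication: one full simulation per (focus, dose) corner of the process matrix. The step counts and per-run time below are made-up numbers purely to illustrate the scaling.

```python
# Back-of-the-envelope illustration of process-window (PW) analysis cost:
# every (focus, dose) combination requires its own physics-based simulation.
# All numbers here are hypothetical.

focus_steps = [-50, 0, 50]       # nm of defocus swept
dose_steps = [0.95, 1.00, 1.05]  # relative exposure dose swept
runs = len(focus_steps) * len(dose_steps)

minutes_per_run = 20             # assumed cost of one rigorous simulation
total_minutes = runs * minutes_per_run
print(runs, "simulation runs,", total_minutes, "minutes total")  # 9 runs, 180 minutes
```

Even a modest 3x3 matrix multiplies the single-condition cost ninefold, which is why replacing or accelerating the simulator with a learned model pays off most during PW analysis and early DTCO loops.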