Peng, Zedong
DeepCircuitX: A Comprehensive Repository-Level Dataset for RTL Code Understanding, Generation, and PPA Analysis
Li, Zeju, Xu, Changran, Shi, Zhengyuan, Peng, Zedong, Liu, Yi, Zhou, Yunhao, Zhou, Lingfeng, Ma, Chengyu, Zhong, Jianyuan, Wang, Xi, Zhao, Jieru, Chu, Zhufei, Yang, Xiaoyan, Xu, Qiang
This paper introduces DeepCircuitX, a comprehensive repository-level dataset designed to advance RTL (Register Transfer Level) code understanding, generation, and power-performance-area (PPA) analysis. Unlike existing datasets that are limited to either file-level RTL code or physical layout data, DeepCircuitX provides a holistic, multilevel resource that spans repository-, file-, module-, and block-level RTL code. This structure enables more nuanced training and evaluation of large language models (LLMs) for RTL-specific tasks. DeepCircuitX is enriched with Chain of Thought (CoT) annotations, offering detailed descriptions of functionality and structure at multiple levels. These annotations enhance its utility for a wide range of tasks, including RTL code understanding, generation, and completion. Additionally, the dataset includes synthesized netlists and PPA metrics, facilitating early-stage design exploration and enabling accurate PPA prediction directly from RTL code. We demonstrate the dataset's effectiveness by fine-tuning various LLMs on it and confirm its quality with human evaluations. Our results highlight DeepCircuitX as a critical resource for advancing RTL-focused machine learning applications in hardware design automation. Our data is available at https://zeju.gitbook.io/lcm-team.
MPAX: Mathematical Programming in JAX
Lu, Haihao, Peng, Zedong, Yang, Jinwen
Mathematical programming has long served as a foundation across numerous fields, such as operations research, economics, and engineering, providing powerful, robust tools for optimization and decision-making. Recently, these techniques have also found significant applications in machine learning. Notable examples include data-driven decision making [9, 16], learning with physical constraints [8, 10], learning to rank [7], end-to-end planning and control [2], etc. The efficiency and effectiveness of these machine learning approaches depend largely on the rapid processing of large-scale datasets, facilitated by parallel hardware accelerators such as graphics processing units (GPUs). In contrast, traditional approaches to mathematical programming are not well suited for machine learning tasks. Broadly, there are two major paradigms for integrating mathematical programming with machine learning.
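To make the GPU-friendly angle concrete, here is a minimal sketch (not MPAX's actual API) of solving a tiny linear program in pure JAX with a primal-dual hybrid gradient (PDHG) iteration, a first-order method built entirely from matrix-vector products and projections, so it compiles and parallelizes well on accelerators. The problem instance and step sizes below are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def pdhg_lp(c, A, b, tau, sigma, iters):
    """Approximately solve  min c@x  s.t.  A@x = b,  x >= 0
    with a plain primal-dual hybrid gradient iteration.
    Step sizes should satisfy tau * sigma * ||A||^2 < 1."""
    def step(_, state):
        x, y = state
        # Primal step: gradient step on the Lagrangian, then project onto x >= 0.
        x_new = jnp.maximum(0.0, x - tau * (c + A.T @ y))
        # Dual step with the usual extrapolated primal iterate.
        y_new = y + sigma * (A @ (2.0 * x_new - x) - b)
        return x_new, y_new

    x0 = jnp.zeros(c.shape[0])
    y0 = jnp.zeros(b.shape[0])
    return jax.lax.fori_loop(0, iters, step, (x0, y0))

# Illustrative LP:  min -x1 - 2*x2  s.t.  x1 + x2 = 1,  x >= 0.
c = jnp.array([-1.0, -2.0])
A = jnp.array([[1.0, 1.0]])
b = jnp.array([1.0])
x, y = pdhg_lp(c, A, b, tau=0.4, sigma=0.4, iters=5000)
# x approaches (0, 1), with objective value approaching -2.
```

Because the whole loop is expressed in JAX primitives, it can be wrapped in `jax.jit`, vectorized over many LP instances with `jax.vmap`, or differentiated through, which is exactly the kind of integration with machine learning pipelines the abstract alludes to.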