Collaborating Authors: Zhang, Peiyu


HDLCoRe: A Training-Free Framework for Mitigating Hallucinations in LLM-Generated HDL

arXiv.org Artificial Intelligence

Recent advances in large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, when applied to hardware description languages (HDL), these models exhibit significant limitations due to data scarcity, resulting in hallucinations and incorrect code generation. To address these challenges, we propose HDLCoRe, a training-free framework that enhances LLMs' HDL generation capabilities through prompt engineering techniques and retrieval-augmented generation (RAG). Our approach consists of two main components: (1) an HDL-aware Chain-of-Thought (CoT) prompting technique with self-verification that classifies tasks by complexity and type, incorporates domain-specific knowledge, and guides LLMs through step-by-step self-simulation for error correction; and (2) a two-stage heterogeneous RAG system that addresses formatting inconsistencies through key component extraction and efficiently retrieves relevant HDL examples through sequential filtering and re-ranking. HDLCoRe eliminates the need for model fine-tuning while substantially improving LLMs' HDL generation capabilities. Experimental results demonstrate that our framework achieves superior performance on the RTLLM2.0 benchmark.

With the rapid advancement of semiconductor technology, the design of very large-scale integration (VLSI) has become increasingly vital across industries Huang et al. (2021). Hardware description language (HDL) code, as the foundation of VLSI design, plays a critical role in defining the circuit architecture and functionality Palnitkar (2003). In recent years, large language models (LLMs) have experienced explosive growth and demonstrated extraordinary capabilities in many aspects Kanakaris et al. (2025); Li et al. (2025), especially in automated code generation Brown et al. (2020); Chen et al. (2021).
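To make the pipeline concrete, here is a minimal sketch of a training-free CoT-plus-RAG loop of the kind the abstract describes. It is illustrative only: the `llm` callable, the corpus schema, and the keyword-overlap re-ranker are placeholder assumptions, not HDLCoRe's actual components.

```python
from typing import Callable

def hdl_generate(spec: str,
                 llm: Callable[[str], str],
                 corpus: list[dict],
                 max_rounds: int = 2) -> str:
    """Training-free HDL generation: two-stage retrieval, then CoT prompting
    with iterative self-verification. Placeholder logic throughout."""
    # RAG stage 1: extract key components and coarse-filter the corpus on them.
    key = llm("Extract module name, ports, and behavior keywords:\n" + spec)
    tokens = key.split()
    coarse = [ex for ex in corpus
              if any(tok in ex["key_components"] for tok in tokens)]
    # RAG stage 2: re-rank survivors by overlap (stand-in for a real re-ranker).
    shots = sorted(coarse,
                   key=lambda ex: -sum(tok in ex["code"] for tok in tokens))[:3]
    # HDL-aware CoT prompt: classify the task, then plan before writing code.
    prompt = ("Classify this HDL task by complexity and type, then write "
              "Verilog step by step.\n"
              "Reference examples:\n" + "\n".join(ex["code"] for ex in shots)
              + "\nTask:\n" + spec)
    code = llm(prompt)
    for _ in range(max_rounds):
        # Self-verification: the model simulates its own code and reports errors.
        report = llm("Simulate this Verilog on simple inputs; list errors:\n" + code)
        if "no errors" in report.lower():
            break
        code = llm("Fix these errors in the code:\n" + report + "\n" + code)
    return code
```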


ClimateLLM: Efficient Weather Forecasting via Frequency-Aware Large Language Models

arXiv.org Artificial Intelligence

Weather forecasting is crucial for public safety, disaster prevention and mitigation, agricultural production, and energy management, with global relevance. Although deep learning has significantly advanced weather prediction, current methods face critical limitations: (i) they often struggle to capture both dynamic temporal dependencies and short-term abrupt changes, making extreme weather modeling difficult; (ii) they incur high computational costs due to extensive training and resource requirements; (iii) they have limited adaptability to multi-scale frequencies, leading to challenges when separating global trends from local fluctuations. To address these issues, we propose ClimateLLM, a foundation model for weather forecasting. It captures spatiotemporal dependencies via a cross-temporal and cross-spatial collaborative modeling framework that integrates Fourier-based frequency decomposition with Large Language Models (LLMs) to strengthen spatial and temporal modeling. Our framework uses a Mixture-of-Experts (MoE) mechanism that adaptively processes different frequency components, enabling efficient handling of both global signals and localized extreme events. In addition, we introduce a cross-temporal and cross-spatial dynamic prompting mechanism, allowing LLMs to incorporate meteorological patterns across multiple scales effectively. Extensive experiments on real-world datasets show that ClimateLLM outperforms state-of-the-art approaches in both accuracy and efficiency, offering a scalable solution for global weather forecasting.

For almost half a century, numerical weather prediction (NWP) methods that rely on solving atmospheric partial differential equations have formed the backbone of operational forecasting Kalnay (2002); Lynch (2008); Bauer et al. (2015); Nguyen et al. (2024).
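As a rough illustration of the frequency-aware idea, the toy module below splits a temporal signal into low- and high-frequency bands with an FFT and routes each band to a separate expert. It is a sketch under assumptions: ClimateLLM's experts sit inside a pretrained LLM backbone with dynamic prompting, whereas here they are plain linear layers.

```python
import torch
import torch.nn as nn

class FrequencyMoE(nn.Module):
    """Toy frequency-aware expert routing: FFT-decompose a series, process
    low and high bands with separate experts, recombine. Illustrative only."""
    def __init__(self, seq_len: int, cutoff: int = 4):
        super().__init__()
        self.cutoff = cutoff                    # band split point in FFT bins
        n_bins = seq_len // 2 + 1               # length of rfft output
        self.low_expert = nn.Linear(2 * n_bins, 2 * n_bins)   # global trends
        self.high_expert = nn.Linear(2 * n_bins, 2 * n_bins)  # local extremes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) univariate series, e.g. temperature at one cell
        spec = torch.fft.rfft(x, dim=-1)
        low, high = spec.clone(), spec.clone()
        low[..., self.cutoff:] = 0              # keep slow components only
        high[..., :self.cutoff] = 0             # keep fast components only

        def run(expert: nn.Linear, s: torch.Tensor) -> torch.Tensor:
            # Experts act on real/imaginary parts stacked as real features.
            out = expert(torch.cat([s.real, s.imag], dim=-1))
            re, im = out.chunk(2, dim=-1)
            return torch.complex(re, im)

        mixed = run(self.low_expert, low) + run(self.high_expert, high)
        return torch.fft.irfft(mixed, n=x.shape[-1], dim=-1)

model = FrequencyMoE(seq_len=64)
y = model(torch.randn(8, 64))                   # (8, 64) predicted series
```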


End-to-End Learning Framework for Solving Non-Markovian Optimal Control

arXiv.org Artificial Intelligence

Integer-order calculus often falls short in capturing the long-range dependencies and memory effects found in many real-world processes. Fractional calculus addresses these gaps via fractional-order integrals and derivatives, but fractional-order dynamical systems pose substantial challenges in system identification and optimal control due to the lack of standard control methodologies. In this paper, we theoretically derive the optimal control via linear quadratic regulator (LQR) for fractional-order linear time-invariant (FOLTI) systems and develop an end-to-end deep learning framework based on this theoretical foundation. Our approach establishes a rigorous mathematical model, derives analytical solutions, and incorporates deep learning to achieve data-driven optimal control of FOLTI systems. Our key contributions include: (i) proposing an innovative system identification method and control strategy for FOLTI systems, (ii) developing the first end-to-end data-driven learning framework, Fractional-Order Learning for Optimal Control (FOLOC), that learns control policies from observed trajectories, and (iii) deriving a theoretical analysis of sample complexity to quantify the number of samples required for accurate optimal control in complex real-world problems. Experimental results indicate that our method accurately approximates fractional-order system behaviors without relying on Gaussian noise assumptions, pointing to promising avenues for advanced optimal control.
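For readers unfamiliar with fractional-order dynamics, the sketch below simulates a FOLTI system D^alpha x(t) = A x(t) + B u(t) with the standard explicit Grünwald-Letnikov discretization. This is textbook numerics, not FOLOC itself; the learning framework operates on trajectories such as the one this produces.

```python
import numpy as np

def simulate_folti(A, B, u, alpha, h, x0):
    """Explicit Grünwald-Letnikov scheme for D^alpha x = A x + B u.
    The memory sum over all past states is what makes the dynamics
    non-Markovian: x_{k+1} depends on the entire trajectory so far."""
    n = A.shape[0]
    steps = len(u)
    # GL coefficients c_j = (-1)^j * binom(alpha, j), computed by recurrence.
    c = np.empty(steps + 1)
    c[0] = 1.0
    for j in range(1, steps + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    x = np.zeros((steps + 1, n))
    x[0] = x0
    for k in range(steps):
        # Weighted history term: the fractional derivative's memory.
        memory = sum(c[j] * x[k + 1 - j] for j in range(1, k + 2))
        x[k + 1] = h**alpha * (A @ x[k] + B @ u[k]) - memory
    return x

# Example: scalar system of order 0.7 under a constant unit input.
A = np.array([[-1.0]])
B = np.array([[1.0]])
u = np.ones((200, 1))
traj = simulate_folti(A, B, u, alpha=0.7, h=0.05, x0=np.array([0.0]))
```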


A structure-aware framework for learning device placements on computation graphs

arXiv.org Artificial Intelligence

Existing approaches for device placement ignore the topological features of computation graphs and rely mostly on heuristic methods for graph partitioning. At the same time, they either follow a grouper-placer or an encoder-placer architecture, which requires understanding the interaction structure between code operations. To bridge the gap between encoder-placer and grouper-placer techniques, we propose a novel framework for the task of device placement, relying on smaller computation graphs extracted from the OpenVINO toolkit using reinforcement learning. The framework consists of five steps, including graph coarsening, node representation learning and policy optimization. It facilitates end-to-end training and takes into consideration the directed and acyclic nature of the computation graphs. We also propose a model variant, inspired by graph parsing networks and complex network analysis, enabling graph representation learning and personalized graph partitioning jointly, using an unspecified number of groups. To train the entire framework, we utilize reinforcement learning techniques by employing the execution time of the suggested device placements to formulate the reward. We demonstrate the flexibility and effectiveness of our approach through multiple experiments with three benchmark models, namely Inception-V3, ResNet, and BERT. The robustness of the proposed framework is also highlighted through an ablation study. The suggested placements improve the inference speed for the benchmark models by up to 58.2% over CPU execution and by up to 60.24% compared to other commonly used baselines.
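A stripped-down version of the reward formulation can be written as a REINFORCE loop in which the policy assigns a device to every node and the negative execution time is the reward. Everything below (the two-device setup, the random node features, and `measure_time`) is an illustrative stand-in; the paper's framework additionally performs graph coarsening and GNN-based representation learning before placement.

```python
import torch
import torch.nn as nn

def measure_time(assignment: torch.Tensor) -> float:
    """Placeholder for running the partitioned graph and timing inference."""
    # Pretend device 1 is faster but cross-device edges cost communication.
    comm = (assignment[:-1] != assignment[1:]).float().sum().item()
    return (assignment == 0).float().sum().item() * 2.0 + comm * 0.5

n_nodes, n_devices, feat_dim = 16, 2, 8
features = torch.randn(n_nodes, feat_dim)        # stand-in node embeddings
policy = nn.Linear(feat_dim, n_devices)          # per-node device logits
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
baseline = None

for step in range(200):
    dist = torch.distributions.Categorical(logits=policy(features))
    assignment = dist.sample()                   # one device id per node
    reward = -measure_time(assignment)           # faster placement, higher reward
    # Moving-average baseline reduces the variance of the gradient estimate.
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline) * dist.log_prob(assignment).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```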


Topology-aware Tensor Decomposition for Meta-graph Learning

arXiv.org Artificial Intelligence

Heterogeneous graphs generally refer to graphs with different types of nodes and edges. A common approach for extracting useful information from heterogeneous graphs is to use meta-graphs, which can be seen as a special kind of directed acyclic graph (DAG) with the same node and edge types as the heterogeneous graph. However, how to design proper meta-graphs is challenging. Recently, there have been many works on learning suitable meta-graphs from a heterogeneous graph. Existing methods generally introduce continuous weights for edges that are independent of each other, which ignores the topological structure of meta-graphs and can be ineffective. To address this issue, we propose a new viewpoint on learning meta-graphs from a tensor perspective. Such a viewpoint not only helps interpret the limitation of existing works through CANDECOMP/PARAFAC (CP) decomposition, but also inspires us to propose a topology-aware tensor decomposition, called TENSUS, that reflects the structure of DAGs. The proposed topology-aware tensor decomposition is easy to use and simple to implement, and it can serve as a plug-in component to upgrade many existing works, including node classification and recommendation on heterogeneous graphs. Experimental results on different tasks demonstrate that the proposed method can significantly improve on the state of the art for all these tasks.
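To see the tensor viewpoint concretely, the snippet below encodes a candidate meta-graph as a three-way weight tensor and reconstructs it from CP factors; a DAG mask keeps only edges from earlier to later nodes. The dimensions and random factors are illustrative assumptions, and this shows the plain CP baseline the paper critiques rather than the topology-aware TENSUS decomposition itself.

```python
import numpy as np

# A candidate meta-graph over L ordered nodes and E edge types is encoded as
# a weight tensor W[i, j, e]: the strength of edge type e from node i to j.
# Unconstrained per-edge weights correspond to a full tensor; a rank-R CP
# decomposition constrains W to a sum of R rank-1 factors.
L, E, R = 4, 3, 2
rng = np.random.default_rng(0)

U = rng.random((L, R))   # source-node factors (mode 1)
V = rng.random((L, R))   # target-node factors (mode 2)
T = rng.random((E, R))   # edge-type factors  (mode 3)

# CP reconstruction: W[i, j, e] = sum_r U[i, r] * V[j, r] * T[e, r]
W = np.einsum('ir,jr,er->ije', U, V, T)

# Respect the DAG ordering: only edges from earlier to later nodes survive.
mask = np.triu(np.ones((L, L)), k=1)[:, :, None]   # strict upper triangle, i < j
W_dag = W * mask
print(W_dag.shape)  # (4, 4, 3)
```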