LightCode: Compiling LLM Inference for Photonic-Electronic Systems
Ryan Tomich, Zhizhen Zhong, Dirk Englund
–arXiv.org Artificial Intelligence
The growing demand for low-latency, energy-efficient inference in large language models (LLMs) has catalyzed interest in heterogeneous architectures. While GPUs remain dominant, they are poorly suited for integration with emerging domain-specific accelerators like Photonic Tensor Units (PTUs), which offer low-power, high-throughput linear computation. This motivates hybrid compilation strategies that combine photonic and electronic resources. We present LightCode, a compiler framework and simulator for mapping LLM inference workloads across hybrid photonic-electronic systems. LightCode introduces the Stacked Graph, an intermediate representation that encodes multiple hardware-specific realizations of each tensor operation. Hardware assignment is formulated as a constrained subgraph selection problem optimized for latency or energy under parametric cost models. We evaluate LightCode on the prefill stage of GPT-2 and Llama-7B, showing that under our workload and hardware assumptions, (i) photonic hardware reduced energy by up to 50% in our simulated workloads at maximum sequence length; (ii) multiplexing and assignment strategy yielded latency improvements exceeding 10x; and (iii) optimizing for latency or energy resulted in distinct hardware mappings in our simulations. LightCode offers a modular, foundational framework and simulator for compiling LLMs to emerging photonic accelerators.
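The Stacked Graph and subgraph-selection idea described above can be sketched in miniature: each tensor op carries several hardware-specific realizations, and compilation picks one per op to minimize a chosen cost. All op names, hardware labels, and cost numbers below are illustrative assumptions, not values from the paper, and the sketch ignores data-conversion costs between devices that the real constrained selection would model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Realization:
    hardware: str   # e.g. "GPU" or "PTU" (labels are illustrative)
    latency: float  # illustrative cost units
    energy: float   # illustrative cost units

# A toy stacked graph: op name -> candidate hardware-specific realizations.
# Nonlinear ops (softmax) get only an electronic realization here.
stacked_graph = {
    "qkv_matmul": [Realization("GPU", 1.0, 5.0), Realization("PTU", 1.4, 1.5)],
    "softmax":    [Realization("GPU", 0.3, 0.6)],
    "out_matmul": [Realization("GPU", 1.2, 6.0), Realization("PTU", 1.6, 1.8)],
}

def assign(graph, objective):
    """Per-op selection minimizing `objective` (a greedy stand-in for
    the paper's constrained subgraph selection; it ignores inter-device
    conversion costs)."""
    return {op: min(cands, key=objective) for op, cands in graph.items()}

latency_plan = assign(stacked_graph, lambda r: r.latency)
energy_plan  = assign(stacked_graph, lambda r: r.energy)
```

With these made-up costs the two objectives disagree: the latency plan keeps the matmuls on the GPU, while the energy plan moves them to the PTU, mirroring the abstract's finding (iii) that latency- and energy-optimal mappings differ.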
Sep-23-2025