Beyond Terabit/s Integrated Neuromorphic Photonic Processor for DSP-Free Optical Interconnects
Benshan Wang, Qiarong Xiao, Tengji Xu, Li Fan, Shaojie Liu, Jianji Dong, Junwen Zhang, Chaoran Huang
arXiv.org Artificial Intelligence (arXiv:2504.15044v1)
The rapid expansion of generative AI is driving unprecedented demand for high-performance computing. Training large-scale AI models now requires vast interconnected GPU clusters spanning multiple data centers. Multi-scale AI training and inference demand uniform, ultra-low-latency, and energy-efficient links so that massive numbers of GPUs can function as a single cohesive unit. However, traditional electrical and optical interconnects, which rely on conventional digital signal processors (DSPs) for signal-distortion compensation, are increasingly unable to meet these stringent requirements. To overcome these limitations, we present an integrated neuromorphic optical signal processor (OSP) that leverages deep reservoir computing to achieve DSP-free, all-optical, real-time processing. Experimentally, our OSP achieves a 1.6 Tbit/s data center interconnect (100 Gbaud PAM4 per lane) over 5 km of optical fiber in the C-band (equivalent to over 80 km of optical fiber in the O-band), far exceeding the reach of state-of-the-art DSP solutions, which are fundamentally constrained by chromatic dispersion. Simultaneously, it delivers a four-orders-of-magnitude reduction in processing latency and a three-orders-of-magnitude reduction in energy consumption. Unlike DSPs, whose latency grows with data rate, our OSP maintains consistent, ultra-low latency regardless of data-rate scaling, making it an ideal solution for future optical interconnects. Moreover, the OSP retains full optical-field information for better impairment compensation and adapts to various modulation formats, data rates, and wavelengths. Fabricated in a mature silicon photonic process, the OSP can be monolithically integrated with silicon photonic transceivers, enhancing the compactness and reliability of all-optical interconnects. This research provides a highly scalable, energy-efficient, and high-speed solution, paving the way for next-generation AI infrastructure.
Keywords: Photonic neural network, optical interconnect, AI infrastructure, data center

1 Introduction

The surging demand for artificial intelligence and machine learning (AI/ML), especially in generative AI, has driven unprecedented requirements for high-performance computing infrastructure.
Apr-22-2025