Generative AI-Driven Hierarchical Multi-Agent Framework for Zero-Touch Optical Networks
Zhang, Yao, Song, Yuchen, Li, Shengnan, Shi, Yan, Shen, Shikui, Tang, Xiongyan, Zhang, Min, Wang, Danshi
The rapid development of Generative Artificial Intelligence (GenAI) has catalyzed a transformative technological revolution across all walks of life. As the backbone of wideband communication, optical networks are expected to achieve high-level autonomous operation and zero-touch management to accommodate their expanding network scales and escalating transmission bandwidth. The integration of GenAI is regarded as a pivotal solution for realizing zero-touch optical networks. However, the lifecycle management of optical networks involves a multitude of tasks and necessitates seamless collaboration across multiple layers, which poses significant challenges to existing single-agent GenAI systems. In this paper, we propose a GenAI-driven hierarchical multi-agent framework designed to streamline multi-task autonomous execution for zero-touch optical networks. We present the architecture, implementation, and applications of this framework. A field-deployed mesh network is utilized to demonstrate three typical scenarios across the lifecycle of an optical network: quality of transmission estimation in the planning stage, dynamic channel adding/dropping in the operation stage, and system capacity increase in the upgrade stage. The case studies illustrate the capabilities of the multi-agent framework in multi-task allocation, coordination, execution, evaluation, and summarization. This work provides a promising approach for the future development of intelligent, efficient, and collaborative network management solutions, paving the way for more specialized and adaptive zero-touch optical networks.
- Workflow (0.70)
- Research Report > Promising Solution (0.34)
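The hierarchical allocate-execute-summarize pattern described in this abstract can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the agent names, the keyword-based router, and the echoed results are all assumptions standing in for GenAI-backed agents.

```python
# Minimal sketch of hierarchical task allocation: a coordinator routes
# lifecycle tasks to specialized worker agents and collects their results.
# Agent names and the keyword router are illustrative assumptions only.

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

    def execute(self, task):
        # A real agent would call a GenAI backend; here we just echo the task.
        return f"{self.name} completed '{task}'"

class Coordinator:
    def __init__(self, agents):
        self.agents = agents

    def allocate(self, task):
        # Route the task to the first agent whose skill keywords match it.
        for agent in self.agents:
            if any(skill in task for skill in agent.skills):
                return agent
        raise ValueError(f"no agent can handle: {task}")

    def run(self, tasks):
        # Execute each task with its allocated agent and summarize.
        return [self.allocate(t).execute(t) for t in tasks]

agents = [
    Agent("PlanningAgent", {"QoT", "estimation"}),
    Agent("OperationAgent", {"channel", "adding", "dropping"}),
    Agent("UpgradeAgent", {"capacity", "upgrade"}),
]
report = Coordinator(agents).run([
    "QoT estimation for new lightpath",
    "dynamic channel adding",
    "system capacity upgrade",
])
```

The three tasks mirror the paper's planning, operation, and upgrade scenarios; a real coordinator would also evaluate each agent's output before summarizing.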
Bridging Language Models and Formal Methods for Intent-Driven Optical Network Design
Bekri, Anis, Abane, Amar, Battou, Abdella, Bensalem, Saddek
Abstract--Intent-Based Networking (IBN) aims to simplify network management by enabling users to specify high-level goals that drive automated network design and configuration. However, translating informal natural-language intents into formally correct optical network topologies remains challenging due to inherent ambiguity and lack of rigor in Large Language Models (LLMs). To address this, we propose a novel hybrid pipeline that integrates LLM-based intent parsing, formal methods, and Optical Retrieval-Augmented Generation (RAG). By enriching design decisions with domain-specific optical standards and systematically incorporating symbolic reasoning and verification techniques, our pipeline generates explainable, verifiable, and trustworthy optical network designs.

Intent-Based Networking (IBN) simplifies network management by allowing users to express high-level objectives--such as connectivity, performance, or security--without specifying implementation details [1], [2]. Standardization bodies like TM Forum and the Internet Engineering Task Force define intent as a declarative statement of desired outcomes, delegating the detailed configuration and implementation tasks to automated systems. By abstracting away low-level complexities, IBN significantly reduces operational overhead, human error, and management complexity [2]. Existing research predominantly explores intent translation into configurations or incremental topology adjustments [3], [4], but largely overlooks the initial phase of comprehensive network design, particularly for optical networks. Poor initial design decisions can lead to significant performance degradation or expensive reconfigurations throughout the operational lifecycle [5], [6].
- North America > United States (0.14)
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.04)
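The parse-generate-verify loop this entry describes can be illustrated with a toy pipeline: a mocked intent parser (standing in for the LLM), a generator that proposes a ring topology, and a formal-style checker that verifies the design against the parsed spec. The spec fields, the ring generator, and the two checks are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative verify-after-generate sketch: mocked intent parsing, a ring
# topology proposal, and rule-based verification of the parsed spec.
# Spec fields and checks are invented for illustration only.

def parse_intent(text):
    # Stand-in for LLM parsing: extract node count and survivability need.
    words = text.lower().split()
    nodes = int(next(w for w in words if w.isdigit()))
    return {"nodes": nodes, "survivable": "survivable" in words}

def generate_topology(spec):
    # Propose a ring: it survives any single link failure.
    n = spec["nodes"]
    return [(i, (i + 1) % n) for i in range(n)]

def verify(spec, links):
    # Check 1: every node appears in at least one link.
    used = {u for link in links for u in link}
    if used != set(range(spec["nodes"])):
        return False
    # Check 2: survivability requires every node degree >= 2.
    if spec["survivable"]:
        degree = {}
        for u, v in links:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        if any(d < 2 for d in degree.values()):
            return False
    return True

spec = parse_intent("Design a survivable optical network with 5 nodes")
links = generate_topology(spec)
ok = verify(spec, links)
```

A real pipeline would replace the rule checks with a formal-methods backend and enrich the generator with retrieved optical standards, but the control flow is the same: no design is emitted unless verification passes.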
From Data to Decision: A Multi-Stage Framework for Class Imbalance Mitigation in Optical Network Failure Analysis
Ali, Yousuf Moiz, Prilepsky, Jaroslaw E., Sambo, Nicola, Pedro, Joao, Hosseini, Mohammad M., Napoli, Antonio, Turitsyn, Sergei K., Freire, Pedro
Machine learning-based failure management in optical networks has gained significant attention in recent years. However, severe class imbalance, where normal instances vastly outnumber failure cases, remains a considerable challenge. While pre- and in-processing techniques have been widely studied, post-processing methods are largely unexplored. In this work, we present a direct comparison of pre-, in-, and post-processing approaches for class imbalance mitigation in failure detection and identification using an experimental dataset. For failure detection, post-processing methods, particularly Threshold Adjustment, achieve the highest F1 score improvement (up to 15.3%), while Random Under-Sampling provides the fastest inference. In failure identification, GenAI methods deliver the most substantial performance gains (up to 24.2%), whereas post-processing shows limited impact in multi-class settings. When class overlap is present and latency is critical, over-sampling methods such as SMOTE are most effective; without latency constraints, Meta-Learning yields the best results. In low-overlap scenarios, Generative AI approaches provide the highest performance with minimal inference time.
- Europe > Italy > Tuscany > Pisa Province > Pisa (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > United Kingdom > England > West Midlands > Birmingham (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > United Kingdom > England > West Midlands > Birmingham (0.04)
- Europe > Portugal (0.04)
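The Threshold Adjustment post-processing idea highlighted in this entry is simple to show concretely: instead of the default 0.5 cutoff on predicted failure probabilities, sweep candidate thresholds on a validation set and keep the one that maximizes F1. The scores and labels below are toy data, not the paper's experimental dataset.

```python
# Minimal Threshold Adjustment sketch: pick the decision threshold that
# maximizes F1 on an imbalanced validation set (toy data for illustration).

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(scores, labels):
    # Sweep a grid of thresholds; keep the first F1-maximizing one.
    grid = [i / 100 for i in range(1, 100)]
    return max(grid, key=lambda t: f1_score(labels, [int(s >= t) for s in scores]))

# Imbalanced toy validation set: few failures (label 1) with low-ish scores.
scores = [0.05, 0.10, 0.08, 0.12, 0.30, 0.35, 0.40, 0.02]
labels = [0,    0,    0,    0,    1,    1,    1,    0]

t = best_threshold(scores, labels)
default_f1 = f1_score(labels, [int(s >= 0.5) for s in scores])
tuned_f1 = f1_score(labels, [int(s >= t) for s in scores])
```

Because the classifier is conservative on the rare class, the default 0.5 cutoff misses every failure here, while a lowered threshold recovers them; this is exactly why post-processing can lift F1 without retraining.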
Reinforcement Learning with Graph Attention for Routing and Wavelength Assignment with Lightpath Reuse
Doherty, Michael, Beghelli, Alejandra
Many works have investigated reinforcement learning (RL) for routing and spectrum assignment on flex-grid networks, but only one work to date has examined RL for fixed-grid networks with flex-rate transponders, despite production systems using this paradigm. Flex-rate transponders allow existing lightpaths to accommodate new services, a task we term routing and wavelength assignment with lightpath reuse (RWA-LR). We re-examine this problem and present a thorough benchmarking of heuristic algorithms for RWA-LR, which are shown to have 6% increased throughput when candidate paths are ordered by number of hops, rather than total length. We train an RL agent for RWA-LR with graph attention networks for the policy and value functions to exploit the graph-structured data. We provide details of our methodology and open source all of our code for reproduction. We outperform the previous state-of-the-art RL approach by 2.5% (17.4 Tbps mean additional throughput) and the best heuristic by 1.2% (8.5 Tbps mean additional throughput). This marginal gain highlights the difficulty in learning effective RL policies on long-horizon resource allocation tasks.
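The candidate-path ordering detail this abstract emphasizes (order by hop count, not total length) is easy to sketch: try k precomputed candidate paths in order and assign the first wavelength free on every hop (first-fit). The tiny topology, link names, and wavelength count below are illustrative assumptions, not the paper's benchmark setup.

```python
# RWA sketch: hop-ordered candidate paths with first-fit wavelength assignment.
# Topology, link names, and wavelength count are toy assumptions.

# Candidate paths between one node pair: (list of link ids, total length in km).
candidates = [
    (["A-B", "B-C"], 900.0),         # 2 hops, long links
    (["A-D", "D-E", "E-C"], 600.0),  # 3 hops, short links
]
NUM_WAVELENGTHS = 4
# wavelength_used[link][w] is True if wavelength w is occupied on that link.
wavelength_used = {link: [False] * NUM_WAVELENGTHS
                   for path, _ in candidates for link in path}
wavelength_used["A-B"][0] = True  # pre-existing traffic on one link

def first_fit(path):
    # Return the lowest wavelength index free on every link of the path.
    for w in range(NUM_WAVELENGTHS):
        if all(not wavelength_used[link][w] for link in path):
            return w
    return None  # blocked

def assign(candidates, sort_by_hops=True):
    # Order candidates by hop count (or by length), take the first that fits.
    order = sorted(candidates, key=(lambda c: len(c[0])) if sort_by_hops
                   else (lambda c: c[1]))
    for path, _ in order:
        w = first_fit(path)
        if w is not None:
            for link in path:
                wavelength_used[link][w] = True
            return path, w
    return None

path, w = assign(candidates)
```

With hop ordering, the 2-hop path is tried first even though it is longer in kilometers; hop ordering tends to consume fewer link-wavelength resources per service, which is the intuition behind the throughput gain reported above.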
Reinforcement Learning for Dynamic Resource Allocation in Optical Networks: Hype or Hope?
Doherty, Michael, Matzner, Robin, Sadeghi, Rasoul, Bayvel, Polina, Beghelli, Alejandra
The application of reinforcement learning (RL) to dynamic resource allocation in optical networks has been the focus of intense research activity in recent years, with almost 100 peer-reviewed papers. We present a review of progress in the field, and identify significant gaps in benchmarking practices and reproducibility. To determine the strongest benchmark algorithms, we systematically evaluate several heuristics across diverse network topologies. We find that path count and sort criteria for path selection significantly affect the benchmark performance. We meticulously recreate the problems from five landmark papers and apply the improved benchmarks. Our comparisons demonstrate that simple heuristics consistently match or outperform the published RL solutions, often with an order of magnitude lower blocking probability. Furthermore, we present empirical lower bounds on network blocking using a novel defragmentation-based method, revealing that potential improvements over the benchmark heuristics are limited to 19-36% increased traffic load for the same blocking performance in our examples. We make our simulation framework and results publicly available to promote reproducible research and standardized evaluation https://doi.org/10.5281/zenodo.12594495.
- Asia > China > Hong Kong (0.04)
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- Overview (1.00)
- Research Report > New Finding (0.46)
Non-linear Equalization in 112 Gb/s PONs Using Kolmogorov-Arnold Networks
Fischer, Rodrigo, Matalla, Patrick, Randel, Sebastian, Schmalen, Laurent
Passive optical networks (PONs) currently serve the majority of fiber broadband subscribers worldwide, and an ongoing demand for bandwidth has led to recent standardization efforts that enabled 50 Gb/s line rate transmission [1], while the research community is investigating the technologies that will enable PONs beyond 100 Gb/s [2]. One possibility for achieving 100 Gb/s is the use of higher-order modulation formats in intensity-modulation and direct-detection (IM/DD) links. However, this comes at the cost of an increased signal-to-noise ratio (SNR) requirement and lower tolerance to non-linearities in the channel. In a PON, the semiconductor optical amplifiers (SOAs) used to improve the receiver sensitivity suffer from non-linear gain saturation, and the electro-absorption modulator (EAM) responsible for modulating the intensity of the optical signal has a non-linear transfer function.
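The gain-saturation non-linearity described above can be made concrete with a toy memoryless model: tanh compression as an illustrative stand-in for SOA/EAM behaviour, together with its exact inverse. A learned equalizer such as the paper's Kolmogorov-Arnold network would approximate this inverse from data rather than know it in closed form.

```python
# Toy memoryless non-linearity (tanh compression, a stand-in for SOA/EAM
# saturation) and its ideal inverse. Levels and the saturation parameter
# are illustrative assumptions.

import math

def channel(x, sat=1.5):
    # Compressive non-linearity: gain drops as input amplitude grows.
    return sat * math.tanh(x / sat)

def nl_equalizer(y, sat=1.5):
    # Exact inverse of the toy model; a learned model approximates this.
    y_clipped = max(min(y / sat, 0.999999), -0.999999)
    return sat * math.atanh(y_clipped)

# Higher-order (PAM-4) levels distorted by the channel, then recovered.
levels = [-1.5, -0.5, 0.5, 1.5]
received = [channel(a) for a in levels]
recovered = [nl_equalizer(y) for y in received]
errors = [abs(a - r) for a, r in zip(levels, recovered)]
```

Note how the outer levels are compressed most, which is why higher-order formats are less tolerant: their eye openings shrink unevenly, and a purely linear equalizer cannot undo it.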
When Large Language Models Meet Optical Networks: Paving the Way for Automation
Wang, Danshi, Wang, Yidi, Jiang, Xiaotian, Zhang, Yao, Pang, Yue, Zhang, Min
Since the advent of GPT, large language models (LLMs) have brought about revolutionary advancements in all walks of life. As a superior natural language processing (NLP) technology, LLMs have consistently achieved state-of-the-art performance in numerous areas. However, LLMs are considered general-purpose models for NLP tasks, which may encounter challenges when applied to complex tasks in specialized fields such as optical networks. In this study, we propose a framework of LLM-empowered optical networks, facilitating intelligent control of the physical layer and efficient interaction with the application layer through an LLM-driven agent (AI-Agent) deployed in the control layer. The AI-Agent can leverage external tools and extract domain knowledge from a comprehensive resource library specifically established for optical networks. This is achieved through user input and well-crafted prompts, enabling the generation of control instructions and result representations for autonomous operation and maintenance in optical networks. To improve the LLM's capability in professional fields and unlock its potential for complex tasks, this study illustrates the details of performing prompt engineering, establishing a domain knowledge library, and implementing complex tasks. Moreover, the proposed framework is verified on two typical tasks: network alarm analysis and network performance optimization. The good response accuracies and semantic similarities across 2,400 test situations exhibit the great potential of LLMs in optical networks.
- Workflow (0.93)
- Overview (0.93)
- Research Report > New Finding (0.86)
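The domain-knowledge retrieval step described in this entry can be sketched as a tiny retriever feeding a prompt template: pick the most relevant library entry for the user's query and splice it into the prompt the agent sends to the LLM. The library entries, the keyword matcher, and the prompt template are invented for illustration; the actual LLM call is omitted.

```python
# Toy retrieval-augmented prompt construction for an optical-network agent.
# Library entries and the keyword matcher are illustrative assumptions.

KNOWLEDGE_LIBRARY = {
    "alarm": "LOS alarms on an OTU port usually indicate a fiber cut or "
             "transceiver failure upstream of the reporting node.",
    "osnr": "OSNR below the FEC threshold degrades pre-FEC BER; consider "
            "raising launch power or shortening the path.",
}

def retrieve(query):
    # Toy retriever: return the entry whose key appears in the query.
    q = set(query.lower().split())
    best = max(KNOWLEDGE_LIBRARY, key=lambda key: key in q)
    return KNOWLEDGE_LIBRARY[best] if best in q else ""

def build_prompt(query):
    # Well-crafted prompt: role, retrieved domain knowledge, task, format.
    context = retrieve(query)
    return (f"You are an optical-network operations assistant.\n"
            f"Domain knowledge: {context}\n"
            f"Task: {query}\n"
            f"Answer with a diagnosis and a recommended action.")

prompt = build_prompt("Analyze this alarm on node 7")
```

A production agent would use embedding-based retrieval over a full resource library and parse the LLM's reply into control instructions, but the prompt shape (role, retrieved context, task, output format) is the core of the prompt-engineering step.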
OpticGAI: Generative AI-aided Deep Reinforcement Learning for Optical Networks Optimization
Li, Siyuan, Lin, Xi, Liu, Yaju, Li, Gaolei, Li, Jianhua
Deep Reinforcement Learning (DRL) is regarded as a promising tool for optical network optimization. However, the flexibility and efficiency of current DRL-based solutions for optical network optimization require further improvement. Recently, generative models have showcased significant performance advantages across various domains. In this paper, we introduce OpticGAI, an AI-generated policy design paradigm for optical networks. In detail, it is implemented as a novel DRL framework that utilizes generative models to learn the optimal policy network. Furthermore, we assess the performance of OpticGAI on two NP-hard optical network problems, Routing and Wavelength Assignment (RWA) and dynamic Routing, Modulation, and Spectrum Allocation (RMSA), to show the feasibility of the AI-generated policy paradigm. Simulation results show that OpticGAI achieves the highest reward and the lowest blocking rate on both the RWA and RMSA problems. OpticGAI points to a promising direction for future research on generative AI-enhanced flexible optical network optimization.
- North America > United States > New York > New York County > New York City (0.14)
- Asia > China > Shanghai > Shanghai (0.06)
- Oceania > Australia > New South Wales > Sydney (0.05)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Generation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.72)
Multi-Step Traffic Prediction for Multi-Period Planning in Optical Networks
Maryam, Hafsa, Panayiotou, Tania, Ellinas, Georgios
A multi-period planning framework is proposed that exploits multi-step ahead traffic predictions to address service overprovisioning and improve adaptability to traffic changes, while ensuring the necessary quality-of-service (QoS) levels. An encoder-decoder deep learning model is initially leveraged for multi-step ahead prediction by analyzing real-traffic traces. This information is then exploited by multi-period planning heuristics to efficiently utilize available network resources while minimizing undesired service disruptions (caused by lightpath re-allocations), with these heuristics outperforming a single-step ahead prediction approach.

Network capacity demand is rapidly increasing, due to the emergence of new services and applications. To cope with this growing demand, the use of machine learning (ML) techniques for traffic-driven service provisioning has emerged as a promising solution to effectively model real-world traffic traces [1] and deal with the overprovisioning that is present in statically provisioned elastic optical networks (EONs) [2].
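The multi-step-ahead idea in this entry can be shown in miniature: a recursive forecaster feeds its own one-step predictions back in to roll out an H-step horizon, which a planning heuristic then provisions against once instead of re-planning every step. The mean-reverting AR(1) traffic model, its coefficient, and the peak-plus-margin planning rule are toy assumptions, not the paper's encoder-decoder network or heuristics.

```python
# Recursive multi-step traffic forecast feeding a one-shot capacity plan.
# The AR(1) model and margin rule are toy assumptions for illustration.

def one_step(x_prev, phi=0.9, mean=10.0):
    # One-step predictor for a mean-reverting AR(1) traffic model.
    return mean + phi * (x_prev - mean)

def multi_step(x0, horizon):
    # Recursive strategy: reuse the one-step model on its own outputs.
    preds = []
    x = x0
    for _ in range(horizon):
        x = one_step(x)
        preds.append(x)
    return preds

def plan_capacity(preds, margin=1.1):
    # Provision once for the horizon's predicted peak plus a QoS margin,
    # avoiding a re-allocation (and possible disruption) at every step.
    return margin * max(preds)

forecast = multi_step(x0=16.0, horizon=4)
capacity = plan_capacity(forecast)
```

Planning against the whole horizon trades a little extra capacity for fewer lightpath re-allocations, which is the disruption-minimization trade-off the heuristics above navigate.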