A Novel Deep Reinforcement Learning Method for Computation Offloading in Multi-User Mobile Edge Computing with Decentralization
Long, Nguyen Chi, Van Chien, Trinh, Tung, Ta Hai, Nguyen, Van Son, Hoang, Trong-Minh, Dang, Nguyen Ngoc Hai
Mobile edge computing (MEC) allows devices with limited computational capabilities to offload computation-intensive workloads to nearby MEC servers. This paper studies how deep reinforcement learning (DRL) algorithms can be used in an MEC system to find feasible decentralized dynamic computation offloading strategies, leading to an extensible MEC system that operates effectively with limited feedback. Although the Deep Deterministic Policy Gradient (DDPG) algorithm, given each user's local knowledge of the MEC system, can allocate power between computation offloading and local execution and learn a computation offloading policy for each user independently, this solution still has some inherent weaknesses. Hence, we introduce a new approach based on the Twin Delayed DDPG (TD3) algorithm, which overcomes these weaknesses and lets us investigate cases where mobile users are portable. Numerical results show that individual users can autonomously learn adequate policies through the proposed approach, and that its performance exceeds the conventional DDPG-based power control strategy.
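The key mechanism that distinguishes TD3 from DDPG in the abstract above is its clipped double-Q target: smooth the target action with clipped noise, then bootstrap from the minimum of two critics. A minimal numerical sketch of that target computation, with toy linear critics standing in for the paper's critic networks (the state/action values and weights here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy critics Q1, Q2 as fixed linear functions of (state, action);
# hypothetical stand-ins for trained critic networks.
w1, w2 = np.array([0.5, 1.0]), np.array([0.6, 0.9])
q1 = lambda s, a: float(w1 @ np.array([s, a]))
q2 = lambda s, a: float(w2 @ np.array([s, a]))

def td3_target(reward, next_state, target_action, gamma=0.99,
               noise_std=0.2, noise_clip=0.5, a_low=-1.0, a_high=1.0):
    """Clipped double-Q target: perturb the target action with
    clipped Gaussian noise, then bootstrap from the minimum of the
    two critics -- the TD3 trick that curbs DDPG's overestimation."""
    noise = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    a = np.clip(target_action + noise, a_low, a_high)
    return reward + gamma * min(q1(next_state, a), q2(next_state, a))

y = td3_target(reward=1.0, next_state=0.5, target_action=0.3)
```

Taking the pessimistic minimum over two critics is what stabilizes power-control policies relative to plain DDPG, whose single critic tends to overestimate action values.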
Cooperative Task Offloading through Asynchronous Deep Reinforcement Learning in Mobile Edge Computing for Future Networks
Liu, Yuelin, Li, Haiyuan, Vasilakos, Xenofon, Hussain, Rasheed, Simeonidou, Dimitra
High Performance Networks (HPN) Research Group, Smart Internet Lab, University of Bristol, Bristol, UK
Abstract--Future networks (including 6G) are poised to accelerate the realisation of the Internet of Everything, which will imply a high demand for computational resources to support new services. Mobile Edge Computing (MEC) is a promising solution that enables offloading computation-intensive tasks from end-user devices to nearby edge servers, thereby reducing latency and energy consumption. Nevertheless, relying solely on a single MEC server for task offloading can lead to uneven resource utilisation and suboptimal performance in complex scenarios. Additionally, traditional task offloading strategies rely on centralised policy decisions, which unavoidably entail high transmission latency and computational bottlenecks. To address these gaps, we propose a latency- and energy-efficient Cooperative Task Offloading framework with Transformer-driven Prediction (CTO-TP), leveraging asynchronous multi-agent deep reinforcement learning. This approach fosters edge-edge cooperation and decreases synchronous waiting time by performing asynchronous training, optimising task offloading and resource allocation across distributed networks. The performance evaluation demonstrates that the proposed CTO-TP algorithm reduces overall system latency by up to 80% and energy consumption by up to 87% compared to the baseline schemes.
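The asynchronous training idea behind frameworks like CTO-TP is that each agent applies its update to shared parameters as soon as it is ready, instead of waiting at a synchronisation barrier. A Hogwild-style sketch of that pattern under our own assumptions (a per-agent quadratic loss and a shared parameter vector, neither taken from the paper):

```python
import numpy as np

# Shared parameters that every agent updates without a barrier.
shared_theta = np.zeros(2)

def local_gradient(theta, data):
    # Gradient of a hypothetical per-agent loss ||theta - data||^2 / 2.
    return theta - data

def async_step(data, lr=0.5):
    """One lock-free update: read shared parameters, compute a local
    gradient, and write back immediately (no synchronous waiting)."""
    global shared_theta
    shared_theta = shared_theta - lr * local_gradient(shared_theta, data)

# Two agents arriving at different times each push their own update.
for sample in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    async_step(sample)
```

In a real deployment each `async_step` would run in a separate worker against a parameter server; the sequential loop here only illustrates the update rule.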
Mobility-aware Seamless Service Migration and Resource Allocation in Multi-edge IoV Systems
Chen, Zheyi, Huang, Sijin, Min, Geyong, Ning, Zhaolong, Li, Jie, Zhang, Yan
Abstract--Mobile Edge Computing (MEC) offers low-latency and high-bandwidth support for Internet-of-Vehicles (IoV) applications. However, due to high vehicle mobility and finite communication coverage of base stations, it is hard to maintain uninterrupted and high-quality services without proper service migration among MEC servers. Existing solutions commonly rely on prior knowledge and rarely consider efficient resource allocation during the service migration process, making it hard to reach optimal performance in dynamic IoV environments. To address these important challenges, we propose SR-CL, a novel mobility-aware seamless Service migration and Resource allocation framework via Convex-optimization-enabled deep reinforcement Learning in multi-edge IoV systems. First, we decouple the Mixed Integer Nonlinear Programming (MINLP) problem of service migration and resource allocation into two sub-problems. Next, we design a new actor-critic-based asynchronous-update deep reinforcement learning method to handle service migration, where the delayed-update actor makes migration decisions and the one-step-update critic evaluates the decisions to guide the policy update. Notably, we theoretically derive the optimal resource allocation with convex optimization for each MEC server, thereby further improving system performance. Using the real-world datasets of vehicle trajectories and testbed, extensive experiments are conducted to verify the effectiveness of the proposed SR-CL. Compared to benchmark methods, the SR-CL achieves superior convergence and delay performance under various scenarios. However, the real-time demands of IoV applications pose significant challenges for onboard processors with limited computational capabilities [2]. When vehicles offload tasks, MEC servers create dedicated service instances via virtualization techniques for the vehicles and allocate proper resources to them [7].
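The per-server convex step in SR-CL admits a closed-form flavour worth illustrating. As an analogue (the exact objective below is our assumption, not the paper's), consider allocating a server's total CPU frequency F to minimize total computing delay sum_i c_i / f_i subject to sum_i f_i <= F; the KKT conditions give f_i proportional to sqrt(c_i):

```python
import numpy as np

def optimal_allocation(cycles, F):
    """Closed-form minimizer of sum_i cycles_i / f_i subject to
    sum_i f_i <= F and f_i > 0. Setting the Lagrangian derivative
    -c_i/f_i^2 + lam = 0 gives f_i = sqrt(c_i/lam), i.e. allocation
    proportional to sqrt(c_i). A textbook result used here as an
    illustrative stand-in for the paper's per-server convex step."""
    c = np.asarray(cycles, dtype=float)
    return F * np.sqrt(c) / np.sqrt(c).sum()

# Three service instances needing 1, 4, and 9 (normalized) CPU cycles.
f = optimal_allocation([1.0, 4.0, 9.0], F=6.0)
```

Because the allocation is closed-form, it can be evaluated per decision epoch without an inner iterative solver, which is exactly why deriving it analytically "further improves system performance" in such hybrid DRL/optimization designs.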
Intelligent Task Offloading: Advanced MEC Task Offloading and Resource Management in 5G Networks
Ebrahimi, Alireza, Afghah, Fatemeh
5G technology enhances industries with high-speed, reliable, low-latency communication, revolutionizing mobile broadband and supporting massive IoT connectivity. With the increasing complexity of applications on User Equipment (UE), offloading resource-intensive tasks to robust servers is essential for improving latency and speed. The 3GPP's Multi-access Edge Computing (MEC) framework addresses this challenge by processing tasks closer to the user, highlighting the need for an intelligent controller to optimize task offloading and resource allocation. This paper introduces a novel methodology to efficiently allocate both communication and computational resources among individual UEs. Our approach integrates two critical 5G service imperatives: Ultra-Reliable Low Latency Communication (URLLC) and Massive Machine Type Communication (mMTC), embedding them into the decision-making framework. Central to this approach is the utilization of Proximal Policy Optimization, providing a robust and efficient solution to the challenges posed by the evolving landscape of 5G technology. The proposed model is evaluated in a simulated 5G MEC environment. The model significantly reduces processing time by 4% for URLLC users under strict latency constraints and decreases power consumption by 26% for mMTC users, compared to existing baseline models based on the reported simulation results. These improvements showcase the model's adaptability and superior performance in meeting diverse QoS requirements in 5G networks.
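Central to the approach above is Proximal Policy Optimization, whose defining ingredient is the clipped surrogate objective: take the pessimistic minimum of the raw and clipped policy-ratio terms so no single update moves the policy too far. A minimal sketch with made-up ratio/advantage values (the controller's actual state and action spaces are not modeled here):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate (Schulman et al.): elementwise minimum
    of ratio*A and clip(ratio, 1-eps, 1+eps)*A, negated for descent.
    Clipping bounds the incentive to push the ratio outside
    [1-eps, 1+eps], keeping updates proximal."""
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# Two samples: one ratio above the clip range, one below it.
loss = ppo_clip_loss(ratio=[1.5, 0.6], advantage=[1.0, -1.0])
```

For the first sample the positive advantage is capped at ratio 1.2; for the second the negative advantage is evaluated at the raw ratio's pessimistic bound 0.8, so both terms are conservative.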
Energy Optimization of Multi-task DNN Inference in MEC-assisted XR Devices: A Lyapunov-Guided Reinforcement Learning Approach
Sun, Yanzan, Qiu, Jiacheng, Pan, Guangjin, Xu, Shugong, Zhang, Shunqing, Wang, Xiaoyun, Han, Shuangfeng
Extended reality (XR), blending virtual and real worlds, is a key application of future networks. While AI advancements enhance XR capabilities, they also impose significant computational and energy challenges on lightweight XR devices. In this paper, we developed a distributed queue model for multi-task DNN inference, addressing issues of resource competition and queue coupling. In response to the challenges posed by the high energy consumption and limited resources of XR devices, we designed a dual time-scale joint optimization strategy for model partitioning and resource allocation, formulated as a bi-level optimization problem. This strategy aims to minimize the total energy consumption of XR devices while ensuring queue stability and adhering to computational and communication resource constraints. To tackle this problem, we devised a Lyapunov-guided Proximal Policy Optimization algorithm, named LyaPPO. Numerical results demonstrate that the LyaPPO algorithm outperforms the baselines, achieving energy conservation of 24.79% to 46.14% under varying resource capacities. Specifically, the proposed algorithm reduces the energy consumption of XR devices by 24.29% to 56.62% compared to baseline algorithms.
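The Lyapunov guidance in LyaPPO rests on two standard building blocks: the discrete-time queue recursion and the drift-plus-penalty trade-off between queue stability and energy. A generic sketch of that principle (the weight V, the scalar queue, and the energy term are our simplifications, not the paper's exact formulation):

```python
def queue_update(q, arrival, service):
    """Standard discrete-time queue evolution: q(t+1) = max(q + a - s, 0)."""
    return max(q + arrival - service, 0.0)

def drift_plus_penalty(q, arrival, service, energy, V=10.0):
    """Lyapunov drift-plus-penalty score with L(q) = q^2 / 2:
    the drift term pushes toward stable queues, while V * energy
    penalizes power use; larger V favors energy saving over backlog."""
    q_next = queue_update(q, arrival, service)
    drift = 0.5 * (q_next**2 - q**2)
    return drift + V * energy

# Serving faster than arrivals drains the queue (negative drift),
# partially offset by the energy penalty of the chosen service rate.
score = drift_plus_penalty(q=2.0, arrival=1.0, service=2.0, energy=0.1)
```

A Lyapunov-guided agent minimizes this per-slot score instead of raw energy alone, which is how queue stability enters the PPO reward without an explicit long-term constraint.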
Privacy-Aware Multi-Device Cooperative Edge Inference with Distributed Resource Bidding
Mobile edge computing (MEC) has empowered mobile devices (MDs) in supporting artificial intelligence (AI) applications through collaborative efforts with proximal MEC servers. Unfortunately, despite the great promise of device-edge cooperative AI inference, data privacy becomes an increasing concern. In this paper, we develop a privacy-aware multi-device cooperative edge inference system for classification tasks, which integrates a distributed bidding mechanism for the MEC server's computational resources. Intermediate feature compression is adopted as a principled approach to minimize data privacy leakage. To determine the bidding values and feature compression ratios in a distributed fashion, we formulate a decentralized partially observable Markov decision process (DEC-POMDP) model, for which, a multi-agent deep deterministic policy gradient (MADDPG)-based algorithm is developed. Simulation results demonstrate the effectiveness of the proposed algorithm in privacy-preserving cooperative edge inference. Specifically, given a sufficient level of data privacy protection, the proposed algorithm achieves 0.31-0.95% improvements in classification accuracy compared to the approach being agnostic to the wireless channel conditions. The performance is further enhanced by 1.54-1.67% by considering the difficulties of inference data.
To Train or Not to Train: Balancing Efficiency and Training Cost in Deep Reinforcement Learning for Mobile Edge Computing
Boscaro, Maddalena, Mason, Federico, Chiariotti, Federico, Zanella, Andrea
Artificial Intelligence (AI) is a key component of 6G networks, as it enables communication and computing services to adapt to end users' requirements and demand patterns. The management of Mobile Edge Computing (MEC) is a meaningful example of AI application: computational resources available at the network edge need to be carefully allocated to users, whose jobs may have different priorities and latency requirements. The research community has developed several AI algorithms to perform this resource allocation, but it has neglected a key aspect: learning is itself a computationally demanding task, and considering free training results in idealized conditions and performance in simulations. In this work, we consider a more realistic case in which the cost of learning is specifically accounted for, presenting a new algorithm to dynamically select when to train a Deep Reinforcement Learning (DRL) agent that allocates resources. Our method is highly general, as it can be directly applied to any scenario involving a training overhead, and it can approach the same performance as an ideal learning agent even under realistic training conditions.
Federated Learning for Zero-Day Attack Detection in 5G and Beyond V2X Networks
Korba, Abdelaziz Amara, Boualouache, Abdelwahab, Brik, Bouziane, Rahal, Rabah, Ghamri-Doudane, Yacine, Senouci, Sidi Mohammed
Deploying Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to increasing vectors of security and privacy attacks. In this context, a wide range of advanced machine/deep learning based solutions have been designed to accurately detect security attacks. Specifically, supervised learning techniques have been widely applied to train attack detection models. However, the main limitation of such solutions is their inability to detect attacks different from those seen during the training phase, or new attacks, also called zero-day attacks. Moreover, training the detection model requires significant data collection and labeling, which increases the communication overhead, and raises privacy concerns. To address the aforementioned limits, we propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern. Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic, while preserving the CAVs privacy, and minimizing the communication overhead. The in-depth experiment on a recent network traffic dataset shows that the proposed system achieved a high detection rate while minimizing the false positive rate, and the detection delay.
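The detection rule described above, an auto-encoder trained only on benign traffic that flags high reconstruction error, reduces to a simple threshold test at inference time. A toy sketch of that rule (the fixed "reconstruction", the feature vectors, and the mean + k*std threshold are illustrative assumptions; the paper trains a deep auto-encoder federatively):

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Mean squared error between an input flow's features and the
    auto-encoder's reconstruction of them."""
    return float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))

def fit_threshold(benign_errors, k=3.0):
    """Set the attack threshold from errors observed on benign
    traffic only: mean + k * std. Anything the model reconstructs
    much worse than benign traffic is flagged -- which is how
    zero-day attacks can be caught without attack labels."""
    e = np.asarray(benign_errors, dtype=float)
    return float(e.mean() + k * e.std())

thr = fit_threshold([0.10, 0.12, 0.11, 0.09])
# A flow the model reconstructs poorly lands far above the threshold.
is_attack = reconstruction_error([1.0, 0.0], [0.2, 0.9]) > thr
```

Because the threshold is calibrated per deployment from benign data, federated training lets each CAV contribute to the benign model without sharing raw traffic.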
Exploring 6G Potential for Industrial Digital Twinning and Swarm Intelligence in Obstacle-Rich Environments
Yuan, Siyu, Alam, Khurshid, Han, Bin, Krummacker, Dennis, Schotten, Hans D.
With the advent of 6G technology, the demand for efficient and intelligent systems in industrial applications has surged, driving the need for advanced solutions in target localization. Utilizing swarm robots to locate unknown targets involves navigating increasingly complex environments. Digital Twinning (DT) offers a robust solution by creating a virtual replica of the physical world, which enhances the swarm's navigation capabilities. Our framework leverages DT and integrates Swarm Intelligence to store physical map information in the cloud, enabling robots to efficiently locate unknown targets. The simulation results demonstrate that the DT framework, augmented by Swarm Intelligence, significantly improves target location efficiency in obstacle-rich environments compared to traditional methods. This research underscores the potential of combining DT and Swarm Intelligence to advance the field of robotic navigation and target localization in complex industrial settings.
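The abstract does not specify which Swarm Intelligence rule the framework uses, so as one representative example, here is a single particle-swarm-optimization step (Kennedy & Eberhart): each robot's velocity blends inertia with attraction toward its personal best and the swarm's global best known position. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update: inertia (w) plus random
    attraction toward the personal best (c1) and the swarm's global
    best (c2). A generic SI rule, shown only to illustrate the kind
    of swarm update such a DT-assisted framework could use."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# A robot at the origin pulled toward its own best (1,1) and the
# swarm's best (2,0) -- e.g. positions cached in the cloud-hosted DT.
pos, vel = pso_step(pos=np.zeros(2), vel=np.zeros(2),
                    pbest=np.array([1.0, 1.0]), gbest=np.array([2.0, 0.0]))
```

In the DT setting, `gbest` would be read from the shared virtual map rather than broadcast robot-to-robot, which is the cooperation the cloud replica enables.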
Computation Offloading for Multi-server Multi-access Edge Vehicular Networks: A DDQN-based Method
Wang, Siyu, Yang, Bo, Yu, Zhiwen, Cao, Xuelin, Zhang, Yan, Yuen, Chau
Abstract--In this paper, we investigate a multi-user offloading problem in the overlapping domain of a multi-server mobile edge computing system. We divide the original problem into two stages: the offloading decision making stage and the request scheduling stage. To prevent the terminal from going out of service area during offloading, we consider the mobility parameter of the terminal according to the human behaviour model when making the offloading decision, and then introduce a server evaluation mechanism based on both the mobility parameter and the server load to select the optimal offloading server. In order to fully utilise the server resources, we design a double deep Q-network (DDQN)-based reward evaluation algorithm that considers the priority of tasks when scheduling offload requests. With the development of Multi-access Edge Computing (MEC) technology, MEC servers are moving closer to the terminal devices (TDs), which can be served more efficiently as the transmission latency is greatly reduced [1]. The authors of [3] proposed an effective task scheduling algorithm based on dynamic priority, which significantly reduced task completion time and improved QoS. In [4], the authors proposed a hybrid task offloading scheme based on deep reinforcement learning that achieved vehicle-to-edge and vehicle-to-vehicle offloading.
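The DDQN machinery behind the reward evaluation above hinges on one line: the online network selects the next action, while the target network evaluates it. A minimal sketch of that target computation with made-up Q-values (the actual state features and reward design of the scheduler are not modeled):

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN target: argmax over the online network's Q-values
    picks the next action, but the target network's value for that
    action is what gets bootstrapped -- decoupling selection from
    evaluation to reduce the overestimation of vanilla DQN."""
    if done:
        return float(reward)
    a_star = int(np.argmax(q_online_next))       # online net selects
    return float(reward + gamma * q_target_next[a_star])  # target net evaluates

y = ddqn_target(reward=1.0,
                q_online_next=np.array([0.2, 0.8]),   # online picks action 1
                q_target_next=np.array([0.5, 0.4]))   # target values action 1 at 0.4
```

Note that vanilla DQN would instead bootstrap from max(q_target_next) = 0.5, giving a strictly larger (more optimistic) target here, which is exactly the bias DDQN avoids.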