High Efficiency Inference Accelerating Algorithm for NOMA-based Mobile Edge Computing
Yuan, Xin, Li, Ning, Zhang, Tuo, Li, Muqing, Chen, Yuwen, Ortega, Jose Fernan Martinez, Guo, Song
–arXiv.org Artificial Intelligence
-- Splitting the inference model between device, edge server, and cloud can greatly improve the performance of edge intelligence (EI). Additionally, non-orthogonal multiple access (NOMA), one of the key enabling technologies of B5G/6G, can achieve massive connectivity and high spectrum efficiency. Motivated by these benefits, integrating NOMA with model splitting in mobile edge computing (MEC) to further reduce inference latency becomes attractive. However, NOMA-based communication during split inference has not been properly considered in previous works. Therefore, in this paper, we integrate NOMA into split inference in MEC and propose an effective communication and computing resource allocation algorithm to accelerate model inference at the edge. Specifically, when a mobile user has a large model inference task to be computed in the NOMA-based MEC system, it takes the energy consumption of both the device and the edge server, as well as the inference latency, into account to find the optimal model split strategy, subchannel allocation strategy (uplink and downlink), and transmission power allocation strategy (uplink and downlink). Since minimum inference delay and minimum energy consumption cannot be achieved simultaneously, and the subchannel allocation and model split variables are discrete, a gradient descent (GD) algorithm is adopted to find the optimal tradeoff between them. Moreover, a loop-iteration GD approach (Li-GD) is proposed to reduce the complexity of the GD algorithm caused by the discrete parameters. Additionally, the properties of the proposed algorithm are investigated, which demonstrate the effectiveness of the proposed algorithms. Artificial intelligence has been widely used and has changed our lives greatly, for example in the metaverse [1-2], autonomous driving [2-4], and image generation [5]. However, since AI models are usually large in order to achieve high accuracy, the computing resources needed for these models are huge. Therefore, it is inappropriate to deploy these AI models on mobile devices, such as mobile phones and vehicles, in which computing resources are quite limited.
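The abstract describes the optimization structure but not the algorithm itself. The following Python sketch illustrates one plausible reading of the Li-GD idea: the discrete variables (model split point, subchannel choice) are enumerated in an outer loop, while the continuous variables (uplink/downlink transmit power) are optimized by gradient descent on a weighted latency-energy objective. All function forms, constants, and names here (`objective`, `inner_gd`, `ALPHA`, the toy rate model) are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a loop-iteration GD: enumerate discrete choices
# (split point, subchannel) outside, run gradient descent on the continuous
# power variables inside. Everything below is an assumed toy model.
import numpy as np

LAYERS = 8        # candidate split points of the inference model (assumed)
SUBCHANNELS = 4   # NOMA subchannels available (assumed)
ALPHA = 0.6       # latency-vs-energy tradeoff weight (assumed)

def objective(split, ch, p_up, p_down):
    """Weighted latency + energy cost; a stand-in for the paper's objective."""
    rate_up = np.log2(1.0 + p_up * (ch + 1))      # toy uplink rate per channel
    rate_down = np.log2(1.0 + p_down * (ch + 1))  # toy downlink rate
    latency = split / 2.0 + (LAYERS - split) / rate_up + 1.0 / rate_down
    energy = p_up * (LAYERS - split) / rate_up + 0.1 * split
    return ALPHA * latency + (1 - ALPHA) * energy

def inner_gd(split, ch, steps=200, lr=0.05, eps=1e-4):
    """Gradient descent over the continuous powers for one fixed
    (split, subchannel) pair, using numerical central differences."""
    p_up, p_down = 1.0, 1.0
    for _ in range(steps):
        g_up = (objective(split, ch, p_up + eps, p_down)
                - objective(split, ch, p_up - eps, p_down)) / (2 * eps)
        g_down = (objective(split, ch, p_up, p_down + eps)
                  - objective(split, ch, p_up, p_down - eps)) / (2 * eps)
        p_up = max(1e-3, p_up - lr * g_up)        # keep power positive
        p_down = max(1e-3, p_down - lr * g_down)
    return objective(split, ch, p_up, p_down), (p_up, p_down)

# Outer loop over the discrete variables: this enumeration is what makes
# plain GD expensive and what Li-GD is said to reduce.
best = min(
    (inner_gd(s, c) + ((s, c),)
     for s in range(1, LAYERS) for c in range(SUBCHANNELS)),
    key=lambda t: t[0],
)
print("best cost %.3f at (split, channel) %s with powers %s"
      % (best[0], best[2], best[1]))
```

The point of the sketch is the nesting: each discrete candidate gets its own small continuous optimization, so the cost of naive GD scales with the product of the discrete choice sets, which is the complexity the Li-GD approach reportedly targets.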
Dec-25-2023
- Country:
- Asia > China (0.46)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Genre:
- Research Report (0.64)
- Industry:
- Information Technology > Networks (0.35)
- Telecommunications (0.89)