SFL-LEO: Asynchronous Split-Federated Learning Design for LEO Satellite-Ground Network Framework
Wu, Jiasheng, Zhang, Jingjing, Lin, Zheng, Chen, Zhe, Wang, Xiong, Zhu, Wenjun, Gao, Yue
Recently, the rapid development of LEO satellite networks has spurred widespread interest in data processing at satellites. However, achieving efficient computation at LEO satellites in highly dynamic satellite networks is challenging and remains an open problem given the constrained computation capability of LEO satellites. For the first time, we propose a novel distributed learning framework, SFL-LEO, that combines Federated Learning (FL) with Split Learning (SL) to accommodate the high dynamics of LEO satellite networks and the constrained computation capability of LEO satellites by leveraging the periodic orbital traveling feature. The proposed scheme enables local training through an asynchronous training strategy, i.e., performing local updates while LEO satellites are disconnected from the ground station, which provides much more training time and thus improves training performance. Meanwhile, it aggregates client-side sub-models at the ground station and then distributes them back to LEO satellites, borrowing the idea from federated learning. Experimental results driven by satellite-ground bandwidth measured on Starlink demonstrate that SFL-LEO achieves accuracy comparable to the conventional SL scheme because it can continue local training even during disconnection periods.
- Asia > China > Shanghai > Shanghai (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
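The asynchronous local-update-then-aggregate loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the local objective, step counts, and dimensions are invented, and each satellite simply runs more local steps the longer it was disconnected before the ground station averages the received client-side sub-models FedAvg-style.

```python
import numpy as np

def local_update(weights, steps, lr=0.1):
    """Placeholder local training: pull weights toward an all-ones optimum,
    one step per unit of disconnection time (illustrative)."""
    for _ in range(steps):
        weights = weights - lr * (weights - np.ones_like(weights))
    return weights

def ground_station_aggregate(submodels):
    """FedAvg-style averaging of the client-side sub-models."""
    return np.mean(np.stack(submodels), axis=0)

global_submodel = np.zeros(4)
for _ in range(5):  # five contact rounds
    # Satellite i was disconnected longer, so it ran i+1 local steps.
    received = [local_update(global_submodel.copy(), steps=i + 1)
                for i in range(3)]
    global_submodel = ground_station_aggregate(received)

print(np.round(global_submodel, 4))
```

The key point is that satellites that reconnect later still contribute a (more heavily trained) sub-model, rather than being dropped from the round as in synchronous FL.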
FedSplitX: Federated Split Learning for Computationally-Constrained Heterogeneous Clients
Shin, Jiyun, Ahn, Jinhyun, Kang, Honggu, Kang, Joonhyuk
Foundation models (FMs) have demonstrated remarkable performance in machine learning but demand extensive training data and computational resources. Federated learning (FL) addresses the challenges posed by FMs, especially those related to data privacy and computational burden. However, FL on FMs faces challenges when heterogeneous clients possess varying computing capabilities, as clients with limited capabilities may struggle to train the computationally intensive FMs. To address these challenges, we propose FedSplitX, a novel FL framework that tackles system heterogeneity. FedSplitX splits a large model into client-side and server-side components at multiple partition points to accommodate diverse client capabilities. This approach enables clients to collaborate while leveraging the server's computational power, leading to improved model performance compared to baselines that cap the model size at what the least capable client can handle. Furthermore, FedSplitX incorporates auxiliary networks at each partition point to reduce communication costs and delays while enhancing model performance. Our experiments demonstrate that FedSplitX effectively utilizes server capabilities to train large models, outperforming baseline approaches.
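The multi-partition-point idea can be illustrated with a toy capacity check. This is a hedged sketch, not FedSplitX's actual selection rule: the per-layer costs, client budgets, and names are all made up; it only shows how weaker clients end up keeping a shorter prefix of the model while the server runs the rest.

```python
# Hypothetical per-layer compute costs for a 5-layer model.
layer_costs = [1.0, 2.0, 4.0, 8.0, 8.0]

def choose_cut(capacity, costs):
    """Largest prefix of layers whose cumulative cost fits the client budget;
    the client runs layers [0, cut), the server runs layers [cut, len)."""
    total, cut = 0.0, 0
    for c in costs:
        if total + c > capacity:
            break
        total += c
        cut += 1
    return cut

# Invented client capacity budgets.
clients = {"phone": 3.0, "laptop": 8.0, "edge_box": 25.0}
cuts = {name: choose_cut(cap, layer_costs) for name, cap in clients.items()}
print(cuts)
```

A capable client keeps the whole model (cut = 5), while a constrained one offloads most layers to the server; an auxiliary head at the cut would then let the client compute a local loss without waiting for the server.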
Split Learning in 6G Edge Networks
Lin, Zheng, Qu, Guanqiao, Chen, Xianhao, Huang, Kaibin
With the proliferation of distributed edge computing resources, the 6G mobile network will evolve into a network for connected intelligence. Along this line, the proposal to incorporate federated learning into the mobile edge has gained considerable interest in recent years. However, the deployment of federated learning faces substantial challenges as massive resource-limited IoT devices can hardly support on-device model training. This leads to the emergence of split learning (SL) which enables servers to handle the major training workload while still enhancing data privacy. In this article, we offer a brief overview of key advancements in SL and articulate its seamless integration with wireless edge networks. We begin by illustrating the tailored 6G architecture to support edge SL. Then, we examine the critical design issues for edge SL, including innovative resource-efficient learning frameworks and resource management strategies under a single edge server. Additionally, we expand the scope to multi-edge scenarios, exploring multi-edge collaboration and mobility management from a networking perspective. Finally, we discuss open problems for edge SL, including convergence analysis, asynchronous SL and U-shaped SL.
- Research Report (0.64)
- Overview (0.48)
- Telecommunications (1.00)
- Information Technology > Security & Privacy (0.86)
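The basic split-learning exchange that the survey builds on can be sketched in one training round. This is an illustrative two-layer example with arbitrary shapes, data, and learning rate: the device computes up to the cut layer, ships the "smashed" activations, and the server finishes the forward pass and returns the cut-layer gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))                      # device-side mini-batch
y = rng.normal(size=(8, 1))
W_client = rng.normal(scale=0.1, size=(16, 32))   # device-side layer
W_server = rng.normal(scale=0.1, size=(32, 1))    # server-side layer

# Device: forward to the cut layer.
h = np.maximum(x @ W_client, 0.0)                 # ReLU activations at the cut

# Server: finish the forward pass, compute the MSE loss, backprop to the cut.
pred = h @ W_server
loss_before = float(np.mean((pred - y) ** 2))
grad_pred = 2.0 * (pred - y) / len(y)
grad_W_server = h.T @ grad_pred
grad_h = grad_pred @ W_server.T                   # sent back to the device

# Device: backprop through its own layer only.
grad_W_client = x.T @ (grad_h * (h > 0))

lr = 0.01
W_server -= lr * grad_W_server
W_client -= lr * grad_W_client
loss_after = float(np.mean(
    (np.maximum(x @ W_client, 0.0) @ W_server - y) ** 2))
print(loss_before, loss_after)
```

Only activations and a cut-layer gradient cross the network; raw data and the server-side weights never leave their respective sides, which is the privacy and workload-offloading argument made above.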
Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks
Lin, Zheng, Zhu, Guangyu, Deng, Yiqin, Chen, Xianhao, Gao, Yue, Huang, Kaibin, Fang, Yuguang
Increasingly deep neural networks hinder the democratization of privacy-enhancing distributed learning, such as federated learning (FL), to resource-constrained devices. To overcome this challenge, in this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL), allowing multiple client devices to offload substantial training workloads to an edge server via layer-wise model split. Observing that existing PSL schemes incur excessive training latency and a large volume of data transmission, we propose an innovative PSL framework, namely, efficient parallel split learning (EPSL), to accelerate model training. To be specific, EPSL parallelizes client-side model training and reduces the dimension of local gradients for back propagation (BP) via last-layer gradient aggregation, leading to a significant reduction in server-side training and communication latency. Moreover, by considering the heterogeneous channel conditions and computing capabilities of client devices, we jointly optimize subchannel allocation, power control, and cut layer selection to minimize the per-round latency. Simulation results show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy compared with state-of-the-art benchmarks, and that the tailored resource management and layer split strategy considerably reduce latency compared with the counterpart without optimization.
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- Asia > China > Shanghai > Shanghai (0.04)
- Asia > China > Shandong Province (0.04)
- Asia > China > Hong Kong > Kowloon (0.04)
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.93)
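The last-layer gradient aggregation step can be motivated with a small linear-algebra check. This is a simplified view under an invented setting (shared activations at a linear server-side layer, arbitrary shapes), not EPSL's full pipeline: because backpropagation through a linear layer is linear in the upstream gradient, backpropagating the average of K clients' last-layer gradients yields the same weight gradient as averaging K separate backward passes, so one pass replaces K.

```python
import numpy as np

rng = np.random.default_rng(1)
K, batch, d_in, d_out = 4, 8, 32, 10
acts = rng.normal(size=(batch, d_in))              # activations at the layer
grads = [rng.normal(size=(batch, d_out)) for _ in range(K)]  # per-client grads

# K separate backward passes through the linear layer, then average:
per_client = np.mean([acts.T @ g for g in grads], axis=0)

# One backward pass on the aggregated (averaged) gradient:
aggregated = acts.T @ np.mean(np.stack(grads), axis=0)

print(np.allclose(per_client, aggregated))
```

The same reduction shrinks the gradient volume sent through the server-side model from K tensors to one, which is where the latency saving claimed above comes from.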
Multi-Task Model Personalization for Federated Supervised SVM in Heterogeneous Networks
Ponomarenko-Timofeev, Aleksei, Galinina, Olga, Balakrishnan, Ravikumar, Himayat, Nageen, Andreev, Sergey, Koucheryavy, Yevgeni
Federated systems enable collaborative training on highly heterogeneous data through model personalization, which can be facilitated by employing multi-task learning algorithms. However, significant variation in device computing capabilities may result in substantial degradation in the convergence rate of training. To accelerate the learning procedure for diverse participants in a multi-task federated setting, more efficient and robust methods need to be developed. In this paper, we design an efficient iterative distributed method based on the alternating direction method of multipliers (ADMM) for support vector machines (SVMs), which tackles federated classification and regression. The proposed method utilizes efficient computations and model exchange in a network of heterogeneous nodes and allows personalization of the learning model in the presence of non-i.i.d. data. To further enhance privacy, we introduce a random mask procedure that helps avoid data inversion. Finally, we analyze the impact of the proposed privacy mechanisms and participant hardware and data heterogeneity on the system performance.
Asynchronous Federated Learning for Edge-assisted Vehicular Networks
Wang, Siyuan, Wu, Qiong, Fan, Qiang, Fan, Pingyi, Wang, Jiangzhou
Vehicular networks enable vehicles to support real-time applications by training on data. Due to their limited computing capability, vehicles usually transmit data to a roadside unit (RSU) at the network edge for processing. However, vehicles are usually reluctant to share data with each other due to privacy concerns. In traditional federated learning (FL), vehicles train on data locally to obtain a local model and then upload it to the RSU to update the global model; data privacy is thus protected by sharing model parameters instead of raw data. Traditional FL updates the global model synchronously, i.e., the RSU must wait for all vehicles to upload their models before updating the global model. However, vehicles may drive out of the RSU's coverage before completing local training, which reduces the accuracy of the global model. Asynchronous federated learning (AFL) addresses this problem: the RSU updates the global model as soon as it receives a local model from a vehicle. However, the amount of data, computing capability, and vehicle mobility may all affect the accuracy of the global model. In this paper, we jointly consider the amount of data, computing capability, and vehicle mobility to design an AFL scheme that improves the accuracy of the global model. Extensive simulation experiments demonstrate that our scheme outperforms the synchronous FL scheme.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > California > Santa Clara County > San Jose (0.04)
- (3 more...)
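The asynchronous update rule at the RSU can be sketched with a staleness-weighted blend. This is illustrative only: the decay factor 0.5**staleness, the base rate, and the arriving models are invented, but the structure matches the description above, where each arriving local model is merged immediately rather than waiting for all vehicles.

```python
import numpy as np

def rsu_update(global_model, local_model, staleness, base_lr=0.5):
    """Blend an arriving local model into the global model, down-weighting
    stale updates (hypothetical exponential decay)."""
    alpha = base_lr * (0.5 ** staleness)
    return (1 - alpha) * global_model + alpha * local_model

global_model = np.zeros(3)
# (local_model, staleness) pairs, in order of arrival at the RSU.
arrivals = [(np.ones(3), 0), (2 * np.ones(3), 1), (np.ones(3), 3)]

for local, staleness in arrivals:
    global_model = rsu_update(global_model, local, staleness)

print(np.round(global_model, 4))
```

A vehicle that trained on an old global model (high staleness) still contributes, but with a smaller weight, which is one common way to keep stragglers from dragging the global model backward.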
On-Demand Resource Management for 6G Wireless Networks Using Knowledge-Assisted Dynamic Neural Networks
Ma, Longfei, Cheng, Nan, Wang, Xiucheng, Sun, Ruijin, Lu, Ning
On-demand service provisioning is a critical yet challenging issue in 6G wireless communication networks, since emerging services have significantly diverse requirements and network resources are becoming increasingly heterogeneous and dynamic. In this paper, we study the on-demand wireless resource orchestration problem with a focus on the computing delay of the orchestration decision-making process. Specifically, we incorporate the decision-making delay into the optimization problem. We then propose a dynamic neural network (DyNN)-based method, in which the model complexity can be adjusted according to the service requirements. We further build a knowledge base representing the relationship among service requirements, available computing resources, and resource allocation performance. By exploiting this knowledge, the width of the DyNN can be selected in a timely manner, further improving orchestration performance. Simulation results show that the proposed scheme significantly outperforms a traditional static neural network and offers ample flexibility for on-demand service provisioning.
- North America > Canada > Ontario (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
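The knowledge-assisted width selection can be sketched as a table lookup. This is a toy stand-in for the paper's knowledge base: the delay thresholds, compute units, and widths below are invented; the point is only that the width is chosen from recorded (requirement, resource) combinations rather than fixed in advance.

```python
# Hypothetical knowledge base:
# (max tolerable decision delay in ms, compute units needed) -> hidden width
knowledge_base = [
    (5.0,  1, 32),    # tight delay, little compute: narrow net
    (5.0,  4, 64),
    (20.0, 1, 64),
    (20.0, 4, 128),   # loose delay, plenty of compute: widest net
]

def select_width(delay_req_ms, compute_units):
    """Pick the widest DyNN that meets the delay requirement with the
    available compute; fall back to the narrowest width otherwise."""
    feasible = [w for d, c, w in knowledge_base
                if d <= delay_req_ms and c <= compute_units]
    return max(feasible) if feasible else min(w for _, _, w in knowledge_base)

print(select_width(10.0, 4), select_width(25.0, 4))
```

A service with a loose delay budget and ample compute gets the widest (most accurate) network, while a latency-critical request is answered by a narrow one, trading allocation quality for decision speed.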
New algorithms protect against quantum computing threats - IoT Times
According to a 2019 article in the Financial Times, a quantum computer built by Google could perform a calculation "in three minutes and 20 seconds that would take today's most advanced classical computer … approximately 10,000 years". It has been said that a fully capable quantum computer could break the most robust encryption in minutes, rendering current internet security useless. While prototypes of so-called quantum computers exist, developed by companies ranging from IBM to D-Wave, they can only perform the same tasks classical computers can, albeit quicker. These technologies are developing languidly compared with the rest of the computing industry. The most significant challenge quantum computing faces today is the need to hold qubits at temperatures near absolute zero to keep them stable.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Hardware (1.00)
- Information Technology > Artificial Intelligence (1.00)
Baidu Research: 10 Technology Trends in 2021 - KDnuggets
While global economic and social uncertainties in 2020 caused significant stress, progress in intelligent technologies continued. The digital and intelligent transformation of all industries significantly accelerated, with AI technologies showing great potential in combatting COVID-19 and helping people resume work. Understanding future technology trends may never have been as important as it is today. Baidu Research is releasing our prediction of the 10 technology trends in 2021, hoping that these clear technology signposts will guide us to embrace the new opportunities and embark on new journeys in the age of intelligence. In 2020, COVID-19 drove the integration of AI and emerging technologies like 5G, big data, and IoT.
China Stretches Another AI Framework To Exascale
The nexus of traditional high performance computing and artificial intelligence is a fact, not a theory, and the exascale-class machinery installed in the United States, Europe, China, and Japan will be a showcase for how these two powerful simulation and analytical prediction techniques can be brought together in many different ways. A year ago, we wrote about some benchmarks done in China with the Tianhe-3 exascale prototype supercomputer running on custom native many-core Armv8-based Phytium 2000 processors. Now comes yet another research paper from more than a dozen scientists from multiple universities in China laying out a hybrid AI-HPC framework on the next-generation exascale Sunway system, the follow-on to the Sunway "TaihuLight" supercomputer that now sits at number four on the Top500 list of the world's fastest systems, combined with innovative neural network designs and deep learning principles to enable researchers to solve massive and highly complex problems. This effort referred to above is distinct from the BaGuaLu machine learning model that we covered back in March, which spanned 37.44 million cores and that juggled 14.5 trillion parameters. In this new AI-HPC mashup run on OceanLight, the challenge was what is called quantum many-body problems, which occur when large numbers of microscopic particles interact with each other, creating a quantum entanglement and resulting in a range of physical phenomena.
- Asia > China (1.00)
- North America > United States (0.35)
- Europe (0.25)
- Asia > Japan (0.25)