Collaborating Authors

 Cai, Dunbo


Split Federated Learning Over Heterogeneous Edge Devices: Algorithm and Optimization

arXiv.org Artificial Intelligence

Split Learning (SL) is a promising collaborative machine learning approach that enables resource-constrained devices to train models without sharing raw data, simultaneously reducing computational load and preserving privacy. However, current SL algorithms face limitations in training efficiency and suffer from prolonged latency, particularly in sequential settings, where the slowest device can bottleneck the entire process due to heterogeneous resources and frequent data exchanges between clients and servers. To address these challenges, we propose the Heterogeneous Split Federated Learning (HSFL) framework, which allows resource-constrained clients to train their personalized client-side models in parallel, using different cut layers. To mitigate the impact of heterogeneous environments and accelerate the training process, we formulate a latency minimization problem that jointly optimizes computational and transmission resources. Additionally, we design a resource allocation algorithm that combines Sample Average Approximation (SAA), a Genetic Algorithm (GA), Lagrangian relaxation, and Branch and Bound (B&B) methods to solve this problem efficiently. Simulation results demonstrate that HSFL outperforms other frameworks in both convergence rate and model accuracy on heterogeneous devices with non-IID data, while the proposed optimization algorithm outperforms baseline methods in reducing latency.
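
To illustrate the core idea of per-client cut layers, the following is a minimal sketch, not the paper's HSFL implementation: a shared backbone is split after a client-specific cut layer, so the client keeps only the early layers and sends intermediate activations to the server. The architecture, cut points, and tensor shapes are assumptions chosen for illustration.

```python
# Illustrative only: splitting a model at a per-client cut layer (assumed
# architecture and cut points; not the HSFL algorithm from the paper).
import torch
import torch.nn as nn

def build_full_model():
    # Hypothetical layered backbone; any sequential network splits the same way.
    return nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )

def split_at_cut_layer(model: nn.Sequential, cut: int):
    """Return (client_part, server_part), split after the first `cut` modules."""
    layers = list(model.children())
    return nn.Sequential(*layers[:cut]), nn.Sequential(*layers[cut:])

# Clients with heterogeneous resources choose different cut layers (assumed values).
cut_layers = {"client_a": 2, "client_b": 4}
for name, cut in cut_layers.items():
    client_net, server_net = split_at_cut_layer(build_full_model(), cut)
    x = torch.randn(8, 32)        # raw data never leaves the client
    smashed = client_net(x)       # only intermediate activations are transmitted
    logits = server_net(smashed)  # the server completes the forward pass
    print(name, "sends activations of shape", tuple(smashed.shape),
          "-> server output", tuple(logits.shape))
```

A deeper cut layer shifts more computation (and a different activation size) onto the client, which is exactly the trade-off the paper's latency minimization addresses when allocating computational and transmission resources.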


On the Discovery and Utility of Precedence Constraints in Temporal Planning

AAAI Conferences

We extend the precedence constraints contexts heuristic (hpcc) to a temporal and numeric setting, and propose rules that account for precedence constraints among comparison variables and logical variables. Experimental results on benchmark domains show that our extension has the potential to yield better plan quality than the heuristic proposed by Eyerich and others.


Enhancing the Context-Enhanced Additive Heuristic with Precedence Constraints

AAAI Conferences

Recently, Helmert and Geffner proposed the context-enhanced additive heuristic, where fact costs are evaluated relative to context states that arise from first achieving a pivot condition of each operator. As Helmert and Geffner pointed out, the method can be generalized to consider contexts arising from arbitrary precedence constraints over operator conditions instead. Herein, we provide such a generalization. We extend Helmert and Geffner's equations and discuss a number of design choices that arise. Drawing on previous work on goal orderings, we design a family of methods for automatically generating precedence constraints. We run large-scale experiments, showing that the technique can help significantly, depending on the choice of precedence constraints. We shed some light on this by profiling the behavior of all possible precedence constraints, using a sampling technique.
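
For reference, the plain additive heuristic that the context-enhanced variant generalizes can be written as below. This is the standard formulation only; the context-enhanced equations additionally evaluate costs relative to context states reached after achieving the pivot (or, in this work, a precedence-ordered) condition, which is not shown here. The notation pre(o) and add(o) for an operator's preconditions and add effects is assumed.

```latex
% Background: the plain additive heuristic h^add (standard formulation).
\begin{align*}
  h^{\mathrm{add}}(s, p) &=
    \begin{cases}
      0 & \text{if } p \in s,\\[2pt]
      \min_{o \,:\, p \in \mathrm{add}(o)}
        \bigl[ \mathrm{cost}(o) + h^{\mathrm{add}}(s, \mathrm{pre}(o)) \bigr]
        & \text{otherwise,}
    \end{cases}\\
  h^{\mathrm{add}}(s, C) &= \sum_{c \in C} h^{\mathrm{add}}(s, c)
    \quad \text{for a set of conditions } C.
\end{align*}
```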