How Projected Gradient Descent Works in Machine Learning Pipelines, Part 1

#artificialintelligence 

Abstract: This paper addresses a distributed convex optimization problem with a class of coupled constraints that arise in a multi-agent system composed of multiple communities modeled as cliques. First, we propose a fully distributed gradient-based algorithm built on a novel operator inspired by convex projection, called the clique-based projection. Next, we analyze the convergence properties for both diminishing and fixed step sizes. For diminishing step sizes, we show convergence to an optimal solution under the assumptions that the objective function is smooth and the constraint set is compact. Additionally, when the objective function is strongly monotone, we prove convergence to the unique solution without the compactness assumption.
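To make the underlying idea concrete, here is a minimal sketch of classical projected gradient descent, the scheme the paper's clique-based projection generalizes. This is not the paper's distributed algorithm; it is the standard centralized update x_{k+1} = P(x_k - α ∇f(x_k)), illustrated with a simple box constraint and a fixed step size (the example problem, function names, and parameters are illustrative assumptions, not from the paper).

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n (a simple convex set).
    return np.clip(x, lo, hi)

def projected_gradient_descent(grad, project, x0, steps=200, lr=0.1):
    # Classical PGD with a fixed step size:
    #   x_{k+1} = P(x_k - lr * grad(x_k))
    # A diminishing schedule (e.g. lr/(k+1)) can be swapped in,
    # matching the step-size regimes discussed in the abstract.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project(x - lr * grad(x))
    return x

# Minimize f(x) = ||x - c||^2 subject to x in [0, 1]^2,
# where c lies outside the feasible box.
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient_descent(
    grad, lambda x: project_box(x, 0.0, 1.0), np.zeros(2)
)
# The minimizer is the projection of c onto the box: [1.0, 0.0]
print(x_star)
```

Because the objective here is strongly convex and the feasible set is convex, the iterates converge to the unique constrained minimizer, which for this quadratic is simply the projection of c onto the box.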