Ensuring Truthfulness in Distributed Aggregative Optimization

Chen, Ziqin, Egerstedt, Magnus, Wang, Yongqiang

arXiv.org Artificial Intelligence 

Abstract--Distributed aggregative optimization methods are gaining increased traction due to their ability to address cooperative control and optimization problems in which the objective function of each agent depends not only on its own decision variable but also on the aggregation of the other agents' decision variables. Nevertheless, existing distributed aggregative optimization methods implicitly assume that all agents are truthful in information sharing, which can be unrealistic in real-world scenarios where agents may act selfishly or strategically. In fact, an opportunistic agent may deceptively share false information in its own favor to minimize its own loss, which, however, compromises the network-level global performance. To address this issue, we propose a new distributed aggregative optimization algorithm that ensures both agent truthfulness and convergence performance. To the best of our knowledge, this is the first algorithm that ensures truthfulness in a fully distributed setting, where no "centralized" aggregator exists to collect private information/decision variables from participating agents. We systematically characterize the convergence rate of our algorithm under nonconvex, convex, and strongly convex objective functions, which generalizes existing distributed aggregative optimization results that focus only on convex objective functions. We also rigorously quantify the tradeoff between convergence performance and the level of enabled truthfulness under different convexity conditions. Numerical simulations using distributed charging of electric vehicles confirm the efficacy of our algorithm.

Index Terms--Distributed aggregative optimization, joint differential privacy, truthfulness.

The work was supported in part by the National Science Foundation under Grants ECCS-1912702, CCF-2106293, CCF-2215088, CNS-2219487, and CCF-2334449. Ziqin Chen and Yongqiang Wang are with the Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634 USA, and Magnus Egerstedt is with the Department of Electrical Engineering and Computer Science, University of California, Irvine, Irvine, CA 92697 USA.

Recently, there has been a surge of interest in distributed optimization, which underpins numerous applications in cooperative control [1], [2], signal processing [3], and machine learning [4]. In distributed optimization, a group of agents cooperatively learns a common decision variable that minimizes a global objective function, defined as the sum of the individual agents' objective functions. To solve problem (1), several gradient-tracking-based algorithms have been proposed for strongly convex objective functions [5]-[11] and convex objective functions [12]-[15]. Recently, some results have also been reported for nonconvex objective functions [16], [17].
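The excerpt references problem (1) without reproducing it. Based on the setup just described, a hedged sketch of the standard formulation (our reconstruction, not the paper's own equation (1)) and of its aggregative variant is:

```latex
% Hedged reconstruction: "problem (1)" is not reproduced in this excerpt.
% Standard distributed optimization over n agents with a common decision x:
\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{n} f_i(x).
% The aggregative variant studied in this paper, in which each local
% objective also depends on an aggregate of all agents' decisions, is
% commonly written with local decisions x_i and aggregation maps \phi_i as
\min_{x_1, \dots, x_n} \; \sum_{i=1}^{n} f_i\bigl(x_i, \sigma(x)\bigr),
\qquad \sigma(x) = \frac{1}{n} \sum_{i=1}^{n} \phi_i(x_i).
```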
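To make the gradient-tracking idea cited above concrete, the sketch below implements the classic gradient-tracking iteration for distributed optimization in the style of problem (1); it is not the authors' truthfulness-ensuring algorithm. The quadratic objectives, ring topology, and Metropolis-style weights are all illustrative assumptions.

```python
# Minimal sketch of gradient tracking (not the paper's algorithm):
# n agents minimize sum_i f_i(x) with f_i(x) = 0.5 * ||A_i x - b_i||^2,
# communicating over a ring graph with doubly stochastic weights.
import numpy as np

n, d = 5, 3                          # number of agents, decision dimension
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d, d))   # local data (illustrative)
b = rng.standard_normal((n, d))

def grad(i, x):
    """Gradient of agent i's local quadratic objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring (Metropolis-style weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

eta = 0.02                           # step size
x = np.zeros((n, d))                 # local decision variables
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

for _ in range(2000):
    x_new = W @ x - eta * y          # consensus mixing + descent step
    # Tracker update: y_i follows the network-average gradient.
    y = W @ y + np.array([grad(i, x_new[i]) for i in range(n)]) \
              - np.array([grad(i, x[i]) for i in range(n)])
    x = x_new

# Compare against the exact minimizer of the summed quadratic.
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n)),
                         sum(A[i].T @ b[i] for i in range(n)))
print("max deviation from minimizer:", np.max(np.abs(x - x_star)))
```

The key design choice is the auxiliary variable y_i: by mixing trackers with neighbors and adding local gradient differences, each agent estimates the network-average gradient, which is what allows gradient-tracking methods to attain the fast rates reported for strongly convex objectives in [5]-[11].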