Containment Control Approach for Steering Opinion in a Social Network

Rastgoftar, Hossein

arXiv.org Artificial Intelligence

The paper studies the problem of steering multi-dimensional opinion in a social network. Assuming the society consists of stubborn and regular agents, the stubborn agents act as leaders who specify the desired opinion distribution through a distributed reward or utility function. Each regular agent is a follower that updates its bias on the initial opinion and its influence weights by averaging its observations of the rewards its influencers have received. Assuming random graphs with reducible or irreducible topology specify the influences on regular agents, opinion evolution is cast as a containment control problem, for which stability and convergence to the final opinion are proven.
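The containment-control viewpoint can be illustrated with a minimal weighted-averaging sketch: stubborn leaders hold their opinions fixed while each regular follower averages over its influencers, so followers converge into the convex hull of the leaders' opinions. This is a generic illustration of the dynamics class, not the paper's exact update law; the influence matrix and agent names are made up.

```python
import numpy as np

def opinion_step(X, W, stubborn):
    """One weighted-averaging opinion update.

    X: (n, d) array of agent opinions; W: (n, n) row-stochastic
    influence matrix; stubborn: boolean mask of leader agents whose
    opinions stay fixed and anchor the containment region.
    """
    X_new = W @ X                   # each agent averages its influencers
    X_new[stubborn] = X[stubborn]   # stubborn leaders do not move
    return X_new

# Toy network: two stubborn leaders holding opinions 0 and 1,
# one regular follower starting far outside their convex hull.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.4, 0.4, 0.2]])
X = np.array([[0.0], [1.0], [5.0]])
stubborn = np.array([True, True, False])
for _ in range(100):
    X = opinion_step(X, W, stubborn)
# The follower is pulled into [0, 1], the hull of the leader opinions.
```

With these illustrative weights the follower's fixed point solves x = 0.4·0 + 0.4·1 + 0.2·x, i.e. x = 0.5, and the error contracts by a factor 0.2 per step.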


Byzantine-Robust Decentralized Stochastic Optimization with Stochastic Gradient Noise-Independent Learning Error

Peng, Jie, Li, Weiyu, Ling, Qing

arXiv.org Artificial Intelligence

This paper studies Byzantine-robust stochastic optimization over a decentralized network, where every agent periodically communicates with its neighbors to exchange local models, and then updates its own local model by stochastic gradient descent (SGD). The performance of such a method is affected by an unknown number of Byzantine agents, which behave adversarially during the optimization process. To the best of our knowledge, no existing work simultaneously achieves a linear convergence speed and a small learning error. We observe that the learning error is largely dependent on the intrinsic stochastic gradient noise. Motivated by this observation, we introduce two variance reduction methods, the stochastic average gradient algorithm (SAGA) and loopless stochastic variance-reduced gradient (LSVRG), to Byzantine-robust decentralized stochastic optimization to eliminate the negative effect of the stochastic gradient noise. The two resulting methods, BRAVO-SAGA and BRAVO-LSVRG, enjoy both linear convergence speeds and stochastic gradient noise-independent learning errors. Such learning errors are optimal for a class of methods based on total variation (TV)-norm regularization and stochastic subgradient updates. We conduct extensive numerical experiments to demonstrate their effectiveness under various Byzantine attacks.
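The variance-reduction building block the abstract refers to can be sketched for SAGA on a single agent with a toy least-squares objective. This shows only the generic SAGA estimator (a table of the last gradient seen per sample, corrected by the table mean), not the authors' BRAVO-SAGA; the problem, step size, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize (1/n) * sum_i (a_i^T x - b_i)^2 / 2,
# with b generated exactly from x_true so the minimizer is x_true.
n, d = 50, 3
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def grad_i(x, i):
    """Gradient of the i-th sample's loss."""
    return A[i] * (A[i] @ x - b[i])

# SAGA state: stored gradient per sample plus their running mean.
x = np.zeros(d)
table = np.array([grad_i(x, i) for i in range(n)])
table_avg = table.mean(axis=0)
step = 0.01

for _ in range(4000):
    i = rng.integers(n)
    g = grad_i(x, i)
    # Variance-reduced estimate: fresh gradient, minus stale stored
    # gradient, plus the mean of all stored gradients.
    v = g - table[i] + table_avg
    table_avg += (g - table[i]) / n   # keep the mean consistent
    table[i] = g
    x -= step * v
```

Because the correction term has zero mean and shrinks as the table refreshes, the estimator's variance vanishes near the optimum, which is what enables linear convergence without a noise-dependent error floor.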


On the Geometric Convergence of Byzantine-Resilient Distributed Optimization Algorithms

Kuwaranancharoen, Kananart, Sundaram, Shreyas

arXiv.org Artificial Intelligence

The problem of designing distributed optimization algorithms that are resilient to Byzantine adversaries has received significant attention. For the Byzantine-resilient distributed optimization problem, the goal is to (approximately) minimize the average of the local cost functions held by the regular (non-adversarial) agents in the network. In this paper, we provide a general algorithmic framework for Byzantine-resilient distributed optimization which includes some state-of-the-art algorithms as special cases. We analyze the convergence of algorithms within the framework, and derive a geometric rate of convergence of all regular agents to a ball around the optimal solution (whose size we characterize). Furthermore, we show that approximate consensus can be achieved geometrically fast under some minimal conditions. Our analysis provides insights into the relationship among the convergence region, the distance between regular agents' values, the step-size, and the properties of the agents' functions for Byzantine-resilient distributed optimization.
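The filtering-style algorithms such frameworks cover typically replace plain averaging with a robust aggregation rule. A coordinate-wise trimmed mean is one common special case; the sketch below is a generic illustration of that rule, not the paper's framework, and the variable names are assumptions.

```python
import numpy as np

def trimmed_mean(values, f):
    """Coordinate-wise trimmed mean: in each coordinate, drop the f
    largest and f smallest entries, then average the rest. This bounds
    the influence of up to f Byzantine values among the inputs.

    values: (m, d) array of received vectors; f: assumed Byzantine bound.
    """
    m = values.shape[0]
    if m <= 2 * f:
        raise ValueError("need more than 2f values to trim f from each side")
    s = np.sort(values, axis=0)
    return s[f:m - f].mean(axis=0)

# Three honest vectors near 1.0 plus one adversarial outlier:
vals = np.array([[1.0], [1.1], [0.9], [100.0]])
agg = trimmed_mean(vals, 1)   # the outlier is trimmed away, result ~1.05
```

After trimming with f = 1, only the two middle honest values survive, so the aggregate stays inside the range of honest inputs regardless of what the adversary sends.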


Competition-Based Resilience in Distributed Quadratic Optimization

Ballotta, Luca, Como, Giacomo, Shamma, Jeff S., Schenato, Luca

arXiv.org Artificial Intelligence

This paper proposes a novel approach to resilient distributed optimization with quadratic costs in a networked control system (e.g., wireless sensor network, power grid, robotic team) prone to external attacks (e.g., hacking, power outage) that cause agents to misbehave. Departing from classical filtering strategies proposed in the literature, we draw inspiration from a game-theoretic formulation of the consensus problem and argue that adding competition to the mix can enhance resilience in the presence of malicious agents. Our intuition is corroborated by analytical and numerical results showing that i) our strategy highlights the presence of a nontrivial tradeoff between blind collaboration and full competition, and ii) such a competition-based approach can outperform state-of-the-art algorithms based on Mean Subsequence Reduced.


Byzantine-Robust Decentralized Stochastic Optimization over Static and Time-Varying Networks

Peng, Jie, Ling, Qing

arXiv.org Machine Learning

In this paper, we consider the Byzantine-robust stochastic optimization problem defined over decentralized static and time-varying networks, where the agents collaboratively minimize the summation of expectations of stochastic local cost functions, but some of the agents are unreliable due to data corruption, equipment failures or cyber-attacks. The unreliable agents, referred to as Byzantine agents hereafter, can send faulty values to their neighbors and bias the optimization process. Our key idea to handle the Byzantine attacks is to formulate a total variation (TV) norm-penalized approximation of the Byzantine-free problem, where the penalty term forces the local models of regular agents to be close, but also allows the existence of outliers from the Byzantine agents. A stochastic subgradient method is applied to solve the penalized problem. We prove that the proposed method reaches a neighborhood of the Byzantine-free optimal solution, and the size of the neighborhood is determined by the number of Byzantine agents and the network topology. Numerical experiments corroborate the theoretical analysis, and demonstrate both the robustness of the proposed method to Byzantine attacks and its superior performance compared to existing methods.
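The key mechanism of the TV penalty can be sketched for a single regular agent: the subgradient of the l1 (TV) term is a sign vector, so each neighbor's pull on the agent is clipped to the penalty weight, however extreme the value a Byzantine neighbor broadcasts. This is a simplified one-agent simulation of that mechanism under a toy quadratic objective; the step size, penalty weight, and setup are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def tv_subgrad_step(x, neighbor_models, grad, step, lam):
    """One stochastic subgradient step on
        f_i(x) + lam * sum_j ||x - x_j||_1.
    The sign term is a subgradient of the TV penalty; it caps each
    neighbor's per-coordinate influence at lam * step.
    """
    pull = sum(np.sign(x - xj) for xj in neighbor_models)
    return x - step * (grad(x) + lam * pull)

# Toy scalar problem: local cost x^2/2 (noisy gradient), one honest
# neighbor at 0 and one Byzantine neighbor broadcasting a huge value.
rng = np.random.default_rng(1)
grad = lambda x: x + 0.1 * rng.normal(size=x.shape)
x = np.array([5.0])
honest = np.array([0.0])
byzantine = np.array([1e6])
for _ in range(500):
    x = tv_subgrad_step(x, [honest, byzantine], grad, step=0.05, lam=0.1)
# x settles near 0: the Byzantine neighbor's pull is bounded by lam,
# so the huge broadcast value cannot drag the regular agent away.
```

Replacing the sign term with the raw difference x - x_j (a quadratic penalty) would instead transmit the outlier's full magnitude, which is exactly the failure mode the TV penalty avoids.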