Correlation clustering with local objectives
Correlation Clustering is a powerful graph partitioning model that clusters items based on a notion of pairwise similarity. An instance of the Correlation Clustering problem consists of a graph G (not necessarily complete) whose edges are labeled by a binary classifier as similar or dissimilar. Classically, we are tasked with producing a clustering that minimizes the number of disagreements: an edge is in disagreement if it is a similar edge whose endpoints lie in different clusters, or a dissimilar edge whose endpoints lie in the same cluster. Define the disagreements vector to be an n-dimensional vector indexed by the vertices, whose v-th coordinate is the number of disagreements at vertex v. Recently, Puleo and Milenkovic (ICML '16) initiated the study of the Correlation Clustering framework in which the objectives are more general functions of the disagreements vector. In this paper, we study algorithms for minimizing the \ell_q norm (q >= 1) of the disagreements vector on arbitrary graphs, and also provide an improved algorithm for minimizing the \ell_q norm of the disagreements vector on complete graphs. We also study an alternative cluster-wise local objective introduced by Ahmadi, Khuller and Saha (IPCO '19), which aims to minimize the maximum number of disagreements associated with any single cluster. We present an improved (2 + \eps)-approximation algorithm for this objective.
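The disagreements vector and its \ell_q norms described above can be computed directly. The sketch below uses a made-up toy instance (the edge lists and clustering are illustrative, not from the paper):

```python
import numpy as np

def disagreements_vector(n, similar, dissimilar, cluster):
    """Per-vertex count of disagreeing incident edges: a similar edge
    cut between clusters, or a dissimilar edge kept inside a cluster."""
    d = np.zeros(n, dtype=int)
    for u, v in similar:
        if cluster[u] != cluster[v]:   # similar edge across clusters
            d[u] += 1
            d[v] += 1
    for u, v in dissimilar:
        if cluster[u] == cluster[v]:   # dissimilar edge within a cluster
            d[u] += 1
            d[v] += 1
    return d

# Toy instance: 4 vertices, clustering {0, 1} and {2, 3}.
similar = [(0, 1), (1, 2), (2, 3)]
dissimilar = [(0, 2), (0, 3)]
cluster = [0, 0, 1, 1]

d = disagreements_vector(4, similar, dissimilar, cluster)
print(d)                           # [0 1 1 0]: only edge (1, 2) disagrees
print(np.linalg.norm(d, 1))        # q = 1: classic total-disagreement objective
print(np.linalg.norm(d, np.inf))   # q = infinity: min-max (most local) objective
```

Varying q interpolates between the global objective (q = 1) and the fully local one (q = infinity), which is the sense in which these norms are "local objectives".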
Distributed Zero-Order Optimization under Adversarial Noise
We study the problem of distributed zero-order optimization for a class of strongly convex functions formed as the average of local objectives, each associated with a node of a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem; exchange of information within the network is permitted only between neighbouring nodes. An important feature of our procedure is that it queries only function values, subject to a general noise model that requires neither zero-mean nor independent errors. We derive upper bounds on the average cumulative regret and the optimization error of the algorithm, which highlight the roles played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and the smoothness properties of the local objectives. The bounds indicate key improvements of our method over the state of the art, in both the distributed and the standard zero-order optimization settings.
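A heavily simplified sketch of this style of method follows. Everything concrete here is an illustrative assumption rather than the paper's setup: quadratic local objectives, a three-node path network with a hand-picked doubly stochastic gossip matrix, a standard two-point zero-order gradient estimate, and noiseless function evaluations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 3 nodes, local objectives f_i(x) = ||x - c_i||^2,
# so the minimizer of the average objective is the mean of the c_i.
C = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
f = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in C]

# Doubly stochastic gossip matrix for the path network 0 - 1 - 2.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def project(x, radius=10.0):
    """Euclidean projection onto a ball (the feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

X = np.zeros((3, 2))                      # one iterate per node
for t in range(1, 2001):
    h, eta = 1.0 / t ** 0.5, 1.0 / (2 * t)   # smoothing and step sizes
    dim = X.shape[1]
    G = np.zeros_like(X)
    for i in range(3):
        u = rng.standard_normal(dim)
        u /= np.linalg.norm(u)
        # Two-point zero-order gradient estimate from function values only.
        G[i] = dim * (f[i](X[i] + h * u) - f[i](X[i] - h * u)) / (2 * h) * u
    X = W @ X                             # exchange with neighbours only
    X = np.array([project(X[i] - eta * G[i]) for i in range(3)])

print(X.mean(axis=0))                     # close to the mean of the c_i, [0, 0]
```

Each round mixes iterates with neighbours via W (the only communication), then takes a projected step along a gradient estimate built purely from function values.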
Review for NeurIPS paper: Personalized Federated Learning with Moreau Envelopes
Weaknesses: I am not completely convinced by the theoretical results. Specifically, I am not sure that Theorems 1 and 2 prove the right notion of convergence. Conceptually, I think you want to show that sum_i f_i(theta_i) is small and/or sum_i f_i(w) is small. The theorem statements, I suppose, upper bound sum_i f_i(theta_i), but \|w - theta_i\| is involved, and I don't see why that quantity should be relevant. I don't really see any reason to care about \|w - theta_i\|; so what if you need to move far away from w to optimally personalize for one particular objective? In short, I think that the proposed objective pFedMe makes sense as a *training/surrogate objective*, but it makes less sense as a criterion for evaluating a model.
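The reviewer's point about \|w - theta_i\| can be made concrete with a toy scalar example. This is an illustration of the Moreau-envelope-style surrogate behind pFedMe, not the paper's actual formulation; the quadratic local loss and all numbers are assumptions:

```python
# pFedMe-style personalization surrogate:
#   theta_i(w) = argmin_theta f_i(theta) + (lam / 2) * (theta - w)^2.
# With a quadratic local loss f_i(theta) = (theta - a_i)^2, setting the
# derivative 2*(theta - a_i) + lam*(theta - w) to zero gives the closed form
#   theta_i(w) = (2 * a_i + lam * w) / (2 + lam).

def personalized_theta(w, a_i, lam):
    return (2 * a_i + lam * w) / (2 + lam)

w, a_i = 0.0, 4.0          # global model far from client i's own optimum
for lam in (0.1, 1.0, 10.0, 1000.0):
    theta = personalized_theta(w, a_i, lam)
    # Small lam: theta sits near the local optimum a_i, so f_i(theta_i) is
    # tiny even though |w - theta_i| is large -- the reviewer's scenario.
    # Large lam: theta is dragged toward w, shrinking |w - theta_i| at the
    # cost of a worse local loss.
    print(lam, theta, abs(w - theta))
```

The run shows that a large personalization distance \|w - theta_i\| can coexist with an excellent local loss, which is exactly why the reviewer questions bounds that penalize that distance.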
Reviews: Correlation clustering with local objectives
This paper studies several versions of the correlation clustering problem. One is given a (not necessarily complete) graph G(V, E+, E-) with positive and negative edges, and seeks a partition of the vertices that minimizes the \ell_q norm of the "disagreement vector" --- that is, of the vector having in position v (for v \in V) the number of positive neighbors of v that are not in v's cluster, plus the number of negative neighbors of v that are in v's cluster. The usual correlation clustering problem minimizes the \ell_1 norm of this vector (that is, the total number of "mistakes" made by the partition). The \ell_q generalization of the correlation clustering problem was introduced by Puleo and Milenkovic (2016). Currently, the best known algorithms are (i) a 7-approximation for complete graphs and general q, and (ii) an O(\sqrt{n})-approximation for general graphs and q = \infty.
Federated $\mathcal{X}$-armed Bandit with Flexible Personalisation
Arabzadeh, Ali, Grant, James A., Leslie, David S.
This paper introduces a novel approach to personalised federated learning within the $\mathcal{X}$-armed bandit framework, addressing the challenge of optimising both local and global objectives in a highly heterogeneous environment. Our method employs a surrogate objective function that combines individual client preferences with aggregated global knowledge, allowing for a flexible trade-off between personalisation and collective learning. We propose a phase-based elimination algorithm that achieves sublinear regret with logarithmic communication overhead, making it well-suited for federated settings. Theoretical analysis and empirical evaluations demonstrate the effectiveness of our approach compared to existing methods. Potential applications of this work span various domains, including healthcare, smart home devices, and e-commerce, where balancing personalisation with global insights is crucial.
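The abstract's flexible trade-off between personalisation and collective learning can be illustrated with a minimal mixed objective. This is a sketch of the general idea only; the 1-D quadratic client objectives, the mixing weight alpha, and the grid search are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Assumed mixed objective for client i, trading off local vs. global:
#   f_i^alpha(x) = alpha * f_i(x) + (1 - alpha) * mean_j f_j(x),
# where each client's objective is a 1-D quadratic with optimum a_j.
a = np.array([0.0, 2.0, 10.0])

def mixed_objective(x, i, alpha):
    local = (x - a[i]) ** 2
    global_avg = np.mean((x - a) ** 2)
    return alpha * local + (1 - alpha) * global_avg

# For quadratics the minimizer interpolates linearly:
#   alpha * a_i + (1 - alpha) * mean(a).
# A grid search over x recovers this for client 0 (a_0 = 0, mean(a) = 4).
xs = np.linspace(-5, 15, 4001)
for alpha in (0.0, 0.5, 1.0):
    best = xs[np.argmin([mixed_objective(x, 0, alpha) for x in xs])]
    print(alpha, round(best, 2))   # 4.0, then 2.0, then 0.0
```

Sliding alpha from 0 to 1 moves the target smoothly from the purely global optimum to client 0's own optimum, which is the flexible personalisation knob the abstract describes.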
- Information Technology > Security & Privacy (0.46)
- Information Technology > Services (0.34)
ICRA Roboethics Challenge 2023: Intelligent Disobedience in an Elderly Care Home
Paster, Sveta, Rogers, Kantwon, Briggs, Gordon, Stone, Peter, Mirsky, Reuth
With the projected surge in the elderly population, service robots offer a promising avenue for enhancing well-being in elderly care homes. Such robots will encounter complex scenarios that require them to make decisions with ethical consequences. In this report, we propose to leverage the Intelligent Disobedience framework to give the robot the ability to carry out a deliberation process over decisions with potential ethical implications. We list the issues this framework can assist with, define it formally in the context of the specific elderly care home scenario, and delineate the requirements for implementing an intelligently disobeying robot. We conclude the report with some critical analysis and suggestions for future work.
- Health & Medicine > Therapeutic Area (0.47)
- Health & Medicine > Consumer Health (0.46)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Planning & Scheduling (0.70)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.69)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.68)
Personalized Federated $\mathcal{X}$-armed Bandit
Li, Wenjie, Song, Qifan, Honorio, Jean
In this work, we study the personalized federated $\mathcal{X}$-armed bandit problem, in which the heterogeneous local objectives of the clients are optimized simultaneously in the federated learning paradigm. We propose the \texttt{PF-PNE} algorithm with a unique double-elimination strategy, which safely eliminates non-optimal regions while encouraging federated collaboration through biased but effective evaluations of the local objectives. The proposed \texttt{PF-PNE} algorithm can optimize local objectives with arbitrary levels of heterogeneity, and its limited communication protects the confidentiality of the client-wise reward data. Our theoretical analysis shows the benefit of the proposed algorithm over single-client algorithms. Experimentally, \texttt{PF-PNE} outperforms multiple baselines on both synthetic and real-life datasets.