efficient communication


Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities

Neural Information Processing Systems

Variational inequalities are a broad and flexible class of problems that includes minimization, saddle point, and fixed point problems as special cases. Therefore, variational inequalities are used in applications ranging from equilibrium search to adversarial learning. With the increasing size of data and models, today's real-world machine learning problems, most of which can be cast as variational inequalities, demand parallel and distributed computing. Meanwhile, most distributed approaches suffer from a significant bottleneck: the cost of communication. The three main techniques for reducing the total number of communication rounds and the cost of each round are similarity of local functions, compression of transmitted information, and local updates. In this paper, we combine all three approaches. Such a triple synergy did not previously exist for variational inequalities and saddle point problems, or even for minimization problems. The methods presented in this paper have the best known theoretical guarantees on communication complexity and significantly outperform other methods for distributed variational inequalities. The theoretical results are confirmed by adversarial learning experiments on synthetic and real datasets.
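The three pillars are easy to see in miniature. The sketch below is an illustrative toy, not the paper's algorithm: each worker runs a few local extragradient steps on its own bilinear saddle operator (local steps), rand-k sparsifies its update before sending (compression), and the workers' operators are deliberately close to one another (similarity). All names (`rand_k`, `gamma`, `local_steps`) and constants are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, d = 4, 10  # number of workers, dimension of x and of y

# Similarity pillar: each worker holds a bilinear saddle problem
# f_i(x, y) = x^T A_i y with A_i = I + small noise, so the local
# VI operators F_i(x, y) = (A_i y, -A_i^T x) are close to each other.
A = [np.eye(d) + rng.normal(size=(d, d)) / d for _ in range(n_workers)]

def F(i, z):
    x, y = z[:d], z[d:]
    return np.concatenate([A[i] @ y, -A[i].T @ x])

def rand_k(v, k):
    """Compression pillar: unbiased rand-k sparsification
    (keep k random coordinates, rescale to stay unbiased)."""
    mask = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    mask[idx] = v.size / k
    return v * mask

z = rng.normal(size=2 * d)       # shared iterate (x, y)
gamma, local_steps, k = 0.05, 5, 4

for _ in range(200):             # communication rounds
    deltas = []
    for i in range(n_workers):
        zi = z.copy()
        for _ in range(local_steps):       # local-steps pillar: extragradient
            z_half = zi - gamma * F(i, zi)
            zi = zi - gamma * F(i, z_half)
        deltas.append(rand_k(zi - z, k))   # compress the update before sending
    z = z + np.mean(deltas, axis=0)        # server averages compressed updates

residual = np.linalg.norm(np.mean([F(i, z) for i in range(n_workers)], axis=0))
print(f"norm of averaged operator at z: {residual:.4f}")
```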


Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control

Neural Information Processing Systems

Multi-agent reinforcement learning (MARL) has recently received considerable attention due to its applicability to a wide range of real-world applications. However, achieving efficient communication among agents has always been an overarching problem in MARL. In this work, we propose Variance Based Control (VBC), a simple yet effective technique to improve communication efficiency in MARL. By limiting the variance of the messages exchanged between agents during the training phase, the noisy component of the messages can be eliminated effectively, while the useful part is preserved and utilized by the agents for better performance. Our evaluation on multiple MARL benchmarks indicates that our method achieves $2$-$10\times$ lower communication overhead than state-of-the-art MARL algorithms, while allowing agents to achieve better overall performance.
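A minimal sketch of the gating idea, assuming an execution-time rule of "broadcast only when the message components vary enough to express a preference"; the threshold value, the message source, and the `maybe_send` helper are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def maybe_send(message, threshold):
    """Broadcast only when the message components vary enough to
    express a preference; a nearly flat message is dropped."""
    return message if np.var(message) >= threshold else None

# Toy rollout: at each step an agent produces a message (say, its local
# action-value estimates) and gates it on variance before broadcasting.
sent = total = 0
for _ in range(1000):
    msg = rng.normal(scale=rng.uniform(0.01, 1.0), size=8)
    total += 1
    if maybe_send(msg, threshold=0.1) is not None:
        sent += 1
print(f"sent {sent}/{total} messages ({100 * sent / total:.0f}% of full communication)")
```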


Semantic categories of artifacts and animals reflect efficient coding

Zaslavsky, Noga, Regier, Terry, Tishby, Naftali, Kemp, Charles

arXiv.org Artificial Intelligence

It has been argued that semantic categories across languages reflect pressure for efficient communication. Recently, this idea has been cast in terms of a general information-theoretic principle of efficiency, the Information Bottleneck (IB) principle, and it has been shown that this principle accounts for the emergence and evolution of named color categories across languages, including soft structure and patterns of inconsistent naming. However, it is not yet clear to what extent this account generalizes to semantic domains other than color. Here we show that it generalizes to two qualitatively different semantic domains: names for containers, and for animals. First, we show that container naming in Dutch and French is near-optimal in the IB sense, and that IB broadly accounts for soft categories and inconsistent naming patterns in both languages. Second, we show that a hierarchy of animal categories derived from IB captures cross-linguistic tendencies in the growth of animal taxonomies. Taken together, these findings suggest that fundamental information-theoretic principles of efficient coding may shape semantic categories across languages and across domains.
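For readers unfamiliar with the quantities involved, the snippet below computes the complexity term of the IB tradeoff, I(M;W), for a toy naming system; the four-object lexicon and the `mutual_info` helper are invented for illustration and are not the paper's data or pipeline.

```python
import numpy as np

def mutual_info(joint):
    """I(X;Y) in bits from a joint distribution p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy naming system: 4 objects, 2 words; rows of q are p(word | object).
p_m = np.array([0.4, 0.3, 0.2, 0.1])            # prior over objects
q_w_given_m = np.array([[0.9, 0.8, 0.1, 0.2],   # "word 0"
                        [0.1, 0.2, 0.9, 0.8]])  # "word 1"

joint = (q_w_given_m * p_m).T                   # p(m, w)
complexity = mutual_info(joint)                 # I(M; W): lexicon complexity
print(f"complexity I(M;W) = {complexity:.3f} bits")
# The full IB objective trades this complexity against accuracy, the
# information a word carries about what the listener must recover,
# with a parameter beta controlling the tradeoff.
```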


Reviews: Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control

Neural Information Processing Systems

The paper is well written and easy to read. I very much enjoyed reading the paper. If so, please make it explicit for better clarity. This could also motivate the variance based control loss: when there is not much variance in the message, that agent does not have a strong preference over which action to choose, and hence its message can be safely ignored. I assume that you are using the same communication protocol even during training.


Reviews: Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control

Neural Information Processing Systems

The paper proposes Variance Based Control (VBC) of communication in cooperative multi-agent RL settings. As noted in the abstract, VBC achieves a 2x-10x reduction in communication overhead compared to state-of-the-art MARL algorithms. The paper also gives a proof of convergence in a tabular setting. In the initial reviews, R4 gave the strongest support with a score of 9, while R1 and R2 gave positive overall scores but only marginally above the acceptance threshold (6). After receiving the author feedback, there were minimal updates to the original reviews; e.g., R2 said "After going over the author response I appreciate the extra analysis put into comparing the method to MADDPG to make sure it is state of the art. It is good to compare these methods across previous benchmarks to show improvement. While the additional hyperparameter analysis is helpful it is a bit obvious of what is normally done. Some discussion on the effects of specific settings might shed more light on how the method works."


Word reuse and combination support efficient communication of emerging concepts

Xu, Aotao, Kemp, Charles, Frermann, Lea, Xu, Yang

arXiv.org Artificial Intelligence

A key function of the lexicon is to express novel concepts as they emerge over time through a process known as lexicalization. The most common lexicalization strategies are the reuse and combination of existing words, but they have typically been studied separately in the areas of word meaning extension and word formation. Here we offer an information-theoretic account of how both strategies are constrained by a fundamental tradeoff between competing communicative pressures: word reuse tends to preserve the average length of word forms at the cost of less precision, while word combination tends to produce more informative words at the expense of greater word length. We test our proposal against a large dataset of reuse items and compounds that appeared in English, French and Finnish over the past century. We find that these historically emerging items achieve higher levels of communicative efficiency than hypothetical ways of constructing the lexicon, and both literal reuse items and compounds tend to be more efficient than their non-literal counterparts. These results suggest that reuse and combination are both consistent with a unified account of lexicalization grounded in the theory of efficient communication.
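The tradeoff can be made concrete with toy numbers. The sketch below scores two hypothetical lexicalization strategies by form length plus weighted listener surprisal; the example words, probabilities, and weight `lam` are assumptions for illustration, not the paper's objective or data.

```python
import numpy as np

# Two hypothetical ways to name an emerging concept; each entry is
# (form length in characters, p(listener recovers the intended sense)).
strategies = {
    "reuse    ('mouse' extended to the device)": (5, 0.6),
    "compound ('computer-mouse')":               (14, 0.98),
}

lam = 2.0  # assumed weight on informativeness in the combined cost
for name, (length, p_correct) in strategies.items():
    surprisal = -np.log2(p_correct)   # listener's residual uncertainty, in bits
    cost = length + lam * surprisal
    print(f"{name}: length={length}, surprisal={surprisal:.2f} bits, cost={cost:.2f}")
```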


Is Structure Dependence Shaped for Efficient Communication?: A Case Study on Coordination

Kajikawa, Kohei, Kubota, Yusuke, Oseki, Yohei

arXiv.org Artificial Intelligence

Natural language exhibits various universal properties. But why do these universals exist? One explanation is that they arise from functional pressures to achieve efficient communication, a view which attributes cross-linguistic properties to domain-general cognitive abilities. This hypothesis has successfully addressed some syntactic universal properties such as compositionality and Greenbergian word order universals. However, more abstract syntactic universals have not been explored from the perspective of efficient communication. Among such universals, the most notable one is structure dependence, that is, the existence of grammar-internal operations that crucially depend on hierarchical representations. This property has traditionally been taken to be central to natural language and to involve domain-specific knowledge irreducible to communicative efficiency. In this paper, we challenge the conventional view by investigating whether structure dependence realizes efficient communication, focusing on coordinate structures. We design three types of artificial languages: (i) one with a structure-dependent reduction operation, which is similar to natural language, (ii) one without any reduction operations, and (iii) one with a linear (rather than structure-dependent) reduction operation. We quantify the communicative efficiency of these languages. The results demonstrate that the language with the structure-dependent reduction operation is significantly more communicatively efficient than the counterfactual languages. This suggests that the existence of structure-dependent properties can be explained from the perspective of efficient communication.
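As a toy analogue of the three-language comparison (with made-up sentences, not the paper's artificial grammars), the snippet below contrasts utterance lengths under no reduction, a structure-dependent gapping rule, and a purely linear deletion rule.

```python
# The same coordinate meaning ("Mary ate an apple and John ate a banana")
# rendered in three toy languages; lengths are plain token counts.
languages = {
    "no reduction":            "Mary ate apple and John ate banana",
    "structure-dependent gap": "Mary ate apple and John banana",  # shared verb gapped
    "linear deletion":         "Mary ate apple and John ate",     # fixed suffix dropped
}

for name, utterance in languages.items():
    print(f"{name:24s}: {len(utterance.split())} tokens -> {utterance!r}")
# The linear rule is shortest but irrecoverably loses the second object,
# so its informativeness collapses; structure-dependent gapping stays
# both short and recoverable, which is the efficiency argument in brief.
```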


ACE: Abstractions for Communicating Efficiently

Thomas, Jonathan D., Silvi, Andrea, Dubhashi, Devdatt, Garg, Vikas, Johansson, Moa

arXiv.org Artificial Intelligence

A central but unresolved aspect of problem-solving in AI is the capability to introduce and use abstractions, something humans excel at. Work in cognitive science has demonstrated that humans tend towards higher levels of abstraction when engaged in collaborative task-oriented communication, enabling gradually shorter and more information-efficient utterances. Several computational methods have attempted to replicate this phenomenon, but all make unrealistic simplifying assumptions about how abstractions are introduced and learned. Our method, Abstractions for Communicating Efficiently (ACE), overcomes these limitations through a neuro-symbolic approach. On the symbolic side, we draw on work from library learning for proposing abstractions. We combine this with neural methods for communication and reinforcement learning, via a novel use of bandit algorithms for controlling the exploration and exploitation trade-off in introducing new abstractions. ACE exhibits similar tendencies to humans on a collaborative construction task from the cognitive science literature, where one agent (the architect) instructs the other (the builder) to reconstruct a scene of block-buildings. ACE results in the emergence of an efficient language as a by-product of collaborative communication. Beyond providing mechanistic insights into human communication, our work serves as a first step to providing conversational agents with the ability for human-like communicative abstractions.
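The bandit component can be sketched in isolation. Below is a standard UCB1 loop over candidate abstractions in which the reward is a noisy utterance-length saving; the abstraction names, savings, and noise level are invented for illustration and are not ACE's actual arm design or reward.

```python
import math, random

random.seed(0)

# UCB1 over candidate abstractions: pulling an arm means "use this
# abstraction in the next instruction"; the reward is the (noisy)
# utterance-length saving it produces.
true_saving = {"put-row": 0.7, "mirror-shape": 0.4, "no-abstraction": 0.1}
arms = list(true_saving)
counts = {a: 0 for a in arms}
totals = {a: 0.0 for a in arms}

def ucb_pick(t):
    for a in arms:                    # play every arm once before using UCB
        if counts[a] == 0:
            return a
    return max(arms, key=lambda a: totals[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 501):
    a = ucb_pick(t)
    counts[a] += 1
    totals[a] += random.gauss(true_saving[a], 0.2)

print({a: counts[a] for a in arms})   # the most useful abstraction dominates
```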