Appendix A: Missing Proofs of Section 4

Neural Information Processing Systems

[Fragment: scattered proof sentences for statements (ii) and (iii); the last constraint is trivially satisfied, shown by induction; a branching is picked in which every edge is tight.]




Delegations as Adaptive Representation Patterns: Rethinking Influence in Liquid Democracy

Grossi, Davide, Nitsche, Andreas

arXiv.org Artificial Intelligence

Liquid democracy is a mechanism for the division of labor in decision-making through the transitive delegation of influence. In essence, all individuals possess the autonomy to determine the issues with which they will engage directly, while for other matters they may appoint a representative of their choosing. So far, the literature has studied the delegation structures emerging in liquid democracy as static. As a result, transitivity, defined as the capacity to transfer acquired authority to another entity, has been identified as a concern, since it would be conducive to an unrestrained accumulation of power. Focusing on the implementation of liquid democracy supported by the LiquidFeedback software, we propose a novel approach to assessing the influence of voting nodes in a transitive delegation graph, taking into account the process nature of real-world liquid democracy, in which delegation and voting are distinct and increasingly independent activities. By introducing a novel model of delegations in liquid democracy, we show how transitivity may in fact contribute to an effective regulation of deliberative influence and decision-making power. While the one-person, one-vote paradigm is maintained for all votes cast, the anticipated influence of an agent, to the extent that it stems from transitivity, declines precipitously along an exponential trajectory. In general, our objective is to take the first steps towards a rigorous analysis of liquid democracy as an adaptive democratic representation process. The adaptivity of liquid democracy has not yet been explored in the existing academic literature, despite being, we believe, one of its most important features. We therefore also outline a research agenda focusing on this aspect of liquid democracy.
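
The exponential decline of transitive influence can be illustrated with a toy model (our own assumption for illustration, not the paper's exact formalism): since in LiquidFeedback delegation and voting are separate acts, a delegating agent may still cast their own ballot, overriding the delegation. If each agent independently votes directly with probability p, the vote of a delegator at distance i in a chain reaches the representative only if all i intermediate agents abstain from voting directly, i.e. with probability (1 - p)**i.

```python
def expected_transitive_weight(k: int, p: float) -> float:
    """Expected extra voting weight a representative receives through a
    delegation chain of k delegators, when each agent independently
    overrides their delegation by voting directly with probability p."""
    # The delegator at distance i contributes with probability (1 - p)**i,
    # so the expected transitive weight is a geometric sum in (1 - p).
    return sum((1 - p) ** i for i in range(1, k + 1))
```

With p = 0 (nobody ever overrides) the classic static picture is recovered and a chain of k delegators adds k units of weight; for any p > 0 the contribution of distant delegators decays exponentially with distance.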


Generalizing Liquid Democracy to multi-agent delegation: A Voting Power Measure and Equilibrium Analysis

Bersetche, Francisco M.

arXiv.org Artificial Intelligence

Liquid democracy has gained popularity in recent years due to its ability to balance representation and delegation of power. In this work, we propose a generalization of the classic model that allows for fractional delegation of voting weight. Our approach enables agents to divide and delegate their votes to multiple agents while retaining a portion of the voting power for themselves. We discuss the desirable properties of a reasonable generalization of the classic model and introduce a set of simpler voting measures that include a penalty factor on the length of delegation chains. We demonstrate that the proposed voting measure is the well-defined limit of these simpler measures as the penalty approaches zero, and that it inherits key features of the classic model. In the second part of the article, we investigate the existence of equilibrium states in a delegation game that employs the suggested measures. We show that this game has pure-strategy Nash equilibria as long as a penalty on the length of delegation chains is enforced.
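
One plausible reading of a penalized fractional-delegation measure can be sketched as follows (our own assumption of the measure's form, not the paper's exact definition): agent i retains fraction S[i] of their weight and delegates fraction D[i][j] to agent j, with S[i] plus the outgoing fractions summing to one; each delegation hop is damped by a factor (1 - eps), and the limit eps -> 0 recovers the unpenalized measure.

```python
import numpy as np

def voting_power(S, D, eps, max_hops=10_000):
    """Voting power under fractional delegation with a per-hop penalty.

    S[i]    -- fraction of weight agent i casts directly.
    D[i][j] -- fraction of weight agent i delegates to agent j.
    eps     -- per-hop penalty; each hop multiplies flowing weight by (1 - eps).
    """
    n = len(S)
    power = np.zeros(n)
    w = np.ones(n)  # each agent starts with one unit of weight
    for _ in range(max_hops):
        power += S * w                 # weight retained at this hop is cast directly
        w = (1 - eps) * (D.T @ w)      # remaining weight flows along delegations, penalized
        if w.sum() < 1e-12:            # all weight has been cast or dissipated
            break
    return power
```

For example, if agent 0 delegates everything to agent 1, who votes directly, then as eps approaches zero agent 1's power approaches 2, matching the classic model; for eps > 0 weight circulating in long chains (or cycles) is progressively dissipated.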


Selecting Representative Bodies: An Axiomatic View

Revel, Manon, Boehmer, Niclas, Colley, Rachael, Brill, Markus, Faliszewski, Piotr, Elkind, Edith

arXiv.org Artificial Intelligence

As the world's democratic institutions are challenged by dissatisfied citizens, political scientists as well as computer scientists have proposed and analyzed various (innovative) methods to select representative bodies, a crucial task in every democracy. However, a unified framework for analyzing and comparing different selection mechanisms is missing, resulting in very few comparative works. To address this gap, we advocate employing concepts and tools from computational social choice to devise a model in which different selection mechanisms can be formalized. Such a model would allow desirable representation axioms to be conceptualized and evaluated. We take a first step in this direction by proposing a unifying mathematical formulation of different selection mechanisms as well as various social-choice-inspired axioms such as proportionality and monotonicity.
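
As a toy formalization of our own (not the paper's actual model), a selection mechanism can be viewed as a map from approval ballots to a body of k representatives; an axiom such as monotonicity can then be checked on concrete profiles. Here, a hypothetical k-Approval rule picks the k most-approved candidates, and monotonicity demands that gaining an extra approval never ejects a winner.

```python
def k_approval(ballots, candidates, k):
    """Select the k most-approved candidates (ties broken alphabetically)."""
    scores = {c: sum(c in b for b in ballots) for c in candidates}
    return set(sorted(candidates, key=lambda c: (-scores[c], c))[:k])

def stays_selected_after_boost(ballots, candidates, k, winner, voter):
    """Monotonicity check for one profile: adding an approval for `winner`
    on ballot `voter` must keep `winner` in the selected body."""
    boosted = [b | {winner} if i == voter else b for i, b in enumerate(ballots)]
    return winner in k_approval(boosted, candidates, k)
```

A full axiomatic analysis would quantify over all profiles; this sketch only shows how a single axiom instance becomes a checkable predicate once the mechanism is formalized.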


Scalable Teacher Forcing Network for Semi-Supervised Large Scale Data Streams

Pratama, Mahardhika, Za'in, Choiru, Lughofer, Edwin, Pardede, Eric, Rahayu, Dwi A. P.

arXiv.org Artificial Intelligence

The large-scale data stream problem refers to high-speed information flows that cannot be processed in a scalable manner on a traditional computing platform. The problem also imposes expensive labelling costs, making the deployment of fully supervised algorithms infeasible. At the same time, the problem of semi-supervised large-scale data streams is little explored in the literature, because most works are designed for traditional single-node computing environments and are fully supervised. This paper offers the Weakly Supervised Scalable Teacher Forcing Network (WeScatterNet) to cope simultaneously with the scarcity of labelled samples and with large-scale data streams. WeScatterNet is built on the distributed computing platform Apache Spark with a data-free model fusion strategy for model compression after the parallel computing stage. It features an open network structure to address the global and local drift problems while integrating a data augmentation, annotation and auto-correction ($DA^3$) method for handling partially labelled data streams. The performance of WeScatterNet is numerically evaluated on six large-scale data stream problems with only $25\%$ label proportions. It shows highly competitive performance even when compared with fully supervised learners trained with $100\%$ label proportions.
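
The parallel-then-fuse pattern behind the model compression stage can be sketched in miniature (our own illustration; WeScatterNet's actual fusion operates on network parameters inside Apache Spark): each worker fits a model on its shard of a batch, and a coordinator fuses the workers' parameters into one compressed model without revisiting the data, here by coordinate-wise averaging.

```python
def fuse(worker_params):
    """Data-free model fusion: coordinate-wise average of the parameter
    vectors produced by parallel workers, yielding a single model of the
    same size regardless of how many workers ran."""
    n = len(worker_params)
    return [sum(coords) / n for coords in zip(*worker_params)]
```

Averaging is only one possible fusion rule; the point is that fusion consumes parameters alone, so the raw shard data never needs to be centralized.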


Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments

Ashfahani, Andri, Pratama, Mahardhika

arXiv.org Machine Learning

The feasibility of deep neural networks (DNNs) for data stream problems still requires intensive study because of the static and offline nature of conventional deep learning approaches. A deep continual learning algorithm, namely autonomous deep learning (ADL), is proposed in this paper. Unlike traditional deep learning methods, ADL features a flexible structure: its network can be constructed from scratch, in the absence of an initial network structure, via a self-constructing mechanism. ADL specifically addresses catastrophic forgetting through a different-depth structure capable of achieving a trade-off between plasticity and stability. A network significance (NS) formula is proposed to drive the hidden-node growing and pruning mechanism. A drift detection scenario (DDS) is put forward to signal distributional changes in data streams, which induce the creation of a new hidden layer. The maximum information compression index (MICI) method plays an important role as a complexity-reduction module, eliminating redundant layers. The efficacy of ADL is numerically validated under the prequential test-then-train procedure in lifelong environments using nine popular data stream problems. The numerical results demonstrate that ADL consistently outperforms recent continual learning methods while featuring the automatic construction of its network structure.
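
The prequential test-then-train protocol mentioned above is a standard stream-evaluation scheme and can be sketched generically (a minimal sketch with a hypothetical `model` interface, not ADL's implementation): each incoming batch is first used to test the current model, then to update it, so the reported accuracy always reflects performance on data the model has not yet trained on.

```python
def prequential_evaluate(model, stream):
    """Prequential (interleaved) test-then-train evaluation.

    `stream` yields (X, y) batches; `model` exposes predict(X) and fit(X, y).
    Returns the running prequential accuracy after each batch."""
    correct = total = 0
    accuracies = []
    for X, y in stream:
        preds = model.predict(X)          # 1) test on the batch first
        correct += sum(int(p == t) for p, t in zip(preds, y))
        total += len(y)
        model.fit(X, y)                   # 2) then train on the same batch
        accuracies.append(correct / total)
    return accuracies
```

Because every sample is tested exactly once before it is learned from, no separate held-out set is needed, which suits unbounded streams.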


An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams

Pratama, Mahardhika, Pedrycz, Witold, Webb, Geoffrey I.

arXiv.org Artificial Intelligence

Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration, which has lower generalization power than deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely the deep evolving fuzzy neural network (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers, using a drift detection method which not only detects covariate drift, i.e. variations of the input space, but also accurately identifies real drift, i.e. dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely the Generic Classifier (gClass), drives the hidden layer. It is equipped with an automatic feature selection method which controls the activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward, using the concept of hidden layer merging to prevent uncontrollable growth of the input space dimension due to the nature of the feature augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is suitable for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-of-the-art data stream methods and with its shallow counterpart, against which DEVFNN demonstrates improved classification accuracy.
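
A drift detection step of the kind described above can be illustrated generically (a common Hoeffding-bound test used here as our own illustration, not DEVFNN's exact detector): flag covariate drift when the mean of a recent window deviates from a reference window by more than a bound valid at confidence 1 - delta for values in [0, 1].

```python
import math

def hoeffding_drift(reference, recent, delta=0.05):
    """Return True if the mean shift between the reference window and the
    recent window exceeds the Hoeffding bound at confidence 1 - delta,
    assuming observations lie in [0, 1]."""
    n_ref, n_rec = len(reference), len(recent)
    mean_ref = sum(reference) / n_ref
    mean_rec = sum(recent) / n_rec
    # Largest mean difference plausible under the no-drift hypothesis.
    eps = math.sqrt(0.5 * (1 / n_ref + 1 / n_rec) * math.log(2 / delta))
    return abs(mean_ref - mean_rec) > eps
```

In a DEVFNN-like pipeline, such a signal would trigger the stacking of a new layer; detecting real drift additionally requires monitoring the target or the error signal, not just the inputs.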