
Collaborating Authors: Uppsala University


Phragmén’s Voting Methods and Justified Representation

AAAI Conferences

In the late 19th century, Lars Edvard Phragmén proposed a load-balancing approach for selecting committees based on approval ballots. We consider three committee voting rules resulting from this approach: two optimization variants (one minimizing the maximal load and one minimizing the variance of loads) and a sequential variant. We study Phragmén's methods from an axiomatic point of view, focusing on justified representation and related properties that have recently been introduced by Aziz et al. (2015a) and Sánchez-Fernández et al. (2017). We show that the sequential variant satisfies proportional justified representation, making it the first known polynomial-time computable method with this property. Moreover, we show that the optimization variants satisfy perfect representation. We also analyze the computational complexity of Phragmén's methods and provide algorithms, based on mixed-integer programming, for computing them.
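As a rough illustration (not the paper's implementation), the sequential variant can be sketched as a greedy loop: each elected candidate carries one unit of "load" that is spread over that candidate's approvers, and in every round the rule picks the candidate whose election yields the smallest new maximal voter load. The data layout and tie-breaking by candidate order below are illustrative assumptions.

```python
def seq_phragmen(approvals, candidates, k):
    """Sequential Phragmén sketch: approvals maps each voter to the set of
    candidates they approve; returns a committee of up to k candidates."""
    load = {v: 0.0 for v in approvals}   # current load carried by each voter
    committee = []
    remaining = list(candidates)         # list order breaks ties deterministically
    for _ in range(k):
        best, best_load = None, float("inf")
        for c in remaining:
            supporters = [v for v, s in approvals.items() if c in s]
            if not supporters:
                continue
            # Electing c adds one unit of load, distributed so that all of
            # c's supporters end up with equal (and thus minimal maximal) load.
            new_load = (1 + sum(load[v] for v in supporters)) / len(supporters)
            if new_load < best_load:
                best, best_load = c, new_load
        if best is None:
            break                        # no remaining candidate has supporters
        for v, s in approvals.items():
            if best in s:
                load[v] = best_load
        committee.append(best)
        remaining.remove(best)
    return committee


# Four voters approve {a, b}, two voters approve {c}; with k = 3 the large
# group gets two seats and the small group one, reflecting proportionality.
approvals = {1: {"a", "b"}, 2: {"a", "b"}, 3: {"a", "b"},
             4: {"a", "b"}, 5: {"c"}, 6: {"c"}}
print(seq_phragmen(approvals, ["a", "b", "c"], 3))  # → ['a', 'b', 'c']
```

Distributing each unit of load equally over a candidate's approvers is what makes the greedy choice well defined; the optimization variants instead minimize the maximal load (or the variance of loads) globally over all size-k committees, which this greedy sketch does not capture.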


The AIIDE 2015 Workshop Program

AI Magazine

The workshop program at the Eleventh Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment was held November 14–15, 2015 at the University of California, Santa Cruz, USA. The program included 4 workshops (one of which was a joint workshop): Artificial Intelligence in Adversarial Real-Time Games, Experimental AI in Games, Intelligent Narrative Technologies and Social Believability in Games, and Player Modeling. This article contains the reports of three of the four workshops.


A Propagator Design Framework for Constraints over Sequences

AAAI Conferences

Constraints over variable sequences are ubiquitous, and many of their propagators have been inspired by dynamic programming (DP). We propose a conceptual framework for designing such propagators: pruning rules, expressed in a functional notation, are refined by applying transformation operators to a DP-style formulation of a constraint; a representation of the (tuple) variable domains is chosen; and a control strategy for the pruning rules is selected.


#hardtoparse: POS Tagging and Parsing the Twitterverse

AAAI Conferences

We evaluate the statistical dependency parser Malt on a new dataset of sentences taken from tweets. We use a version of Malt trained on gold-standard Wall Street Journal (WSJ) phrase-structure trees converted to Stanford labelled dependencies. We observe a drastic drop in performance moving from our in-domain WSJ test set to the new Twitter dataset, much of which has to do with the propagation of part-of-speech tagging errors. Retraining Malt on dependency trees produced by a state-of-the-art phrase-structure parser, which has itself been self-trained on Twitter material, results in a significant improvement. We analyse this improvement by examining in detail the effect of the retraining on individual dependency types.