Rochester Institute of Technology
Training Goal Recognition Online from Low-Level Inputs in an Action-Adventure Game
Gold, Kevin (Rochester Institute of Technology)
A method is presented for training an Input-Output Hidden Markov Model (IOHMM) to identify a player's current goal in an action-adventure game. The goals were Explore, Fight, or Return to Town, which served as the hidden states of the IOHMM. The observation model was trained by directing the player to achieve particular goals and counting actions. When trained on first-time players, models fit to the individual players did not appear to provide any benefit over a model trained on the experimenter's play. However, models trained on these players' subsequent trials were significantly better than the models trained on the same players' first sessions, and also outperformed the model trained on the experimenter's play. This suggests that goal recognition systems for games are best trained after players have had some time to develop a style of play. Systems for probabilistic reasoning over time could help game designers make games more responsive to players' individual styles and approaches.
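The following is a minimal Python sketch of the online-recognition step described above, using a plain HMM-style forward filter over the three goal states rather than the paper's full IOHMM (which also conditions transitions on the player's inputs). The action vocabulary, transition matrix, and emission probabilities are illustrative placeholders, not values from the paper.

import numpy as np

# Hidden goal states named in the abstract; the action vocabulary and all
# probabilities below are illustrative placeholders, not values from the paper.
GOALS = ["Explore", "Fight", "ReturnToTown"]
ACTIONS = ["move", "attack", "open_map", "enter_town"]

# P(goal_t | goal_{t-1}); rows sum to 1 (goals tend to persist between steps).
T = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])

# P(action | goal), e.g. estimated by counting actions during directed play.
E = np.array([[0.60, 0.10, 0.25, 0.05],   # Explore
              [0.30, 0.60, 0.05, 0.05],   # Fight
              [0.50, 0.05, 0.10, 0.35]])  # ReturnToTown

def filter_goals(actions, prior=None):
    """Forward filtering: P(goal_t | actions_1..t) after each observed action."""
    belief = np.full(len(GOALS), 1.0 / len(GOALS)) if prior is None else prior
    beliefs = []
    for a in actions:
        belief = E[:, ACTIONS.index(a)] * (T.T @ belief)  # predict, then observe
        belief /= belief.sum()
        beliefs.append(belief.copy())
    return beliefs

if __name__ == "__main__":
    for step, b in enumerate(filter_goals(["move", "move", "attack", "attack", "enter_town"]), 1):
        print(step, GOALS[int(np.argmax(b))], np.round(b, 3))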
Using Machine Translation to Convert Between Difficulties in Rhythm Games
Gold, Kevin (Rochester Institute of Technology) | Olivier, Alex (Wellesley College)
A method is presented for converting between Guitar Hero difficulty levels by treating the problem as one of machine translation, with the different difficulties as different languages. The Guitar Hero I and II discs provide aligned corpora with which to train bigram-based language models and translation models. Given an Expert sequence, the model can create sequences of Hard, Medium, or Easy difficulty that retain the feel of the original, while obeying heuristics typical of those difficulties. Training the model requires a single pass through the corpus, while translation is quadratic in the length of the Expert sequence. The method outperforms a recurrent neural network in producing sequences that match the hand-designed levels. The method may make it easier for amateurs to produce content for the Rock Band Network.
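As a rough illustration of the noisy-channel framing, the sketch below collects bigram and translation counts in a single pass over hypothetical aligned note-token sequences and decodes with Viterbi. It assumes a simplified one-to-one alignment between Expert and target tokens, so it omits the insertions and deletions that make the paper's decoding quadratic in the Expert sequence length; all token names are made up.

import math
from collections import defaultdict

def train_counts(aligned_pairs):
    """Single pass over aligned (expert_seq, target_seq) pairs, collecting
    target-difficulty bigram counts and target->expert translation counts."""
    bigram = defaultdict(lambda: defaultdict(int))
    trans = defaultdict(lambda: defaultdict(int))
    for expert_seq, target_seq in aligned_pairs:
        prev = "<s>"
        for e_tok, t_tok in zip(expert_seq, target_seq):
            bigram[prev][t_tok] += 1
            trans[t_tok][e_tok] += 1
            prev = t_tok
    return bigram, trans

def log_prob(table, given, tok, smooth=1e-3):
    """Add-epsilon smoothed log probability with a nominal vocabulary size."""
    total = sum(table[given].values())
    return math.log((table[given][tok] + smooth) / (total + smooth * 20))

def translate(expert_seq, bigram, trans, target_vocab):
    """Viterbi decode: best target sequence under LM score + translation score."""
    best = {"<s>": (0.0, [])}
    for e_tok in expert_seq:
        nxt = {}
        for prev, (score, path) in best.items():
            for t_tok in target_vocab:
                s = score + log_prob(bigram, prev, t_tok) + log_prob(trans, t_tok, e_tok)
                if t_tok not in nxt or s > nxt[t_tok][0]:
                    nxt[t_tok] = (s, path + [t_tok])
        best = nxt
    return max(best.values(), key=lambda sp: sp[0])[1]

if __name__ == "__main__":
    # Hypothetical aligned Expert/Easy fret-token corpora ("-" = rest).
    pairs = [(["G", "R", "GR", "Y", "B"], ["G", "R", "G", "-", "B"]),
             (["Y", "B", "O", "GR"], ["Y", "-", "O", "G"])]
    bigram, trans = train_counts(pairs)
    print(translate(["G", "R", "GR", "B"], bigram, trans, ["G", "R", "Y", "B", "O", "-"]))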
Bypassing Combinatorial Protections: Polynomial-Time Algorithms for Single-Peaked Electorates
Brandt, Felix (Ludwig-Maximilians-Universität München) | Brill, Markus (Ludwig-Maximilians-Universität München) | Hemaspaandra, Edith (Rochester Institute of Technology) | Hemaspaandra, Lane A. (University of Rochester)
For many election systems, bribery (and related) attacks have been shown NP-hard using constructions on combinatorially rich structures such as partitions and covers. It is important to learn how robust these hardness protection results are, in order to find whether they can be relied on in practice. This paper shows that for voters who follow the most central political-science model of electorates — single-peaked preferences — those protections vanish. By using single-peaked preferences to simplify combinatorial covering challenges, we show that NP-hard bribery problems — including those for Kemeny and Llull elections — fall to polynomial time. By using single-peaked preferences to simplify combinatorial partition challenges, we show that NP-hard partition-of-voters problems fall to polynomial time. We furthermore show that for single-peaked electorates, the winner problems for Dodgson and Kemeny elections, though Θ₂ᵖ-complete in the general case, fall to polynomial time. And we completely classify the complexity of weighted coalition manipulation for scoring protocols in single-peaked electorates.
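The structural restriction the results above exploit can be made concrete with a small sketch (not from the paper): a vote is single-peaked with respect to a left-to-right axis exactly when every top-k prefix of the ranking occupies a contiguous interval of that axis. The check below assumes the axis is already given, and the candidate names are hypothetical.

def is_single_peaked(vote, axis):
    """Check whether one ranking (best-to-worst) is single-peaked w.r.t. a
    left-to-right axis: every top-k prefix must be contiguous on the axis."""
    pos = {c: i for i, c in enumerate(axis)}
    chosen = []
    for c in vote:
        chosen.append(pos[c])
        if max(chosen) - min(chosen) != len(chosen) - 1:
            return False
    return True

def electorate_is_single_peaked(votes, axis):
    """An electorate is single-peaked w.r.t. the axis if every vote is."""
    return all(is_single_peaked(v, axis) for v in votes)

if __name__ == "__main__":
    axis = ["far-left", "left", "center", "right", "far-right"]  # hypothetical axis
    votes = [
        ["center", "left", "right", "far-left", "far-right"],   # single-peaked
        ["far-left", "far-right", "center", "left", "right"],   # not single-peaked
    ]
    for v in votes:
        print(v[0], is_single_peaked(v, axis))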