dominion


One Republican Now Controls a Huge Chunk of US Election Infrastructure

WIRED

Former GOP operative Scott Leiendecker just bought Dominion Voting Systems, giving him ownership of voting systems used in 27 states. The news last week that Dominion Voting Systems was purchased by Scott Leiendecker, the founder and CEO of Knowink, a Missouri-based maker of electronic poll books, has left election integrity activists unsure what, if anything, the deal could mean for voters and the integrity of US elections. Leiendecker, a former Republican Party operative who served as an election director in Missouri before founding Knowink, said in a press release that he was rebranding Dominion, which has headquarters in Canada and the United States, as Liberty Vote "in a bold and historic move to transform and improve election integrity in America" and to distance the company from false allegations, made by President Donald Trump and his supporters, that it had rigged the 2020 presidential election to give the win to President Joe Biden. The release said the rebranded company will be 100 percent American owned, will have a "paper ballot focus" that leverages hand-marked paper ballots, will "prioritize facilitating third-party auditing," and is "committed to domestic staffing and software development." It offered no details, however, on what any of this means in practice.


Seeding for Success: Skill and Stochasticity in Tabletop Games

Goodman, James, Perez-Liebana, Diego, Lucas, Simon

arXiv.org Artificial Intelligence

Games often incorporate random elements in the form of dice or shuffled card decks. This randomness is a key contributor to the player experience and the variety of game situations encountered. There is a tension between a level of randomness that makes the game interesting and contributes to player enjoyment, and a level at which the outcome itself is effectively random and the game becomes dull. The optimal level for a game depends on the design goals and target audience. We introduce a new technique to quantify the level of randomness in game outcomes and use it to compare 15 tabletop games, disentangling the contributions that specific parts of some games make to their overall randomness. We further explore the interaction between game randomness and player skill, and how this innate randomness can affect error analysis in common game experiments.
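The abstract does not spell out the authors' randomness measure, so the sketch below is a purely illustrative stand-in (the toy dice game, the `skill_edge` bonus, and the game counts are all invented for this example): one crude way to probe how much randomness dominates outcomes is to give one agent a known skill advantage and see how far its win rate moves from the 50 percent coin-flip line.

```python
# Illustrative sketch, not the paper's metric: measure how far a stronger
# agent's win rate sits from 0.5 in a toy dice game with tunable skill.
import random

def play(skill_edge, rng):
    """Toy game: both players roll a die; the stronger player adds a bonus."""
    a = rng.randint(1, 6) + skill_edge
    b = rng.randint(1, 6)
    if a == b:
        return rng.random() < 0.5  # break ties with a coin flip
    return a > b

def skill_signal(skill_edge, n_games=20_000, seed=0):
    """Win rate of the advantaged player: 0.5 means pure luck decides."""
    rng = random.Random(seed)
    wins = sum(play(skill_edge, rng) for _ in range(n_games))
    return wins / n_games

print(skill_signal(0), skill_signal(2))  # ~0.5 with no edge, clearly above with one
```

A real analysis along the paper's lines would replace the toy game with full game simulations, but the same contrast (skill signal versus coin-flip baseline) applies.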


Dominion: A New Frontier for AI Research

Halawi, Danny, Sarmasi, Aron, Saltzen, Siena, McCoy, Joshua

arXiv.org Artificial Intelligence

Games have long played a role in AI research, both as a test-bed and as a moving goalpost, constantly driving innovation. From the heyday of chess agents, when Deep Blue beat Garry Kasparov, to more recent advances, like AlphaGo's dark-horse ascent to fame, games have both assisted AI research and provided something to aim for. As the AIs got better, the games they were applied to grew more complex. New game mechanics, such as the fog of war in StarCraft and the stochasticity of poker, pushed researchers to adapt their methods to ever greater generality. In this paper, we argue that the deck-building strategy game Dominion [1] deserves to join the ranks of AI benchmark games, and we provide an RL-based bot in service of that benchmark. Dominion has all of the aforementioned elements, but it also incorporates a mechanic that is not present in other popular RL benchmarks: every game is played with a different set of cards. Since each Dominion card has a specific rule printed on it, and the set of 10 cards for a game is randomly picked from among hundreds of cards, no two games of Dominion can be approached the same way. Thus, a key part of playing Dominion is adapting one's inductive bias of how to play to the specific cards on the table.
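To make the card-variety point concrete: with an assumed pool of 300 kingdom cards (the real pool size depends on which expansions are counted), the number of distinct 10-card setups is a single binomial coefficient.

```python
# Counting distinct kingdoms under an assumed 300-card pool, 10 cards per game.
import math

pool_size = 300   # assumed round figure; the true pool varies by expansion
kingdom = 10      # kingdom cards in play per game
print(math.comb(pool_size, kingdom))  # over 10**18 distinct setups
```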


Variance Reduction in Monte-Carlo Tree Search

Veness, Joel, Lanctot, Marc, Bowling, Michael

Neural Information Processing Systems

Monte-Carlo Tree Search (MCTS) has proven to be a powerful, generic planning technique for decision-making in single-agent and adversarial environments. The stochastic nature of the Monte-Carlo simulations introduces errors in the value estimates, both in terms of bias and variance. Whilst reducing bias (typically through the addition of domain knowledge) has been studied in the MCTS literature, comparatively little effort has focused on reducing variance. This is somewhat surprising, since variance reduction techniques are a well-studied area in classical statistics. In this paper, we examine the application of some standard techniques for variance reduction in MCTS, including common random numbers, antithetic variates and control variates. We demonstrate how these techniques can be applied to MCTS and explore their efficacy on three different stochastic, single-agent settings: Pig, Can't Stop and Dominion.


Clustering Player Strategies from Variable-Length Game Logs in Dominion

Bendekgey, Henry

arXiv.org Artificial Intelligence

We present a method for encoding game logs as numeric features in the card game Dominion. We then run the manifold learning algorithm t-SNE on these encodings to visualize the landscape of player strategies. By quantifying game states as the relative prevalence of cards in a player's deck, we create visualizations that capture qualitative differences in player strategies. Different ways of deviating from the starting game state appear as different rays in the visualization, giving it an intuitive explanation. This is a promising new direction for understanding player strategies across games that vary in length.
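As a hedged sketch of the pipeline the abstract describes (the card list, deck logs, and t-SNE settings below are invented toy data, and scikit-learn stands in for whatever implementation the author used): encode each game as the relative prevalence of cards in the player's deck, then embed the resulting vectors in two dimensions.

```python
# Illustrative sketch: deck logs -> relative-prevalence vectors -> t-SNE.
import numpy as np
from sklearn.manifold import TSNE

CARDS = ["Copper", "Silver", "Gold", "Estate", "Smithy", "Village"]  # toy pool

def encode(deck_counts):
    """Map a {card: count} log to a fixed-length relative-prevalence vector."""
    v = np.array([deck_counts.get(c, 0) for c in CARDS], dtype=float)
    return v / v.sum()

decks = [  # invented final-deck snapshots for five games
    {"Copper": 7, "Silver": 4, "Gold": 2, "Estate": 3},
    {"Copper": 7, "Smithy": 3, "Village": 4, "Estate": 3},
    {"Copper": 6, "Silver": 5, "Gold": 3, "Estate": 2},
    {"Copper": 6, "Smithy": 2, "Village": 5, "Estate": 3},
    {"Copper": 7, "Silver": 3, "Gold": 1, "Smithy": 1, "Estate": 3},
]
X = np.stack([encode(d) for d in decks])
# Perplexity must stay below the sample count; real data would use far more games.
emb = TSNE(n_components=2, perplexity=2, init="random", random_state=0).fit_transform(X)
print(emb.shape)  # one 2-D point per game, ready to scatter-plot
```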


Trigram Timmies and Bayesian Johnnies: Probabilistic Models of Personality in Dominion

Gold, Kevin (Rochester Institute of Technology)

AAAI Conferences

Probabilistic models were fit to logs of player actions in the card game Dominion in an attempt to find evidence of personality types that could be used to classify player behavior as well as generate probabilistic bot behavior. Expectation Maximization seeded with players' self-assessments for their motivations was run for two different model types — Naive Bayes and a trigram model — to uncover three clusters each. For both model structures, most players were classified as belonging to a single large cluster that combined the goals of splashy plays, clever combos, and effective play, cross-cutting the original categories — a cautionary tale for research that assumes players can be classified into one category or another. However, subjects qualitatively report that the different model structures play very differently, with the Naive Bayes model more creatively combining cards.
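The paper runs EM over Naive Bayes and trigram models of player actions; the sketch below substitutes a Gaussian mixture (scikit-learn) on invented per-player features purely to illustrate the seeding idea, initialising EM's cluster means from the players' self-assessed motivations.

```python
# Illustrative sketch of EM seeded with self-assessments (not the paper's models).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy per-player feature vectors (e.g. action-frequency summaries); labels are
# the players' self-assessments: 0 = splashy plays, 1 = combos, 2 = effective.
X = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(20, 2)),
    rng.normal([3.0, 0.0], 0.3, size=(20, 2)),
    rng.normal([0.0, 3.0], 0.3, size=(20, 2)),
])
self_labels = np.repeat([0, 1, 2], 20)

# Seed EM: start each component's mean at the centroid of one self-assessed group.
means_init = np.stack([X[self_labels == k].mean(axis=0) for k in range(3)])
gm = GaussianMixture(n_components=3, means_init=means_init, random_state=0).fit(X)
clusters = gm.predict(X)
print(np.bincount(clusters, minlength=3))  # players per recovered cluster
```

The paper's finding that most players collapse into one large cluster would show up here as EM drifting the seeded components together rather than keeping the three groups distinct.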


Dominion -- A constraint solver generator

Kotthoff, Lars

arXiv.org Artificial Intelligence

This paper proposes a design for a system to generate constraint solvers that are specialised for specific problem models. It describes the design in detail and gives preliminary experimental results showing the feasibility and effectiveness of the approach.