Nguyen, Truong-Huy Dinh (National University of Singapore) | Hsu, David (National University of Singapore) | Lee, Wee-Sun (National University of Singapore) | Leong, Tze-Yun (National University of Singapore) | Kaelbling, Leslie Pack (Massachusetts Institute of Technology) | Lozano-Perez, Tomas (Massachusetts Institute of Technology) | Grant, Andrew Haydn (Singapore-MIT GAMBIT Game Lab)
We apply decision-theoretic techniques to construct non-player characters that are able to assist a human player in collaborative games. The method is based on solving Markov decision processes, which can be difficult when the game state is described by many variables. To scale to more complex games, the method allows decomposition of a game task into subtasks, each of which can be modelled by a Markov decision process. Intention recognition is used to infer the subtask that the human is currently performing, allowing the helper to assist with the correct subtask. Experiments show that the method can be effective, giving near-human-level performance in assisting a human player in a collaborative game.
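The MDP-solving step described above can be illustrated with a standard value iteration sketch. The states, actions, and rewards below are toy values chosen for illustration, not the game model used in the paper.

```python
# Minimal value iteration sketch for a small MDP.
# transition(s, a) returns a list of (next_state, probability) pairs.
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected return over all actions.
            q = [reward(s, a) + gamma * sum(p * V[s2] for s2, p in transition(s, a))
                 for a in actions]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Toy two-state task: "go" reaches the goal state with probability 0.8;
# being in the goal state yields reward 1 per step.
states = ["start", "goal"]
actions = ["go", "stay"]

def transition(s, a):
    if s == "goal":
        return [("goal", 1.0)]
    return [("goal", 0.8), ("start", 0.2)] if a == "go" else [("start", 1.0)]

def reward(s, a):
    return 1.0 if s == "goal" else 0.0

V = value_iteration(states, actions, transition, reward)
```

In the decomposed setting, one such value function would be computed per subtask, and the intention-recognition module selects which subtask's policy the helper follows.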
When the amount of RDF data is very large, it becomes more likely that the triples describing entities will contain errors and may not include the specification of a class from a known ontology. The work presented here explores the use of machine learning methods to develop classifiers that identify the semantic category of an entity based on the property names used to describe it. The goal is to develop classifiers that are accurate but robust to errors and noise. The training data comes from DBpedia, where entities are categorized by type and densely described with RDF properties. The initial experimentation reported here indicates that the approach is promising.
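One simple way to classify entities from property names alone is a naive Bayes model over the bag of properties. The sketch below uses invented toy classes and properties in place of DBpedia data, and naive Bayes is one plausible choice of classifier, not necessarily the one the authors used.

```python
from collections import defaultdict
import math

# Naive Bayes over RDF property names (bag-of-properties), with
# Laplace smoothing so unseen properties do not zero out a class.
class PropertyNB:
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.prop_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, entities):
        # entities: list of (property_name_list, class_label)
        for props, label in entities:
            self.class_counts[label] += 1
            for p in props:
                self.prop_counts[label][p] += 1
                self.vocab.add(p)

    def classify(self, props):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in self.class_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.prop_counts[label].values()) + len(self.vocab)
            for p in props:
                lp += math.log((self.prop_counts[label][p] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Toy training data standing in for DBpedia-style descriptions.
nb = PropertyNB()
nb.train([(["birthDate", "birthPlace", "occupation"], "Person"),
          (["birthDate", "deathDate", "occupation"], "Person"),
          (["population", "areaKm", "mayor"], "City"),
          (["population", "country", "mayor"], "City")])
```

Because the model conditions only on which property names appear, it degrades gracefully when some triples are erroneous or missing, which matches the robustness goal stated above.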
This paper proposes an automatic model-based approach that enables adaptive decision making in modern virtual games. It builds on the Integrated MDP and POMDP Learning AgeNT (IMPLANT) architecture, which has been shown to provide plausible adaptive decision making in modern games but suffers from highly time-consuming manual model specification. By incorporating an automated priority-sweeping-based model builder for the MDP, as well as using the Tactical Agent Personality for the POMDP, this work aims to resolve this problem. An empirical proof of concept, based on an implementation in a modern game scenario, shows that the enhanced IMPLANT agent exhibits superior adaptation performance over the old IMPLANT agent while eliminating manual model specification and still maintaining plausible speeds.
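The priority-sweeping idea referenced above can be sketched as follows: after a model is learned, states whose value estimates would change the most are backed up first, via a priority queue. The deterministic toy model below is purely illustrative and is not the IMPLANT implementation.

```python
import heapq
from collections import defaultdict

# Prioritized sweeping over a learned deterministic model.
# model[s][a] = next state; rewards[(s, a)] = immediate reward.
def prioritized_sweeping(model, rewards, gamma=0.9, theta=1e-4, n_updates=1000):
    V = defaultdict(float)
    # Predecessor table: which (state, action) pairs lead into each state.
    preds = defaultdict(set)
    for s in model:
        for a, s2 in model[s].items():
            preds[s2].add((s, a))

    def backup(s):
        return max(rewards[(s, a)] + gamma * V[s2] for a, s2 in model[s].items())

    pq = []  # max-heap via negated priorities
    for s in model:
        p = abs(backup(s) - V[s])
        if p > theta:
            heapq.heappush(pq, (-p, s))

    for _ in range(n_updates):
        if not pq:
            break
        _, s = heapq.heappop(pq)
        V[s] = backup(s)
        # Re-prioritize predecessors whose values are now stale.
        for sp, a in preds[s]:
            p = abs(backup(sp) - V[sp])
            if p > theta:
                heapq.heappush(pq, (-p, sp))
    return V

# Toy three-state chain: s0 -> s1 -> goal, with reward 1 for staying at goal.
model = {"s0": {"go": "s1"}, "s1": {"go": "goal"}, "goal": {"stay": "goal"}}
rewards = {("s0", "go"): 0.0, ("s1", "go"): 0.0, ("goal", "stay"): 1.0}
V = prioritized_sweeping(model, rewards)
```

Focusing backups where they matter most is what lets an automatically built model be solved fast enough to keep planning speeds plausible at run time.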
Kriegspiel, or partially observable chess, is appealing to the AI community due to its similarity to real-world applications in which a decision maker is not the only agent changing the environment. This paper applies the framework of Interactive POMDPs to design a competent Kriegspiel player. The novel element, compared to existing approaches, is to model the opponent as a competent player and to predict his likely moves. The moves of our own player can then be computed based on these predictions. The problem is challenging because, first, there are many possible world states the agent has to keep track of.
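Tracking the many possible world states comes down to a Bayesian belief update: a probability distribution over hidden states is pushed through the transition model and reweighted by the likelihood of each observation. The sketch below uses abstract toy states and likelihoods, not an actual chess model.

```python
# Belief update over hidden states, as a POMDP-style player performs
# after each observation.
# belief: {state: prob}; transition: {s: {s2: prob}};
# obs_likelihood(obs, s2) -> P(obs | s2).
def belief_update(belief, transition, obs_likelihood, obs):
    new_belief = {}
    reachable = {t for s in belief for t in transition[s]}
    for s2 in reachable:
        # Predict: push the belief through the transition model.
        prior = sum(p * transition[s].get(s2, 0.0) for s, p in belief.items())
        # Correct: reweight by the observation likelihood.
        new_belief[s2] = obs_likelihood(obs, s2) * prior
    z = sum(new_belief.values())
    if z == 0:
        raise ValueError("observation impossible under current belief")
    return {s: p / z for s, p in new_belief.items()}

# Toy example: two hidden states, static transitions, one informative observation.
transition = {"A": {"A": 1.0}, "B": {"B": 1.0}}
likelihoods = {("hit", "A"): 0.9, ("hit", "B"): 0.1}
b = belief_update({"A": 0.5, "B": 0.5}, transition,
                  lambda o, s: likelihoods[(o, s)], "hit")
```

In the interactive setting, the hidden state additionally includes a model of the opponent, so this update is nested: the agent's belief ranges over board states together with the opponent's own beliefs and intentions.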
Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.
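One family of provable topic-inference algorithms rests on an anchor-word assumption: each topic has a word that occurs only in that topic, so every word's co-occurrence profile is a convex combination of the anchor words' profiles. The sketch below demonstrates that recovery step on a small noiseless example with invented matrices; it is a simplified illustration of the idea, not the paper's full algorithm.

```python
import numpy as np

# Toy topic-word matrix A (columns sum to 1): w0 and w1 are anchors.
A = np.array([[0.5, 0.0],   # w0: anchor of topic 0
              [0.0, 0.6],   # w1: anchor of topic 1
              [0.5, 0.4]])  # w2: appears in both topics
R = np.array([[0.4, 0.1],
              [0.1, 0.4]])  # topic-topic co-occurrence
Q = A @ R @ A.T             # word-word co-occurrence matrix (sums to 1)

p_w = Q.sum(axis=1)         # word marginals P(w)
Q_bar = Q / p_w[:, None]    # row-normalized: row w is P(w' | w)
anchors = [0, 1]

# Express each word's row as a combination of the anchor rows;
# the coefficients are P(topic | word).
C, *_ = np.linalg.lstsq(Q_bar[anchors].T, Q_bar.T, rcond=None)
C = C.T                     # C[w, k] = P(topic k | word w)

# Invert with Bayes' rule: P(w | topic) proportional to P(topic | w) P(w).
A_rec = C * p_w[:, None]
A_rec /= A_rec.sum(axis=0)
```

Because every step is linear algebra on the co-occurrence matrix rather than iterative sampling, this style of recovery is what makes such algorithms orders of magnitude faster than MCMC while still admitting provable guarantees under the model assumptions.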