Balancing Safety and Exploitability in Opponent Modeling

AAAI Conferences

Opponent modeling is a critical mechanism in repeated games. It allows a player to adapt its strategy in order to better respond to the presumed preferences of its opponents. We introduce a new modeling technique that adaptively balances exploitability and risk reduction. An opponent's strategy is modeled with a set of possible strategies that contains the actual strategy with high probability. The algorithm is safe, as its expected payoff is above the minimax payoff with high probability, and it can exploit the opponents' preferences once sufficient observations have been obtained. We apply the technique to normal-form games and to stochastic games with a finite number of stages. Its performance is first demonstrated on repeated rock-paper-scissors games. The approach is then evaluated in a human-robot table-tennis setting where the robot player learns to prepare to return a served ball. By modeling the human player, the robot chooses a forehand, backhand, or middle preparation pose before the human serves. The learned strategies can exploit the opponent's preferences, leading to a higher rate of successful returns.
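
To make the balancing idea concrete, the following is a minimal sketch (not the paper's algorithm) of safety-versus-exploitation switching in repeated rock-paper-scissors: play the minimax (uniform) strategy until a confidence set around the opponent's empirical mixture is tight enough, then best-respond to the estimate. The Hoeffding-style radius and the 0.1 threshold are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Row player's payoff matrix: rows/cols = rock, paper, scissors.
    PAYOFF = np.array([[ 0, -1,  1],
                       [ 1,  0, -1],
                       [-1,  1,  0]])

    def choose_action(opponent_counts, delta=0.05):
        n = opponent_counts.sum()
        if n == 0:
            return int(rng.integers(3))            # no data yet: play minimax (uniform)
        p_hat = opponent_counts / n                # empirical opponent strategy
        radius = np.sqrt(np.log(2 / delta) / (2 * n))  # Hoeffding-style confidence radius
        if radius > 0.1:                           # confidence set still too loose: stay safe
            return int(rng.integers(3))            # uniform play guarantees the minimax payoff
        return int(np.argmax(PAYOFF @ p_hat))      # exploit: best response to the estimate

    # After 200 observations of a rock-heavy opponent, the player shifts to paper.
    counts = np.array([160, 20, 20])
    print(choose_action(counts))                   # 1 (paper)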


Improving Cost-Optimal Domain-Independent Symbolic Planning

AAAI Conferences

Symbolic search with BDDs has shown remarkable performance for cost-optimal deterministic planning by exploiting a succinct representation and exploration of state sets. In this paper we enhance BDD-based planning with a combination of domain-independent search techniques: optimization of the variable ordering in the BDD by approximating the linear arrangement problem, pattern selection for improved construction of search heuristics in the form of symbolic partial pattern databases, and a decision procedure for the amount of bidirection in the symbolic search process.
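
As a toy illustration of the bidirection decision, the sketch below grows whichever frontier (forward from the start or backward from the goal) is currently smaller, a stand-in for choosing the cheaper BDD image computation. Explicit Python sets replace the BDD-represented state sets, and the example graph is an assumption.

    from collections import defaultdict

    def meet_in_middle(succ, pred, start, goal):
        fwd, bwd = {start}, {goal}                 # current frontiers
        fwd_all, bwd_all = {start}, {goal}         # all states reached so far
        turn = 0
        while not (fwd_all & bwd_all):
            if not fwd and not bwd:
                return set()                       # both sides exhausted: unreachable
            # Decision procedure: grow the smaller frontier; alternate on ties.
            grow_fwd = bool(fwd) and (not bwd or len(fwd) < len(bwd)
                                      or (len(fwd) == len(bwd) and turn % 2 == 0))
            if grow_fwd:
                fwd = {t for s in fwd for t in succ[s]} - fwd_all
                fwd_all |= fwd
            else:
                bwd = {s for t in bwd for s in pred[t]} - bwd_all
                bwd_all |= bwd
            turn += 1
        return fwd_all & bwd_all                   # states where the two searches meet

    # Example: the chain 0 -> 1 -> 2 -> 3 -> 4.
    succ = defaultdict(set, {i: {i + 1} for i in range(4)})
    pred = defaultdict(set, {i + 1: {i} for i in range(4)})
    print(meet_in_middle(succ, pred, 0, 4))        # {2}: the frontiers meet in the middle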


A Nonparametric Bayesian Model of Multi-Level Category Learning

AAAI Conferences

Categories are often organized into hierarchical taxonomies, that is, tree structures where each node represents a labeled category and a node's parent and children are, respectively, the category's supertype and subtypes. A natural question is whether category taxonomies can be reconstructed when we are given no explicit information about how categories are related to each other, but only a sample of observations of the members of each category. In this paper, we introduce a nonparametric Bayesian model of multi-level category learning, an extension of the hierarchical Dirichlet process (HDP) that we call the tree-HDP. We demonstrate the ability of the tree-HDP to reconstruct simulated datasets of artificial taxonomies and show that it performs similarly to human learners on a taxonomy inference task.
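
The following toy generative sketch (not the tree-HDP itself, and using Gaussian components rather than Dirichlet processes) illustrates the kind of data such a model must invert: subcategory distributions are drawn around their supertype's distribution, so observations cluster by parent. All parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_taxonomy(n_super=2, n_sub=3, n_obs=50):
        data = []
        for s in range(n_super):
            super_mean = rng.normal(0.0, 10.0, size=2)      # supertype center
            for b in range(n_sub):
                sub_mean = rng.normal(super_mean, 1.0)      # subtype stays near its parent
                x = rng.normal(sub_mean, 0.3, size=(n_obs, 2))
                data.append(((s, b), x))
        return data

    # A taxonomy learner sees only (subcategory label, observations) pairs and must
    # recover which subcategories share a supertype; here, nearby sub-means do.
    for label, x in sample_taxonomy():
        print(label, x.mean(axis=0).round(2))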


A Modular Consistency Proof for DOLCE

AAAI Conferences

We propose a novel technique for proving the consistency of large, complex and heterogeneous theories for which ‘standard’ automated reasoning methods are considered insufficient. In particular, we exemplify the applicability of the method by establishing the consistency of the foundational ontology DOLCE, a large, first-order ontology. The approach we advocate constructs a global model for a theory, in our case DOLCE, built from smaller models of subtheories together with amalgamability properties between such models. The proof proceeds by (i) hand-crafting a so-called architectural specification of DOLCE which reflects the way models of the theory can be built, (ii) an automated verification of the amalgamability conditions, and (iii) a (partially automated) series of relative consistency proofs.
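
A toy propositional analogue (vastly simpler than DOLCE, and purely an illustration) of the amalgamation idea: models of two subtheories that agree on their shared vocabulary combine into a model of the union theory, witnessing its consistency.

    from itertools import product

    def models(theory, symbols):
        # Enumerate the truth assignments over `symbols` satisfying every axiom.
        result = []
        for vals in product([False, True], repeat=len(symbols)):
            m = dict(zip(symbols, vals))
            if all(ax(m) for ax in theory):
                result.append(m)
        return result

    # T1 over {p, q}: p holds, and p implies q.   T2 over {q, r}: q implies r.
    T1 = [lambda m: m["p"], lambda m: (not m["p"]) or m["q"]]
    T2 = [lambda m: (not m["q"]) or m["r"]]

    # Amalgamation: pair up models that agree on the shared symbol q; the merged
    # assignment is a model of T1 union T2.
    for m1 in models(T1, ["p", "q"]):
        for m2 in models(T2, ["q", "r"]):
            if m1["q"] == m2["q"]:
                print({**m1, **m2})                # {'p': True, 'q': True, 'r': True}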


Transfer Learning for Multiple-Domain Sentiment Analysis — Identifying Domain Dependent/Independent Word Polarity

AAAI Conferences

Sentiment analysis is the task of determining the attitude (positive or negative) of documents. While the polarity of words in a document is informative for this task, the polarity of some words cannot be determined without domain knowledge. Detecting word polarity thus poses a challenge for multiple-domain sentiment analysis. Previous approaches tackle this problem with transfer learning techniques, but they cannot handle multiple source domains and multiple target domains. This paper proposes a novel Bayesian probabilistic model that handles multiple source and multiple target domains. In this model, each word is associated with three factors: domain label, domain dependence/independence, and word polarity. We derive an efficient Gibbs sampling algorithm for inferring the parameters of the model from both labeled and unlabeled texts. Using real data, we demonstrate the effectiveness of our model on a document polarity classification task, compared with a method that does not consider differences between domains. Moreover, our method can tell whether each word's polarity is domain-dependent or domain-independent, which allows us to construct a word polarity dictionary for each domain.
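
As a much-simplified sketch of the domain dependence/independence factor (not the paper's full Gibbs sampler), the code below compares, for one word, the Beta-Bernoulli marginal likelihood of a single shared polarity across domains against independent per-domain polarities. The counts are assumed to come from labeled training text.

    from math import lgamma

    def log_beta_binom(pos, neg, a=1.0, b=1.0):
        # Log marginal likelihood of pos/neg counts under a Beta(a, b) prior.
        return (lgamma(a + b) - lgamma(a) - lgamma(b)
                + lgamma(a + pos) + lgamma(b + neg) - lgamma(a + b + pos + neg))

    def domain_dependent(counts_per_domain):
        # counts_per_domain[d] = (occurrences in positive docs, in negative docs).
        shared = log_beta_binom(sum(p for p, _ in counts_per_domain),
                                sum(n for _, n in counts_per_domain))
        split = sum(log_beta_binom(p, n) for p, n in counts_per_domain)
        return split > shared                      # more evidence for per-domain polarity?

    # "small" is positive for laptops but negative for hotel rooms;
    # "great" is positive everywhere.
    print(domain_dependent([(40, 5), (4, 38)]))    # True: polarity flips across domains
    print(domain_dependent([(45, 3), (50, 4)]))    # False: one shared polarity suffices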


Learning Structured Embeddings of Knowledge Bases

AAAI Conferences

Many Knowledge Bases (KBs) are now readily available and encompass colossal quantities of information, thanks either to long-term funding efforts (e.g. WordNet, OpenCyc) or to collaborative processes (e.g. Freebase, DBpedia). However, each of them is based on a different rigorous symbolic framework, which makes it hard to use their data in other systems. This is unfortunate, because such rich structured knowledge might lead to a huge leap forward in many other areas of AI, such as natural language processing (word-sense disambiguation, natural language understanding, ...), vision (scene classification, image semantic annotation, ...) or collaborative filtering. In this paper, we present a learning process based on an innovative neural network architecture designed to embed any of these symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced. These learnt embeddings would allow data from any KB to be easily used in recent machine learning methods for prediction and information retrieval. We illustrate our method on WordNet and Freebase and also present a way to adapt it to knowledge extraction from raw text.
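
To make the embedding idea concrete, here is a minimal numpy sketch in the spirit of the architecture: entities share one vector space and each relation acts through a pair of linear maps, so a triple (left entity, relation, right entity) scores well when the mapped embeddings are close. The dimensions, random initialization, and L1 distance are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_entities, n_relations, d = 100, 5, 20

    E = rng.normal(size=(n_entities, d))           # one embedding per entity
    R_lhs = rng.normal(size=(n_relations, d, d))   # relation-specific map, left side
    R_rhs = rng.normal(size=(n_relations, d, d))   # relation-specific map, right side

    def score(lhs, rel, rhs):
        # Lower is better: distance between the projected left and right entities.
        return np.abs(R_lhs[rel] @ E[lhs] - R_rhs[rel] @ E[rhs]).sum()

    def rank_right_entities(lhs, rel):
        # Link prediction: order all candidate right entities for (lhs, rel, ?).
        return np.argsort([score(lhs, rel, c) for c in range(n_entities)])

    print(rank_right_entities(3, 1)[:5])           # the five most plausible completions

Training would then push observed triples to score better than corrupted ones, for instance with a margin-based ranking loss over the embedding parameters.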


Discovering Life Cycle Assessment Trees from Impact Factor Databases

AAAI Conferences

In recent years, environmental sustainability has received widespread attention due to the continued depletion of natural resources and degradation of the environment. Life cycle assessment (LCA) is a methodology for quantifying multiple environmental impacts of a product across its entire life cycle, from creation to use to disposal. The key object of interest in LCA is the inventory tree, with the desired product as the root node and the materials and processes used across its life cycle as the children. The total impact of the parent in any environmental category is a linear combination of the impacts of the children in that category. LCA has generally been used in "forward" mode: given an inventory tree and the impact factors of its children, the task is to compute the impact factors of the root, i.e., the product being modeled. We propose a data mining approach to solve the inverse problem, where the task is to infer inventory trees from a database of environmental factors. This is an important problem, with applications not just in understanding what parts and processes constitute a product but also in designing and developing more sustainable alternatives. Our solution methodology is one of feature selection, set in the context of a non-negative least squares problem. It organizes numerous non-negative least squares fits over the impact factor database into a set of pairwise membership relations, which are then summarized into candidate trees, in turn yielding a consensus tree. We demonstrate the applicability of our approach on real LCA datasets obtained from a large computer manufacturer.
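
The core fitting step can be sketched with an off-the-shelf non-negative least squares solver: a target product's impact vector is explained as a non-negative combination of other products' impact vectors, and the nonzero coefficients nominate candidate children in the inventory tree. The toy numbers below are assumptions.

    import numpy as np
    from scipy.optimize import nnls

    # Rows = impact categories (e.g., CO2, water, energy); columns = candidate parts.
    A = np.array([[1.0, 0.2, 3.0],
                  [0.5, 1.5, 0.1],
                  [2.0, 0.3, 0.7]])

    # Target product secretly built from 2x part 0 + 1x part 1.
    b = 2.0 * A[:, 0] + 1.0 * A[:, 1]

    coeffs, residual = nnls(A, b)
    print(coeffs.round(3))              # ~[2.0, 1.0, 0.0]: parts 0 and 1 are children
    print(round(residual, 6))           # ~0: the candidate tree explains the impacts

Repeating such fits across the database yields the pairwise membership relations that the method summarizes into candidate trees and, finally, a consensus tree.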


Multi-Task Learning in Square Integrable Space

AAAI Conferences

Several kernel-based methods for multi-task learning have been proposed that leverage relations among tasks as regularization to enhance overall learning accuracy. These methods assume that the tasks share the same kernel, which can limit their applicability, because in practice different tasks may need different kernels. The main challenge of introducing multiple kernels into multiple tasks is that models from different Reproducing Kernel Hilbert Spaces (RKHSs) are not comparable, making it difficult to exploit relations among tasks. This paper addresses the challenge by formalizing the problem in the Square Integrable Space (SIS). Specifically, it proposes a kernel-based method that uses a regularization term defined in the SIS to represent task relations. We prove a new representer theorem for the proposed approach in SIS. We further derive a practical method for solving the learning problem and conduct a consistency analysis of the method. We discuss the relations between our method and an existing method, and we give an SVM-based implementation of our method for multi-label classification. Experiments on two real-world data sets show that the proposed method performs better than the existing method.
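
The abstract does not state the objective, but one plausible shape for such an SIS-regularized problem (an assumption for illustration, not the paper's exact formulation) is

    \min_{f_1,\dots,f_T} \sum_{t=1}^{T} \sum_{i=1}^{n_t} L\bigl(y_{ti}, f_t(x_{ti})\bigr)
      + \lambda_1 \sum_{t=1}^{T} \lVert f_t \rVert_{\mathcal{H}_t}^2
      + \lambda_2 \sum_{s<t} \lVert f_s - f_t \rVert_{L^2(\rho)}^2,

where each f_t lives in its own RKHS \mathcal{H}_t with its own kernel, while the coupling term compares tasks in L^2(\rho), a space in which functions from different RKHSs are mutually comparable; this is precisely the comparability obstacle the SIS formulation removes.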


WikiSimple: Automatic Simplification of Wikipedia Articles

AAAI Conferences

Text simplification aims to rewrite text into simpler versions, making information accessible to a broader audience (e.g., non-native speakers, children, and individuals with language impairments). In this paper, we propose a model that simplifies documents automatically by selecting their most important content and rewriting it in a simpler style. We learn content selection rules from same-topic Wikipedia articles written in the main encyclopedia and its Simple English variant. We also use the revision histories of Simple Wikipedia articles to learn a quasi-synchronous grammar of simplification rewrite rules. Based on an integer linear programming formulation, we develop a joint model in which preferences based on content and style are optimized simultaneously. Experiments on simplifying main Wikipedia articles show that our method significantly reduces reading difficulty while still capturing the important content.
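
A minimal PuLP sketch of the joint-selection idea: choose sentences that maximize a combined content-plus-simplicity score under a length budget. The scores, budget, and toy sentences are assumptions, not the paper's model, which also optimizes rewrite rules.

    import pulp

    sentences = ["The cat sat.", "Quantum chromodynamics is hard.", "Dogs bark."]
    content = [0.9, 0.8, 0.3]                      # importance of each sentence
    simplicity = [0.9, 0.1, 0.8]                   # reading ease of each sentence
    lengths = [len(s.split()) for s in sentences]
    budget = 7                                     # maximum words in the output

    prob = pulp.LpProblem("simplify", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(sentences))]
    prob += pulp.lpSum((content[i] + simplicity[i]) * x[i] for i in range(len(sentences)))
    prob += pulp.lpSum(lengths[i] * x[i] for i in range(len(sentences))) <= budget

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    # Keeps the two simple, important sentences and drops the difficult one.
    print([s for s, v in zip(sentences, x) if v.value() == 1])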


Fast Query Recommendation by Search

AAAI Conferences

Query recommendation can not only effectively help users obtain their desired information but also increase ads' click-through rates. This paper presents a general and highly efficient method for query recommendation. Given query sessions, we automatically generate many similar and dissimilar query pairs as prior knowledge. We then learn a transformation from this prior knowledge that moves similar queries closer together, so that similar queries tend to have similar hash values. This is formulated as minimizing the empirical error on the prior knowledge while maximizing the gap between the data and a set of partition hyperplanes generated randomly in advance. In the recommendation stage, we search for queries whose hash values are similar to that of the given query, rank the retrieved queries, and return the top K as the recommendation result. Experimental results demonstrate that our method achieves encouraging results in terms of both efficiency and recommendation performance.
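
The retrieval stage can be sketched with random-hyperplane hashing: map query vectors to short binary codes and recommend stored queries whose codes are close in Hamming distance. The learned transformation from the paper is replaced here by raw random vectors, an assumption made to keep the sketch self-contained.

    import numpy as np

    rng = np.random.default_rng(0)
    n_queries, d, n_bits = 1000, 64, 16

    X = rng.normal(size=(n_queries, d))            # stored query representations
    W = rng.normal(size=(d, n_bits))               # random partition hyperplanes
    codes = X @ W > 0                              # one n_bits binary code per stored query

    def recommend(q, k=5):
        code = q @ W > 0
        hamming = (codes != code).sum(axis=1)      # distance to every stored code
        return np.argsort(hamming)[:k]             # indices of the top-k candidates

    print(recommend(X[42]))                        # query 42's own code ranks first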