
Patrol Strategies to Maximize Pristine Forest Area

AAAI Conferences

In many developing countries, illegal extraction of forest resources is fought by patrols that use the threat of confiscation to make the activity less profitable. With a limited budget, officials will try to distribute the patrols throughout the forest intelligently, in order to most effectively limit extraction. Prior work in forest economics has formalized this as a Stackelberg game, one very different in character from the discrete Stackelberg problem settings previously studied in the multiagent literature. Specifically, the leader wishes to minimize the distance by which a profit-maximizing extractor will trespass into the forest, or, equivalently, to maximize the radius of the remaining "pristine" forest area. The follower's cost-benefit analysis of potential trespass distances is affected by the likelihood of being caught and suffering confiscation. In this paper, we give a near-optimal patrol allocation algorithm and a 1/2-approximation algorithm, the latter of which is more efficient and yields simpler, more practical patrol allocations. Our simulations indicate that these algorithms substantially outperform existing heuristic allocations.
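
To make the follower's side of this Stackelberg game concrete, here is a minimal Python sketch, assuming a discretized forest, illustrative benefit/cost functions, and independent per-cell detection. These are stand-ins for exposition, not the paper's actual model or either of its allocation algorithms.

```python
import numpy as np

# Illustrative model only: the forest interior is discretized into cells
# ordered by distance from the boundary. The leader spreads a patrol budget
# over cells; an extractor who trespasses to depth d gains benefit(d), pays
# travel cost(d), and loses value_at_risk if detected in any cell en route.

def follower_best_response(patrol, benefit, cost, value_at_risk):
    """Trespass depth (number of cells) maximizing the extractor's expected
    profit, or 0 if staying out of the forest is best."""
    n = len(patrol)
    pass_prob = np.cumprod(1.0 - patrol)        # P(undetected through cell k)
    depths = np.arange(1, n + 1)
    profit = pass_prob * benefit(depths) - cost(depths) \
             - (1.0 - pass_prob) * value_at_risk
    best = int(np.argmax(profit))
    return best + 1 if profit[best] > 0 else 0

if __name__ == "__main__":
    n_cells, budget = 50, 2.0
    weights = np.linspace(2.0, 0.5, n_cells)    # heuristic: patrol near the edge
    patrol = np.clip(budget * weights / weights.sum(), 0.0, 1.0)
    d_star = follower_best_response(patrol,
                                    benefit=lambda d: 0.08 * d,
                                    cost=lambda d: 0.01 * d,
                                    value_at_risk=0.3)
    print("extractor trespasses to cell", d_star, "of", n_cells)
    print("pristine cells remaining:", n_cells - d_star)
```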


Fairness and Welfare Through Redistribution When Utility Is Transferable

AAAI Conferences

We join the goals of two major, related fields of research in group decision-making that have historically had little contact: fair division, and efficient mechanism design with monetary payments. To do this we adopt the standard mechanism design paradigm where utility is assumed to be quasilinear and thus transferable across agents. We generalize the traditional binary criteria of envy-freeness, proportionality, and efficiency (welfare) to measures of degree that range between 0 and 1. We demonstrate that, in the canonical fair division settings, under any allocatively efficient mechanism the worst-case welfare rate is 0 and the disproportionality rate is 1; in other words, the worst-case results are as bad as possible. This strongly motivates an average-case analysis. We then set as our goal the identification of a mechanism that achieves high welfare, low envy, and low disproportionality in expectation across a spectrum of fair division settings. We establish that the VCG mechanism is not a satisfactory candidate, but the redistribution mechanism of [Bailey, 1997; Cavallo, 2006] is.
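
For readers unfamiliar with the Bailey/Cavallo redistribution mechanism cited here, the following is a minimal sketch of its single-item form: run a Vickrey auction, then rebate each bidder 1/n of the Vickrey revenue that the other n-1 bids alone would generate. The bid values are made up for the demo.

```python
def cavallo_single_item(bids):
    """Vickrey auction plus Bailey/Cavallo rebates (single-item sketch).
    Returns the winner's index and each bidder's net payment."""
    n = len(bids)
    ranked = sorted(range(n), key=lambda i: -bids[i])
    winner = ranked[0]
    payments = [0.0] * n
    payments[winner] = bids[ranked[1]]           # Vickrey (second-price) payment
    for i in range(n):
        others = sorted((bids[j] for j in range(n) if j != i), reverse=True)
        # Vickrey revenue of the economy without bidder i is the second-highest
        # of the remaining bids; each bidder is rebated 1/n of that amount.
        rebate = others[1] / n if len(others) >= 2 else 0.0
        payments[i] -= rebate
    return winner, payments

if __name__ == "__main__":
    bids = [10.0, 8.0, 5.0, 3.0]
    winner, payments = cavallo_single_item(bids)
    print("winner:", winner)                     # bidder 0
    print("net payments:", payments)
    print("revenue not redistributed:", sum(payments))  # small but non-negative
```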


Generating Chinese Classical Poems with Statistical Machine Translation Models

AAAI Conferences

This paper describes a statistical approach to the generation of Chinese classical poetry and proposes a novel method to automatically evaluate poems. The system accepts a set of keywords representing the writer's intent and generates sentences one by one to form a complete poem. A statistical machine translation (SMT) system is applied to generate each new sentence, given the sentences generated previously. For each line, a model specifically trained for that line is used, as opposed to a single model for all sentences. To enhance coherence across lines, a coherence model based on mutual information is applied to select candidates that are more consistent with the previous sentences. In addition, we demonstrate the effectiveness of the BLEU metric for evaluation with a novel method of generating diverse references.
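
A hedged sketch of the coherence-reranking idea only (not the paper's actual SMT system or its models of Chinese verse): rank the SMT candidates for the next line by a mutual-information association score against the lines already generated, combined with the candidates' SMT scores. The toy English corpus, whitespace tokenization, and equal weighting are all illustrative assumptions.

```python
import math
from collections import Counter
from itertools import product

def build_pmi(corpus_lines):
    """PMI-style association between tokens that co-occur within a line."""
    unigrams, pairs = Counter(), Counter()
    for line in corpus_lines:
        toks = line.split()
        unigrams.update(toks)
        pairs.update((a, b) for a in toks for b in toks if a != b)
    n_uni, n_pair = sum(unigrams.values()), max(sum(pairs.values()), 1)
    def pmi(a, b):
        if (a, b) not in pairs:
            return 0.0
        return math.log((pairs[(a, b)] / n_pair) /
                        ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
    return pmi

def coherence(candidate, history, pmi):
    pairs = list(product(" ".join(history).split(), candidate.split()))
    return sum(pmi(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

def pick_next_line(candidates, smt_score, history, pmi, alpha=1.0):
    """Rerank SMT candidates by model score plus coherence with the history."""
    return max(candidates,
               key=lambda c: smt_score[c] + alpha * coherence(c, history, pmi))

if __name__ == "__main__":
    corpus = ["moon rises over cold river", "geese cross the autumn sky",
              "cold river carries the moon", "autumn wind stirs the reeds"]
    pmi = build_pmi(corpus)
    history = ["moon rises over cold river"]
    smt_score = {"autumn wind stirs the reeds": 0.30,
                 "cold river carries the moon": 0.28}
    print(pick_next_line(list(smt_score), smt_score, history, pmi))
```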


Rule Ensemble Learning Using Hierarchical Kernels in Structured Output Spaces

AAAI Conferences

The goal in Rule Ensemble Learning (REL) is simultaneous discovery of a small set of simple rules and their optimal weights that lead to good generalization. Rules are assumed to be conjunctions of basic propositions concerning the values taken by the input features. It has been shown that rule ensembles for classification can be learnt optimally and efficiently using hierarchical kernel learning approaches that explore the exponentially large space of conjunctions by exploiting its hierarchical structure. The regularizer employed penalizes large features and thereby selects a small set of short features. In this paper, we generalize the rule ensemble learning using hierarchical kernels (RELHKL) framework to multi-class structured output spaces. We build on the StructSVM model for sequence prediction problems and employ a ρ-norm hierarchical regularizer for observation features and a conventional 2-norm regularizer for state transition features. The exponentially large feature space is searched using an active set algorithm, and the exponentially large set of constraints is handled using a cutting plane algorithm. The approach can be easily extended to other structured output problems. We perform experiments on activity recognition datasets, which are prone to noise, sparseness, and skewness, and demonstrate that our approach outperforms other approaches.
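
The cutting-plane strategy mentioned here can be illustrated on a much simpler problem than the structural SVM objective. The sketch below applies constraint generation to a toy linear program with thousands of constraints: solve over a small working set, add the most violated constraint, repeat. It shows only the generic technique, not the paper's RELHKL algorithm, and the problem data are random stand-ins.

```python
import numpy as np
from scipy.optimize import linprog

# Constraint generation (cutting planes) on a toy LP with many constraints:
# maximize x1 + x2 subject to A x <= b, starting from a tiny working set and
# repeatedly adding the most violated constraint.

rng = np.random.default_rng(1)
A = rng.normal(size=(5000, 2))                 # 5000 linear constraints
b = np.ones(5000)
c = np.array([-1.0, -1.0])                     # linprog minimizes, so negate

working = [0]
for _ in range(100):
    res = linprog(c, A_ub=A[working], b_ub=b[working], bounds=[(-10, 10)] * 2)
    x = res.x
    violation = A @ x - b
    worst = int(np.argmax(violation))
    if violation[worst] <= 1e-9:               # all 5000 constraints satisfied
        break
    working.append(worst)                      # add the most violated cut

print(f"working set: {len(working)} of {len(b)} constraints, x = {x}")
print("max violation over all constraints:", float(violation.max()))
```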


Convex Kernelized Sorting

AAAI Conferences

Kernelized sorting is a method for aligning objects across two domains by considering within-domain similarity, without the need to specify a cross-domain similarity measure. In this paper we present the Convex Kernelized Sorting method, in which, unlike in previous approaches, the cross-domain object matching is formulated as a convex optimization problem, leading to simpler optimization and a globally optimal solution. Our method outputs soft alignments between objects, which can be used to rank the best matches for each object, or to visualize the object matching and verify the correct choice of kernel. It also allows for computing hard one-to-one alignments by solving the resulting Linear Assignment Problem. Experiments on a number of cross-domain matching tasks show the strength of the proposed method, which consistently achieves higher accuracy than existing methods.
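
The final step mentioned in the abstract, converting soft alignments into hard one-to-one alignments via a Linear Assignment Problem, can be sketched as follows; the soft alignment matrix here is random stand-in data rather than the output of the paper's convex program.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Round a soft alignment matrix (rows: domain A objects, columns: domain B
# objects) to a hard one-to-one matching by maximizing the total alignment
# weight, i.e. solving a Linear Assignment Problem.

rng = np.random.default_rng(0)
soft = rng.random((5, 5))
soft /= soft.sum(axis=1, keepdims=True)        # stand-in soft alignments

rows, cols = linear_sum_assignment(-soft)      # negate to maximize
for r, c in zip(rows, cols):
    print(f"A[{r}] <-> B[{c}]   (soft weight {soft[r, c]:.2f})")
```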


Optimal Auctions for Spiteful Bidders

AAAI Conferences

Designing revenue-optimal auctions for various settings is perhaps the most important, yet sometimes most elusive, problem in mechanism design. Spiteful bidders have been intensely studied recently, especially because spite occurs in many applications in multiagent systems and electronic commerce. We derive the optimal auction for such bidders (as well as for bidders that are altruistic). It is a generalization of Myerson's (1981) auction. It chooses an allocation that maximizes agents' virtual valuations, but for a generalized definition of virtual valuation. The payment rule is less intuitive. For one, it takes each bidder's own report into consideration when determining his payment. Moreover, bidders pay even if the seller keeps the item; a similar phenomenon has been shown in other settings with negative externalities (Jehiel, Moldovanu, and Stacchetti 1996; Deng and Pekec 2011). On the other hand, a novel aspect of our auction is that it sometimes subsidizes losers when the item is sold to some other bidder. We also derive a revenue equivalence theorem for this setting. Using it, we give a short proof of (a slight generalization of) the previously known result that, in two-bidder settings with independently and uniformly drawn valuations, second-price auctions yield greater expected revenue than first-price auctions. Finally, we present a template for comparing the expected revenues of any two auction mechanisms that have the same allocation rule (for the valuation distributions at hand).
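
As a point of reference for the generalization described here, the following is a sketch of the classical Myerson (1981) auction for bidders with i.i.d. U[0,1] valuations and no spite: allocate to the highest non-negative virtual valuation phi(v) = 2v - 1 and charge the threshold bid. It is the baseline the paper generalizes, not the spiteful-bidder mechanism itself.

```python
def myerson_uniform(bids):
    """Classical Myerson optimal auction for i.i.d. U[0,1] values: a
    second-price auction with reserve 0.5 (the zero of phi(v) = 2v - 1)."""
    phi = lambda v: 2.0 * v - 1.0          # virtual valuation for U[0,1]
    reserve = 0.5                          # phi(reserve) = 0
    eligible = [i for i, b in enumerate(bids) if phi(b) >= 0]
    if not eligible:
        return None, 0.0                   # seller keeps the item
    winner = max(eligible, key=lambda i: phi(bids[i]))
    # Threshold payment: the lowest bid with which the winner would still win.
    others = [bids[i] for i in eligible if i != winner]
    payment = max([reserve] + others)
    return winner, payment

if __name__ == "__main__":
    print(myerson_uniform([0.9, 0.7, 0.3]))   # (0, 0.7)
    print(myerson_uniform([0.4, 0.3]))        # (None, 0.0): all below reserve
```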


Modeling the Evolution of Knowledge in Learning Systems

AAAI Conferences

How do reasoning systems that learn evolve over time? What are the properties of different learning strategies? Characterizing the evolution of these systems is important for understanding their limitations and gaining insights into the interplay between learning and reasoning. We describe an inverse ablation model for studying how large knowledge-based systems evolve: Create a small knowledge base by ablating a large KB, and simulate learning by incrementally re-adding facts, using different strategies to simulate types of learners. For each iteration, reasoning properties (including number of questions answered and run time) are collected, to explore how learning strategies and reasoning interact. We describe several experiments with the inverse ablation model, examining how two different learning strategies perform. Our results suggest that different concepts show different rates of growth, and that the density and distribution of facts that can be learned are important parameters for modulating the rate of learning.
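
A hedged sketch of the experimental loop described above, with a toy knowledge base, two made-up learning strategies, and a stand-in "questions answered" metric in place of the paper's reasoning system:

```python
import random

def inverse_ablation(full_kb, seed, pick_batch, questions_answered, batch=25):
    """Start from an ablated seed KB, re-add facts in batches chosen by a
    learning strategy, and record (KB size, questions answered) per iteration."""
    kb = set(seed)
    remaining = [f for f in full_kb if f not in kb]
    history = []
    while remaining:
        chosen = pick_batch(kb, remaining, batch)
        remaining = [f for f in remaining if f not in chosen]
        kb.update(chosen)
        history.append((len(kb), questions_answered(kb)))
    return history

def random_strategy(kb, remaining, k):
    return set(random.sample(remaining, min(k, len(remaining))))

def dense_first_strategy(kb, remaining, k):
    # Prefer facts about concepts already well represented in the KB.
    counts = {}
    for concept, _ in kb:
        counts[concept] = counts.get(concept, 0) + 1
    ranked = sorted(remaining, key=lambda f: -counts.get(f[0], 0))
    return set(ranked[:k])

if __name__ == "__main__":
    random.seed(0)
    full_kb = {("concept%d" % (i % 10), i) for i in range(300)}   # toy facts
    seed = set(random.sample(sorted(full_kb), 30))
    def questions_answered(kb):
        # Toy metric: a concept's question is answerable once the KB holds
        # at least 20 facts about that concept.
        counts = {}
        for concept, _ in kb:
            counts[concept] = counts.get(concept, 0) + 1
        return sum(1 for c in counts.values() if c >= 20)
    for name, strat in [("random", random_strategy),
                        ("dense-first", dense_first_strategy)]:
        hist = inverse_ablation(full_kb, seed, strat, questions_answered)
        print(name, "questions answered over time:", [a for _, a in hist])
```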


Enriching Chatter Bots With Semantic Conversation Control

AAAI Conferences

Businesses deploy chatter bots to engage in text-based conversations with customers, intended to resolve their issues. However, these chatter bots are only effective in exchanges consisting of question-answer pairs, where the context may switch with every pair. I am designing a semantic architecture that enables chatter bots to hold short conversations in which context is maintained throughout the exchange. I leverage specific ideas from conversation theory, speech act theory, and knowledge representation. My architecture models a conversation as a stochastic process that flows through a set of states. The main contribution of this work is that it analyses and models the semantics of conversations as entities, instead of lower-level grammatical and linguistic forms. I evaluate the performance of the architecture in accordance with Grice's cooperative maxims, which form the central idea in the theory of pragmatics.
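
A minimal sketch of the stochastic-process view of a conversation, assuming a hand-picked set of semantic states and invented transition probabilities; the speech-act and knowledge-representation layers of the proposed architecture are not shown.

```python
import random

# Conversation modeled as a Markov chain over semantic states. The states
# and probabilities below are illustrative placeholders.
TRANSITIONS = {
    "greeting":     {"elicit_issue": 1.0},
    "elicit_issue": {"clarify": 0.6, "resolve": 0.4},
    "clarify":      {"clarify": 0.3, "resolve": 0.7},
    "resolve":      {"elicit_issue": 0.2, "close": 0.8},
    "close":        {},
}

def simulate_conversation(seed=0):
    random.seed(seed)
    state, path = "greeting", ["greeting"]
    while TRANSITIONS[state]:
        nxt, probs = zip(*TRANSITIONS[state].items())
        state = random.choices(nxt, weights=probs)[0]
        path.append(state)
    return path

if __name__ == "__main__":
    print(" -> ".join(simulate_conversation()))
```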


Improving Request Compliance through Robot Affect

AAAI Conferences

This paper describes the design and results of a human-robot interaction study aimed at determining the extent to which affective robotic behavior can influence participants' compliance with a humanoid robot's request in the context of a mock-up search-and-rescue setting. The results of the study argue for the inclusion of affect in robotic systems, showing that nonverbal expressions of negative mood (nervousness) and fear by the robot improved the participants' compliance with its request to evacuate, causing them to respond earlier and faster.


Generating Coherent Summaries with Textual Aspects

AAAI Conferences

Introduced in TAC 2010, aspect-guided summaries not only address specific user needs, but also improve content-level coherence by using aspect information. This paper presents a full-fledged system composed of three modules: finding sentence-level textual aspects, modeling aspect-based coherence with a hidden Markov model (HMM), and selecting and ordering sentences with aspect information to generate coherent summaries. The evaluation results on the TAC 2011 datasets show the superiority of aspect-guided summaries in terms of both information coverage and textual coherence.
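
A hedged sketch of the aspect-based ordering module: score candidate sentence orderings by the transition probabilities of their aspect sequence under an HMM-style coherence model. The aspects, sentences, and probabilities below are invented for illustration, and brute-force permutation search is only feasible for a handful of sentences.

```python
import itertools, math

# Illustrative aspect-transition model: P(next aspect | current aspect).
TRANS = {
    "WHAT":            {"WHAT": 0.2, "WHEN_WHERE": 0.5, "COUNTERMEASURES": 0.3},
    "WHEN_WHERE":      {"WHAT": 0.1, "WHEN_WHERE": 0.3, "COUNTERMEASURES": 0.6},
    "COUNTERMEASURES": {"WHAT": 0.1, "WHEN_WHERE": 0.2, "COUNTERMEASURES": 0.7},
}
START = {"WHAT": 0.7, "WHEN_WHERE": 0.2, "COUNTERMEASURES": 0.1}

def order_by_aspect(sentences):
    """sentences: list of (text, aspect) pairs. Return the ordering whose
    aspect sequence has the highest log-probability under the model."""
    def log_prob(perm):
        lp = math.log(START[perm[0][1]])
        for (_, a), (_, b) in zip(perm, perm[1:]):
            lp += math.log(TRANS[a][b])
        return lp
    return max(itertools.permutations(sentences), key=log_prob)

if __name__ == "__main__":
    sents = [("Authorities issued an evacuation order.", "COUNTERMEASURES"),
             ("A wildfire broke out near the city.", "WHAT"),
             ("It started on Tuesday in the northern hills.", "WHEN_WHERE")]
    for text, aspect in order_by_aspect(sents):
        print(f"[{aspect}] {text}")
```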