
Comparing Agents' Success against People in Security Domains

AAAI Conferences

The interaction of people with autonomous agents has become increasingly prevalent. Some of these settings include security domains, where people can be characterized as uncooperative, hostile, manipulative, and tending to take advantage of the situation for their own needs. This makes it challenging to design proficient agents to interact with people in such environments. Automatically evaluating the success of agents before testing them with people or deploying them could alleviate this challenge and result in better-designed agents. In this paper we show how Peer Designed Agents (PDAs) -- computer agents developed by human subjects -- can be used as a method for evaluating autonomous agents in security domains. Such evaluation can reduce the effort and costs involved in validating the efficacy of autonomous agents that interact with people. Our experiments included more than 70 human subjects and 40 PDAs developed by students. The study provides empirical support that PDAs can be used to compare the proficiency of autonomous agents when matched with people in security domains.
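
A minimal sketch of this evaluation methodology, assuming a toy win-probability model: play_game, the skill numbers, and the pool sizes below are all invented here, not taken from the study. A candidate agent is matched against a pool of PDAs and a pool of simulated human opponents; the PDA pool is a useful proxy if both pools rank candidate agents the same way.

import random
from statistics import mean

# Toy win model: the agent wins with probability proportional to its skill.
def play_game(agent_skill, opponent_skill, rng):
    # 1 if the candidate agent wins this security-game encounter, else 0.
    return int(rng.random() < agent_skill / (agent_skill + opponent_skill))

def success_rate(agent_skill, opponents, rng, games=50):
    return mean(play_game(agent_skill, o, rng) for o in opponents for _ in range(games))

rng = random.Random(0)
candidates = {"agent_A": 1.4, "agent_B": 1.0, "agent_C": 0.7}
pda_pool = [rng.uniform(0.5, 1.5) for _ in range(40)]     # 40 PDAs
human_pool = [rng.uniform(0.5, 1.5) for _ in range(70)]   # 70 human subjects

# If PDAs are a good proxy, both pools should rank the candidates the same way.
for name, skill in candidates.items():
    print(name,
          "vs PDAs:", round(success_rate(skill, pda_pool, rng), 3),
          "vs humans:", round(success_rate(skill, human_pool, rng), 3))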


Branch and Price for Multi-Agent Plan Recognition

AAAI Conferences

The problem of identifying the (dynamic) team structures and team behaviors from the observed activities of multiple agents is called Multi-Agent Plan Recognition (MAPR). We extend a recent formalization of this problem to accommodate a compact, partially ordered, multi-agent plan language, as well as complex plan execution models — particularly plan abandonment and activity interleaving. We adopt a branch and price approach to solve MAPR in such a challenging setting, and fully instantiate the (generic) pricing problem for MAPR. We show experimentally that this approach outperforms a recently proposed hypothesis pruning algorithm in two domains: multi-agent blocks world, and intrusion detection. The key benefit of the branch and price approach is its ability to grow the necessary component (occurrence) space from which the hypotheses are constructed, rather than begin with a fully enumerated component space of intractable size and search it with pruning. Our formulation of MAPR has the broad objective of bringing mature Operations Research methodologies to bear upon MAPR, with an envisaged impact similar to that of mature SAT solvers on planning.
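
The following is a toy sketch of the column-generation loop at the core of a branch-and-price solver, not the paper's algorithm: observations are abstract items, the occurrence cost function is contrived, pricing is brute force over a small enumerated pool (the paper instead grows occurrences with a combinatorial pricing problem), and branching is omitted entirely. It assumes SciPy's HiGHS backend (SciPy >= 1.7), which reports equality-constraint duals via res.eqlin.marginals.

import itertools
import numpy as np
from scipy.optimize import linprog

# Toy MAPR-style master problem: each observed activity must be explained by
# exactly one "occurrence" (a team executing part of a plan). The items, the
# cost model, and the enumerated candidate pool are all invented.
items = range(6)                                   # observed activities
def cost(occ):                                     # hypothetical occurrence cost
    return 1.0 + 0.4 * len(occ)

candidates = [frozenset(s) for r in (1, 2, 3)
              for s in itertools.combinations(items, r)]
columns = [frozenset([i]) for i in items]          # initial restricted master

while True:
    A = np.array([[float(i in occ) for occ in columns] for i in items])
    c = np.array([cost(occ) for occ in columns])
    res = linprog(c, A_eq=A, b_eq=np.ones(len(items)),
                  bounds=(0, None), method="highs")
    duals = res.eqlin.marginals                    # dual price per activity
    pool = [occ for occ in candidates if occ not in columns]
    if not pool:
        break
    best = min(pool, key=lambda occ: cost(occ) - sum(duals[i] for i in occ))
    if cost(best) - sum(duals[i] for i in best) > -1e-9:
        break                                      # no column can improve the LP
    columns.append(best)                           # "grow" the occurrence space

print("LP relaxation bound:", round(res.fun, 3))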


The Inter-League Extension of the Traveling Tournament Problem and its Application to Sports Scheduling

AAAI Conferences

With the recent inclusion of inter-league games in professional sports leagues, a natural question is to determine the "best possible" inter-league schedule that retains all of the league's scheduling constraints to ensure competitive balance and fairness, while minimizing the total travel distance for both economic and environmental efficiency. To answer that question, this paper introduces the Bipartite Traveling Tournament Problem (BTTP), the inter-league extension of the well-studied Traveling Tournament Problem. We prove that the 2n-team BTTP is NP-complete, but for small values of n, a distance-optimal inter-league schedule can be generated by an algorithm based on minimum-weight 4-cycle-covers. We apply our algorithm to the 12-team Nippon Professional Baseball (NPB) league in Japan, creating an inter-league tournament that reduces total team travel by 16% compared to the actual schedule played by these teams during the 2010 NPB season. We also analyze the problem of inter-league scheduling for the 30-team National Basketball Association (NBA), and develop a tournament schedule whose total inter-league travel distance is just 3.8% higher than the trivial theoretical lower bound.
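
As a concrete illustration of a travel lower bound of the trivial kind the abstract mentions (though not necessarily the exact bound used in the paper), the sketch below relaxes all scheduling constraints and lets each team cover all of its inter-league away venues in one optimally ordered road trip. The leagues, the coordinates, and the one-away-game-per-venue assumption are invented.

import itertools, math

league_X = {"X1": (0, 0), "X2": (2, 0), "X3": (4, 1)}   # made-up home venues
league_Y = {"Y1": (0, 3), "Y2": (3, 3), "Y3": (5, 2)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def best_road_trip(home, venues):
    # Shortest home -> all venues -> home tour, brute force (fine for small n).
    return min(dist(home, order[0])
               + sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))
               + dist(order[-1], home)
               for order in itertools.permutations(venues))

# Relaxation: ignore dates and home/away patterns; each team makes one optimal
# trip through every opposing venue. With Euclidean distances, any feasible
# schedule travels at least this much, so it lower-bounds the BTTP objective.
bound = (sum(best_road_trip(h, list(league_Y.values())) for h in league_X.values())
         + sum(best_road_trip(h, list(league_X.values())) for h in league_Y.values()))
print(f"trivial lower bound on total inter-league travel: {bound:.2f}")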


Selective Transfer Between Learning Tasks Using Task-Based Boosting

AAAI Conferences

The success of transfer learning on a target task is highly dependent on the selected source data. Instance transfer methods reuse data from the source tasks to augment the training data for the target task. If poorly chosen, this source data may inhibit learning, resulting in negative transfer. The most widely used algorithm for instance transfer, TrAdaBoost, performs poorly when given irrelevant source data. We present a novel task-based boosting technique for instance transfer that selectively chooses the source knowledge to transfer to the target task. Our approach performs boosting at both the instance level and the task level, assigning higher weight to those source tasks that show positive transferability to the target task, and adjusting the weights of individual instances within each source task via AdaBoost. We show that this combination of task- and instance-level boosting significantly improves transfer performance over existing instance transfer algorithms when given a mix of relevant and irrelevant source data, especially when target-task data is scarce.
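
A simplified sketch of the two-level idea, not the paper's exact algorithm (which folds transferability into the boosting rounds): here the task-level weights come from a one-off transferability estimate on a held-out target split, and the instance-level boosting is plain scikit-learn AdaBoost over the reweighted union. All tasks are synthetic.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_task(flip_prob, n=200):
    # Synthetic tasks share the true boundary x0 > 0; flip_prob controls relevance.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    y[rng.random(n) < flip_prob] ^= 1
    return X, y

X_t, y_t = make_task(0.05, n=60)                      # small target task
Xtr, ytr, Xval, yval = X_t[:30], y_t[:30], X_t[30:], y_t[30:]
sources = {"relevant": make_task(0.05), "irrelevant": make_task(0.90)}

def val_acc(X, y):
    return LogisticRegression().fit(X, y).score(Xval, yval)

base = val_acc(Xtr, ytr)
# Task level: weight each source task by its measured transferability.
task_w = {name: max(val_acc(np.vstack([Xtr, Xs]), np.hstack([ytr, ys])) - base, 0.0)
          for name, (Xs, ys) in sources.items()}

# Instance level: AdaBoost over the target data plus task-weighted source data.
parts = [(Xtr, ytr, np.ones(len(ytr)))] + \
        [(Xs, ys, np.full(len(ys), task_w[name])) for name, (Xs, ys) in sources.items()]
X_all = np.vstack([p[0] for p in parts])
y_all = np.hstack([p[1] for p in parts])
w_all = np.hstack([p[2] for p in parts])
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_all, y_all, sample_weight=w_all + 1e-6)     # tiny floor avoids zero weights
print("task weights:", task_w, "| target val accuracy:", clf.score(Xval, yval))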


Decentralised Control of Micro-Storage in the Smart Grid

AAAI Conferences

Recent years have seen the emergence of new electricity network technologies, collectively called the smart grid (US Department Of Energy 2003; Galvin and Yeager 2008; UK Department of Energy and Climate Change 2009). A major component of this future vision is that of energy storage. In particular, there is potential seen in the widespread adoption of small scale consumer storage devices (i.e., micro-storage), which would allow consumers to store electricity when demand is low, in order for it to be used during peak loads (Bathurst and Strbac 2003; Ramchurn et al. 2011a; Vytelingum et al. 2010). This technology has the added advantage that it requires no significant change in how home appliances are used. Smart meters are intended to allow suppliers to access detailed energy consumption data and, more importantly, provide network information, such as real-time pricing (RTP) signals, to consumers in an attempt to better control or reduce demand when electricity is expensive or carbon intensive on the grid (Hammerstrom et al. 2008; Smith 2010). Accordingly, we envisage that micro-storage will be controlled by autonomous software agents that will react to RTP signals to minimise their owner's costs (i.e., they are self-interested). In this vein, we note our recent work (Vytelingum et al. 2010) in which we showed that, when acting purely selfishly, large numbers of micro-storage agents can cause instability in the aggregate demand profile.
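
To see the flavor of the instability result, the toy simulation below (all numbers invented; this is not the authors' model) has every self-interested agent charge in the same cheapest hours of a real-time price signal, which simply relocates the system peak rather than flattening it.

import numpy as np

hours = np.arange(24)
base = 30 + 10 * np.exp(-((hours - 18) ** 2) / 8.0)   # MWh, evening peak at 18:00
price = base / base.max()                             # RTP signal tracks demand

n_agents, battery_kwh, charge_hours = 30_000, 2.0, 3
cheapest = np.argsort(price)[:charge_hours]           # every agent picks these hours
load = np.zeros(24)
load[cheapest] += n_agents * battery_kwh / charge_hours / 1000.0  # kWh -> MWh

total = base + load
print(f"old peak {base.max():.1f} MWh at {base.argmax()}:00; "
      f"new peak {total.max():.1f} MWh at {total.argmax()}:00")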


Efficiently Learning a Distance Metric for Large Margin Nearest Neighbor Classification

AAAI Conferences

We consider the problem of learning a Mahalanobis distance metric for improving nearest neighbor classification. Our work is built upon the large margin nearest neighbor (LMNN) classification framework. Due to the semidefiniteness constraint in the optimization problem of LMNN, it is not scalable in terms of the dimensionality of the input data. The original LMNN solver partially alleviates this problem by adopting alternating projection methods instead of standard interior-point methods. Still, at each iteration, the computational complexity is at least O(D^3), where D is the dimension of the input data. In this work, we propose a column generation based algorithm to solve the LMNN optimization problem much more efficiently. Our algorithm is much more scalable in that, at each iteration, it does not need a full eigendecomposition. Instead, we only need to find the leading eigenvalue and its corresponding eigenvector, which is of O(D^2) complexity. Experiments show the efficiency and efficacy of our algorithm.
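
The per-iteration saving is easy to demonstrate in isolation. The sketch below (generic power iteration on a random positive semidefinite matrix, not the authors' solver) finds the leading eigenpair with O(D^2) matrix-vector products and checks it against a full O(D^3) eigendecomposition.

import numpy as np

rng = np.random.default_rng(0)
D = 300
A = rng.normal(size=(D, D))
S = A @ A.T / D                          # random symmetric PSD test matrix

def leading_eigpair(S, iters=500, tol=1e-10):
    # Power iteration: each step is one matrix-vector product, O(D^2),
    # versus O(D^3) for a full eigendecomposition.
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = S @ v
        lam_new = np.linalg.norm(w)      # converges to the leading eigenvalue (PSD)
        v = w / lam_new
        if abs(lam_new - lam) < tol * max(1.0, lam_new):
            break
        lam = lam_new
    return lam_new, v

lam, _ = leading_eigpair(S)
print("power iteration :", round(lam, 6))
print("full eigh (ref) :", round(float(np.linalg.eigvalsh(S)[-1]), 6))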


Memory-Efficient Dynamic Programming for Learning Optimal Bayesian Networks

AAAI Conferences

We describe a memory-efficient implementation of a dynamic programming algorithm for learning the optimal structure of a Bayesian network from training data. The algorithm leverages the layered structure of the dynamic programming graphs representing the recursive decomposition of the problem to reduce the memory requirements of the algorithm from O(n·2^n) to O(C(n, n/2)), where C(n, n/2) is the binomial coefficient "n choose n/2". Experimental results show that the approach runs up to an order of magnitude faster and scales to datasets with more variables than previous approaches.
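
A compact sketch of the layered dynamic program and its two-layer memory footprint, under stand-in assumptions: local_score below is a deterministic fake in place of a data-derived score such as BIC, and candidate parent sets are capped at size two to keep the inner search short.

from itertools import combinations

n = 8                                   # variables X0..X7 (toy size)

def local_score(v, parents):
    # Stand-in for a real local score (e.g., BIC/BDeu computed from data).
    h = hash((v, tuple(sorted(parents)))) % 10_000
    return h / 10_000 + 0.3 * len(parents)

def best_local(v, pool):
    # Best parent set for v drawn from `pool` (capped at two parents here).
    return min(local_score(v, ps)
               for r in range(min(2, len(pool)) + 1)
               for ps in combinations(sorted(pool), r))

# Layered DP over variable subsets: best(S) = min over sinks v in S.
# Only layers |S| = l-1 and |S| = l are alive at once, so peak memory is
# O(C(n, n/2)) table entries instead of O(2^n).
prev = {frozenset(): 0.0}
for layer in range(1, n + 1):
    cur = {}
    for S in map(frozenset, combinations(range(n), layer)):
        cur[S] = min(best_local(v, S - {v}) + prev[S - {v}] for v in S)
    prev = cur                          # previous layer is freed here

print("optimal (toy) network score:", round(prev[frozenset(range(n))], 4))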


Solving 4x5 Dots-And-Boxes

AAAI Conferences

Dots-And-Boxes is a well-known and widely played combinatorial game. While the rules of play are very simple, the state space for even small games is extremely large, and finding the outcome under optimal play is correspondingly hard. In this paper we introduce a Dots-And-Boxes solver which is significantly faster than the current state of the art: over an order of magnitude faster on several large problems. We describe our approach, which uses Alpha-Beta search and applies a number of techniques—both problem-specific and general—to reduce the number of duplicate states explored and reduce the search space to a manageable size. Using these techniques, we have determined for the first time that Dots-And-Boxes on a board of 4x5 boxes is a tie given optimal play. This is the largest game solved to date.
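
The paper's solver pairs Alpha-Beta with problem-specific reductions to reach 4x5. As a much smaller illustration of the duplicate-state issue, the sketch below solves the 2x2 board exactly with a transposition table (plain memoization, no pruning), scoring positions by the net box difference for the player to move.

from functools import lru_cache

R, C = 2, 2                                  # 2x2 boxes (the paper solves 4x5)
H = (R + 1) * C                              # number of horizontal edges
E = H + R * (C + 1)                          # total edges
h = lambda r, c: r * C + c                   # horizontal edge ids
v = lambda r, c: H + r * (C + 1) + c         # vertical edge ids
BOXES = [(1 << h(r, c)) | (1 << h(r + 1, c)) | (1 << v(r, c)) | (1 << v(r, c + 1))
         for r in range(R) for c in range(C)]
FULL = (1 << E) - 1

@lru_cache(maxsize=None)                     # transposition table on edge sets
def value(mask):
    # Best achievable (mover's boxes minus opponent's) from this position.
    if mask == FULL:
        return 0
    best = -len(BOXES)
    for e in range(E):
        bit = 1 << e
        if mask & bit:
            continue
        nxt = mask | bit
        gained = sum((nxt & b) == b and (mask & b) != b for b in BOXES)
        # Completing a box scores it and grants the same player another turn.
        best = max(best, gained + value(nxt) if gained else -value(nxt))
    return best

d = value(0)
print("2x2 result:", "tie" if d == 0 else
      f"{'first' if d > 0 else 'second'} player wins by {abs(d)}")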


Efficient Methods for Lifted Inference with Aggregate Factors

AAAI Conferences

Aggregate factors (that is, those based on aggregate functions such as SUM, AVERAGE, AND, etc.) in probabilistic relational models can compactly represent dependencies among a large number of relational random variables. However, propositional inference on a factor aggregating n k-valued random variables into an r-valued result random variable is O(rk2^n). Lifted methods can ameliorate this to O(rn^k) in general and O(rk log n) for commutative associative aggregators. In this paper, we propose (a) an exact solution constant in n when k = 2 for certain aggregate operations such as AND, OR and SUM, and (b) a close approximation for inference with aggregate factors with time complexity constant in n. This approximate inference involves an analytical solution for some operations when k > 2. The approximation is based on the fact that the typically used aggregate functions can be represented by linear constraints in the standard (k-1)-simplex in R^k, where k is the number of possible values for the random variables. This includes even aggregate functions that are commutative but not associative (e.g., the MODE operator that chooses the most frequent value). Our algorithm takes polynomial time in k (which is only 2 for binary variables) regardless of r and n, and the error decreases as n increases. Therefore, for most applications (in which a close approximation suffices) our algorithm is a much more efficient solution than existing algorithms. We present experimental results supporting these claims. We also present (c) a third contribution which further optimizes aggregations over multiple groups of random variables with distinct distributions.
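
The paper's approximation works through linear constraints on the simplex; the sketch below is only a simpler cousin in the same spirit (a CLT-style Gaussian approximation of a SUM aggregate, with an invented value distribution), but it shows the key behavior claimed above: cost independent of n and error shrinking as n grows.

import math
import numpy as np

# Exact marginal of a SUM aggregate over n i.i.d. k-valued variables by
# repeated convolution, versus a Gaussian approximation with constant cost in n.
k = 3
p = np.array([0.2, 0.5, 0.3])            # P(X = 0), P(X = 1), P(X = 2)
mu = float(np.dot(np.arange(k), p))
var = float(np.dot((np.arange(k) - mu) ** 2, p))
erf = np.vectorize(math.erf)

def exact_sum_dist(n):
    dist = np.array([1.0])
    for _ in range(n):                   # O(n) convolutions
        dist = np.convolve(dist, p)
    return dist

for n in (5, 20, 80):
    exact = exact_sum_dist(n)
    s = np.arange(len(exact))
    sd = math.sqrt(n * var)
    # Continuity-corrected normal estimate of P(SUM = s).
    approx = 0.5 * (erf((s + 0.5 - n * mu) / (sd * math.sqrt(2)))
                    - erf((s - 0.5 - n * mu) / (sd * math.sqrt(2))))
    print(f"n={n:3d}  max |exact - approx| = {abs(exact - approx).max():.5f}")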