
TRUSTS: Scheduling Randomized Patrols for Fare Inspection in Transit Systems Using Game Theory

AI Magazine

In proof-of-payment transit systems, passengers are legally required to purchase tickets before entering but are not physically forced to do so. Instead, patrol units move about the transit system, inspecting the tickets of passengers, who face fines if caught fare evading. The deterrence of fare evasion depends on the unpredictability and effectiveness of the patrols. In this paper, we present TRUSTS, an application for scheduling randomized patrols for fare inspection in transit systems. TRUSTS models the problem of computing patrol strategies as a leader-follower Stackelberg game where the objective is to deter fare evasion and hence maximize revenue. This problem differs from previously studied Stackelberg settings in that the leader strategies must satisfy massive temporal and spatial constraints; moreover, unlike in previous counterterrorism-motivated Stackelberg applications, a large fraction of the ridership might realistically consider fare evasion, and so the number of followers is potentially huge. A third key novelty in our work is the deliberate simplification of leader strategies to make patrols easier to execute. We present an efficient algorithm for computing such patrol strategies and present experimental results using real-world ridership data from the Los Angeles Metro Rail system. The Los Angeles County Sheriff's Department is currently carrying out trials of TRUSTS.
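
As a minimal illustration of the leader-follower structure (not the TRUSTS algorithm itself, which handles massive scheduling constraints), the sketch below solves a tiny Stackelberg game by the standard multiple-LPs method: for each candidate follower best response, solve an LP for the leader's optimal mixed strategy. All payoffs and dimensions here are invented.

```python
# Minimal Stackelberg sketch via the "multiple LPs" method: for each
# follower pure strategy j, solve an LP for the leader's best mixed
# strategy under which j is a best response. Payoffs are invented;
# TRUSTS solves a far larger, schedule-constrained version of this.
import numpy as np
from scipy.optimize import linprog

# Rows: leader pure strategies (patrol schedules); columns: follower
# choices (evade on line A, evade on line B, buy a ticket).
U_leader = np.array([[ 3., -1.,  2.],
                     [-1.,  3.,  2.]])
U_follower = np.array([[-2.,  1.,  0.],
                       [ 1., -2.,  0.]])

n_rows, n_cols = U_leader.shape
best_value, best_mix = -np.inf, None
for j in range(n_cols):
    c = -U_leader[:, j]                       # linprog minimizes
    # Incentive constraints: x @ (U_f[:, k] - U_f[:, j]) <= 0 for all k.
    A_ub = np.array([U_follower[:, k] - U_follower[:, j]
                     for k in range(n_cols) if k != j])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_cols - 1),
                  A_eq=np.ones((1, n_rows)), b_eq=[1.0],
                  bounds=[(0, 1)] * n_rows)
    if res.success and -res.fun > best_value:
        best_value, best_mix = -res.fun, res.x

print("optimal patrol mix:", best_mix, "leader utility:", best_value)
```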


An Intelligent Powered Wheelchair for Users with Dementia: Case Studies with NOAH (Navigation and Obstacle Avoidance Help)

AAAI Conferences

Intelligent wheelchairs can help increase independent mobility for elderly residents with cognitive impairment, who are currently excluded from the use of powered wheelchairs. This paper presents three case studies, demonstrating the efficacy of the NOAH (Navigation and Obstacle Avoidance Help) system. The findings reported can be used to refine our understanding of user needs and help identify methods to improve the quality of life of the intended users.


On Case Base Formation in Real-Time Heuristic Search

AAAI Conferences

Real-time heuristic search algorithms obey a constant limit on planning time per move. Agents using these algorithms can execute each move as it is computed, suggesting a strong potential for application to real-time video-game AI. Recently, a breakthrough in real-time heuristic search performance was achieved through the use of case-based reasoning. In this framework, the agent optimally solves a set of problems and stores their solutions in a case base. Then, given any new problem, it seeks a similar case in the case base and uses its solution as an aid to solve the problem at hand. A number of ad hoc approaches to the case base formation problem have been proposed and empirically shown to perform well. In this paper, we investigate a theoretically driven approach to solving the problem. We mathematically relate properties of a case base to the suboptimality of the solutions it produces and subsequently develop an algorithm that addresses these properties directly. An empirical evaluation shows our new algorithm outperforms the existing state of the art on contemporary video-game pathfinding benchmarks.
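
The case-based framework described above can be sketched in a few lines. This is an illustration of the framework only, not the paper's case base formation algorithm; the Manhattan-distance similarity metric and the grid domain are placeholders.

```python
# Offline: solve some problems optimally and store them in a case base.
# Online: retrieve the most similar stored case and reuse its solution.
from collections import deque

def bfs_path(grid, start, goal):
    """Optimal path on a 4-connected grid of 0 (free) / 1 (blocked)."""
    parent, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cur
                frontier.append(nxt)
    return None

def build_case_base(grid, problems):
    return {(s, g): bfs_path(grid, s, g) for s, g in problems}

def retrieve(case_base, start, goal):
    """Return the stored case whose endpoints are nearest (Manhattan)."""
    def dist(a, b): return abs(a[0]-b[0]) + abs(a[1]-b[1])
    return min(case_base.items(),
               key=lambda kv: dist(kv[0][0], start) + dist(kv[0][1], goal))

grid = [[0]*5 for _ in range(5)]
cb = build_case_base(grid, [((0, 0), (4, 4)), ((0, 4), (4, 0))])
case, solution = retrieve(cb, (1, 0), (4, 3))
print("reusing case", case, "with solution", solution)
```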


A Search Algorithm for Latent Variable Models with Unbounded Domains

AAAI Conferences

This paper concerns learning and prediction with probabilistic models where the domain sizes of latent variables have no a priori upper bound. Current approaches represent prior distributions over latent variables by stochastic processes such as the Dirichlet process, and rely on Monte Carlo sampling to estimate the model from data. We propose an alternative approach that searches over the domain sizes of latent variables and allows arbitrary priors over those sizes. We prove error bounds for expected probabilities, where the error bounds diminish with increasing search scope. The search algorithm can be truncated at any time. We empirically demonstrate the approach for topic modelling of text documents.
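
A rough sketch of the search-over-domain-sizes idea, with BIC on a Gaussian mixture standing in for the paper's marginal-likelihood machinery (the actual algorithm and error guarantees differ):

```python
# Search over the domain size K of a latent variable, scoring each K by
# an arbitrary prior over K combined with a crudely approximated marginal
# likelihood. BIC is a stand-in; the paper's bounds are different.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])[:, None]

def prior(k):                 # any prior over domain sizes is allowed,
    return 2.0 ** -k          # e.g. this geometric prior

scores = {}
for k in range(1, 8):         # the search can be truncated at any time
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    log_marginal_approx = -0.5 * gm.bic(data)   # BIC ~ log marginal lik.
    scores[k] = np.log(prior(k)) + log_marginal_approx

best_k = max(scores, key=scores.get)
print("posterior-preferred domain size:", best_k)
```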


The Deployment-to-Saturation Ratio in Security Games

AAAI Conferences

Stackelberg security games form the backbone of systems like ARMOR, IRIS and PROTECT, which are in regular use by the Los Angeles International Airport Police, the US Federal Air Marshal Service and the US Coast Guard, respectively. An understanding of the runtime required by algorithms that power such systems is critical to furthering the application of game theory to other real-world domains. This paper identifies the concept of the deployment-to-saturation ratio in random Stackelberg security games, and shows that problem instances for which this ratio is 0.5 are computationally harder than instances with other deployment-to-saturation ratios for a wide range of different equilibrium computation methods, including (i) different previously published MIP algorithms, and (ii) different underlying solvers and solution mechanisms. This finding has at least two important implications. First, it is important for new algorithms to be evaluated on the hardest problem instances. We show that this has often not been done in the past, and introduce a publicly available benchmark suite to facilitate such comparisons. Second, we provide evidence that this computationally hard region is also one where optimization would be of most benefit to security agencies, and thus requires significant attention from researchers in this area. Furthermore, we use the concept of phase transitions to better understand this computationally hard region. We define a decision problem related to security games, and show that the probability that this problem has a solution exhibits a phase transition as the deployment-to-saturation ratio crosses 0.5. We also demonstrate that this phase transition is invariant to changes both in the domain and in the domain representation, and that the phase transition point corresponds to the computationally hardest instances.
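
As a hedged sketch of the central quantity, the snippet below generates a random security game instance at a prescribed deployment-to-saturation ratio, taking "saturation" to be one defender resource per target. The payoff ranges are invented and the paper's benchmark generators are richer.

```python
# Generate a random security game at a given deployment-to-saturation
# ratio: "deployment" is the number of defender resources, "saturation"
# here is the number of targets. Payoff ranges are invented.
import random

def random_instance(n_targets, ds_ratio, seed=0):
    rnd = random.Random(seed)
    resources = max(1, round(ds_ratio * n_targets))  # saturation = n_targets
    targets = [{"def_covered": rnd.uniform(0, 1),
                "def_uncovered": rnd.uniform(-1, 0),
                "att_covered": rnd.uniform(-1, 0),
                "att_uncovered": rnd.uniform(0, 1)}
               for _ in range(n_targets)]
    return resources, targets

for ds in (0.1, 0.5, 0.9):   # instances near 0.5 are the hard region
    m, _ = random_instance(20, ds)
    print(f"d:s ratio {ds}: {m} resources, 20 targets")
```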


Prediction and Fault Detection of Environmental Signals with Uncharacterised Faults

AAAI Conferences

Many signals of interest are corrupted by faults of an unknown type. We propose an approach that uses Gaussian processes and a general “fault bucket” to capture a priori uncharacterised faults, along with an approximate method for marginalising the potential faultiness of all observations. This gives rise to an efficient, flexible algorithm for the detection and automatic correction of faults. Our method is deployed in the domain of water monitoring and management, where it is able to solve several fault detection, correction, and prediction problems. The method works well despite the fact that the data is plagued with numerous difficulties, including missing observations, multiple discontinuities, nonlinearity and many unanticipated types of fault.
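
A deliberately crude stand-in for the idea (the paper approximately marginalises per-observation fault indicators; this sketch just fits a GP and thresholds standardized residuals):

```python
# Fit a GP to an environmental signal and flag observations with extreme
# standardized residuals as belonging to a catch-all "fault bucket".
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 120)[:, None]
y = np.sin(t).ravel() + rng.normal(0, 0.1, 120)
y[40:44] += 3.0                      # an injected, uncharacterised fault

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(t, y)
mean, std = gp.predict(t, return_std=True)
faulty = np.abs(y - mean) > 3 * std  # flag 3-sigma outliers as "faults"
print("flagged indices:", np.flatnonzero(faulty))
```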


Approximately Revenue-Maximizing Auctions for Deliberative Agents

AAAI Conferences

In many real-world auctions, a bidder does not know her exact value for an item, but can perform a costly deliberation to reduce her uncertainty. Relatively little is known about such deliberative environments, which are fundamentally different from classical auction environments. In this paper, we propose a new approach that allows us to leverage classical revenue-maximization results in deliberative environments. In particular, we use Myerson (1981) to construct the first non-trivial (i.e., dependent on deliberation costs) upper bound on revenue in deliberative auctions. This bound allows us to apply existing results in the classical environment to a deliberative environment. In addition, we show that in many deliberative environments the only optimal dominant-strategy mechanisms take the form of sequential posted-price auctions.
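
For concreteness, the classical Myerson machinery the bound builds on: the virtual value is φ(v) = v - (1 - F(v))/f(v), and the optimal reserve price r solves φ(r) = 0. Below is a numerical check for Uniform[0, 1] values; deliberation costs, the paper's key ingredient, are not modelled here.

```python
# Myerson's virtual value phi(v) = v - (1 - F(v)) / f(v); the optimal
# reserve solves phi(r) = 0. For Uniform[0, 1], phi(v) = 2v - 1, r = 0.5.
from scipy.optimize import brentq

def make_phi(F, f):
    return lambda v: v - (1.0 - F(v)) / f(v)

phi_uniform = make_phi(F=lambda v: v, f=lambda v: 1.0)   # Uniform[0, 1]
reserve = brentq(phi_uniform, 1e-9, 1 - 1e-9)
print(f"Myerson reserve for U[0,1]: {reserve:.3f}")      # -> 0.500
```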


Predicting Satisfiability at the Phase Transition

AAAI Conferences

Uniform random 3-SAT at the solubility phase transition is one of the most widely studied and empirically hardest distributions of SAT instances. For 20 years, this distribution has been used extensively for evaluating and comparing algorithms. In this work, we demonstrate that simple rules can predict the solubility of these instances with surprisingly high accuracy. Specifically, we show how classification accuracies of about 70% can be obtained based on cheaply (polynomial-time) computable features on a wide range of instance sizes. We argue in two ways that classification accuracy does not decrease with instance size: first, we show that our models' predictive accuracy remains roughly constant across a wide range of problem sizes; second, we show that a classifier trained on small instances is sufficient to achieve very accurate predictions across the entire range of instance sizes currently solvable by complete methods. Finally, we demonstrate that a simple decision tree based on only two features, and again trained only on the smallest instances, achieves predictive accuracies close to those of our most complex model. We conjecture that this two-feature model outperforms random guessing asymptotically; due to the model's extreme simplicity, we believe that this conjecture is a worthwhile direction for future theoretical work.
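
The prediction setup can be sketched end to end on tiny instances. The two features below (a random-probe satisfaction rate and literal-sign balance) are placeholders rather than the paper's features, and brute-force labelling only works at toy sizes.

```python
# Generate random 3-SAT at the phase transition (clause/variable ratio
# ~4.26), label tiny instances by brute force, and fit a depth-2 tree
# on two cheap, polynomial-time computable (placeholder) features.
import itertools, random
from sklearn.tree import DecisionTreeClassifier

def random_3sat(n, m, rnd):
    return [tuple(rnd.choice([-1, 1]) * v
                  for v in rnd.sample(range(1, n + 1), 3)) for _ in range(m)]

def satisfiable(n, clauses):
    for bits in itertools.product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in c)
               for c in clauses):
            return True
    return False

def features(n, clauses, rnd):
    probes = []
    for _ in range(10):  # cheap probe: best of a few random assignments
        bits = [rnd.random() < 0.5 for _ in range(n)]
        probes.append(sum(any((l > 0) == bits[abs(l) - 1] for l in c)
                          for c in clauses) / len(clauses))
    balance = abs(sum(1 if l > 0 else -1 for c in clauses for l in c))
    return [max(probes), balance / (3 * len(clauses))]

rnd, n = random.Random(0), 10
X, y = [], []
for _ in range(150):
    cls = random_3sat(n, round(4.26 * n), rnd)
    X.append(features(n, cls, rnd))
    y.append(satisfiable(n, cls))
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print("training accuracy:", clf.score(X, y))
```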


Task Context for Knowledge Workers

AAAI Conferences

Knowledge workers work on many different tasks and must often switch between those tasks. In earlier work, we have shown the benefits of automatically capturing contexts for tasks for a specific category of knowledge worker, software programmers. Captured contexts facilitate task switches and reduce information overload by enabling the display of only the information relevant to the task-at-hand. In this paper, we describe the results of two studies of the use of captured contexts for a broad range of knowledge workers. The first study we describe is a field study of eight knowledge workers who used the model in their daily work for up to 25 days on tasks involving both file and web documents. We found that these knowledge workers need information to decay from their context and that our model is adequate at automatically trimming contexts. The second study is a case study of the use of contexts to support the operations of a software development company. We analyzed task contexts from hundreds of days of work from three users and found similar trends of information decaying from contexts. Results from each study also shed more light on the nature of mixed artifact task contexts.
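
The decay behaviour described above can be illustrated with a toy degree-of-interest model in the spirit of this line of work; all constants and file names below are invented.

```python
# Toy degree-of-interest (DOI) task context: each interaction raises an
# artifact's interest, interest decays as other events occur, and
# artifacts below a threshold are trimmed from the context.
class TaskContext:
    def __init__(self, bump=1.0, decay=0.05, threshold=0.1):
        self.doi = {}
        self.bump, self.decay, self.threshold = bump, decay, threshold

    def interact(self, artifact):
        # Every event decays all artifacts a little, then bumps the target.
        for a in self.doi:
            self.doi[a] = max(0.0, self.doi[a] - self.decay)
        self.doi[artifact] = self.doi.get(artifact, 0.0) + self.bump

    def visible(self):
        """Artifacts still relevant to the task-at-hand."""
        return sorted((a for a, d in self.doi.items() if d >= self.threshold),
                      key=lambda a: -self.doi[a])

ctx = TaskContext()
for event in ["spec.doc", "api.html", "spec.doc"] + ["report.doc"] * 25:
    ctx.interact(event)
print(ctx.visible())   # low-interest artifacts have decayed out
```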


Towards Optimal Patrol Strategies for Fare Inspection in Transit Systems

AAAI Conferences

In some urban transit systems, passengers are legally required to purchase tickets before entering but are not physically forced to do so. Instead, patrol units move about the transit system, inspecting the tickets of passengers, who face fines for fare evasion. This setting yields the problem of computing optimal patrol strategies satisfying certain temporal and spatial constraints, to deter fare evasion and hence maximize revenue. In this paper we propose an initial model of this problem as a leader-follower Stackelberg game. We then formulate an LP relaxation of this problem and present initial experimental results using real-world ridership data from the Los Angeles Metro Rail system.
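
A toy LP in the spirit of this relaxation: choose per-segment inspection probabilities under a patrol-hour budget, where a rider pays the fare whenever expected fines exceed it, making per-rider revenue min(fare, fine * x), which is piecewise-linear and concave, hence LP-representable. All numbers below are invented.

```python
# Toy coverage LP (all numbers invented). Variables: inspection
# probabilities x_s and per-rider revenues r_s for three ridership
# segments; maximize total revenue subject to r_s <= fare,
# r_s <= fine * x_s, and a patrol-hour budget on the x_s.
import numpy as np
from scipy.optimize import linprog

riders = np.array([500., 300., 200.])   # riders per segment
hours = np.array([2., 1., 1.])          # patrol hours to inspect a segment
fare, fine, budget = 1.5, 5.0, 1.0

c = np.concatenate([np.zeros(3), -riders])          # linprog minimizes
A_ub = np.vstack([
    np.hstack([np.zeros((3, 3)), np.eye(3)]),       # r <= fare
    np.hstack([-fine * np.eye(3), np.eye(3)]),      # r <= fine * x
    np.concatenate([hours, np.zeros(3)]),           # hours @ x <= budget
])
b_ub = np.concatenate([fare * np.ones(3), np.zeros(3), [budget]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * 3 + [(0, None)] * 3)
print("inspection probabilities:", np.round(res.x[:3], 3))
print("expected revenue:", round(-res.fun, 1))
```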