University of New Mexico


Experimental Comparison of Online Anomaly Detection Algorithms

AAAI Conferences

Anomaly detection methods abound and are used extensively in streaming settings across a wide variety of domains. But a strength can also be a weakness: given the vast number of methods, how can practitioners select the best method for their application? Unfortunately, no single method is best for all domains. Existing literature focuses on creating new anomaly detection methods or on building large frameworks for experimenting with multiple methods at the same time. As the literature continues to grow, extensive evaluation of every available anomaly detection method is not feasible. To reduce this evaluation burden, in this paper we present a framework for intelligently choosing the optimal anomaly detection methods based on the characteristics a time series displays. We provide a comprehensive experimental validation of multiple anomaly detection methods over different time-series characteristics to form guidelines. Applying our framework can save time and effort by surfacing the most promising anomaly detection methods instead of requiring extensive experimentation with a rapidly expanding library of anomaly detection methods.
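As a rough illustration of the kind of characteristic-driven selection such a framework performs, the sketch below maps a few hypothetical time-series characteristics to candidate detector families. The characteristic flags and the shortlisted method families are illustrative assumptions, not the paper's actual guidelines.

```python
# Illustrative sketch (not the paper's rules): shortlist anomaly-detector
# families from simple time-series characteristics.

def select_detectors(has_seasonality: bool, has_trend: bool, is_noisy: bool) -> list[str]:
    """Return a shortlist of detector families to try first."""
    candidates = []
    if has_seasonality:
        candidates.append("seasonal decomposition + residual thresholding")
    if has_trend:
        candidates.append("change-point / drift detectors")
    if is_noisy:
        candidates.append("robust statistics (median / MAD based)")
    if not candidates:
        candidates.append("simple rolling z-score")
    return candidates

print(select_detectors(has_seasonality=True, has_trend=False, is_noisy=True))
```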


Truthful and Near-Optimal Mechanisms for Welfare Maximization in Multi-Winner Elections

AAAI Conferences

Mechanisms for aggregating the preferences of agents in elections need to balance many different considerations, including efficiency, information elicited from agents, and manipulability. We consider the utilitarian social welfare of mechanisms for preference aggregation, measured by the distortion. We show that for a particular input format called threshold approval voting, where each agent is presented with an independently chosen threshold, there is a mechanism with nearly optimal distortion when the number of voters is large. Threshold mechanisms are potentially manipulable, but place a low informational burden on voters. We then consider truthful mechanisms. For the widely studied class of ordinal mechanisms, which elicit the rankings of candidates from each agent, we show that truthfulness essentially imposes no additional loss of welfare. We give truthful mechanisms with distortion O(√m log m) for k-winner elections, and distortion O(√m log m) when candidates have arbitrary costs, in elections with m candidates. These nearly match known lower bounds for ordinal mechanisms that ignore strategic behavior. We further tighten these lower bounds and show that for truthful mechanisms our first upper bound is tight. Lastly, when agents decide between two candidates, we give tight bounds on the distortion for truthful mechanisms.
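For readers unfamiliar with the measure, distortion is conventionally defined as the worst-case ratio between the optimal social welfare and the (expected) welfare achieved by the mechanism. The formulation below is the standard one from the distortion literature, not quoted from the paper; the symbols ρ, S, and u are ours.

```latex
% Standard definition of distortion (background, not quoted from the paper).
% f is the mechanism, rho(u) the input it elicits from utility profile u,
% S ranges over feasible outcomes (e.g., size-k committees), and the
% expectation is over the mechanism's internal randomness.
\[
  \mathrm{dist}(f) \;=\; \sup_{u}\,
  \frac{\max_{S}\,\mathrm{SW}(S,u)}{\mathbb{E}\!\left[\mathrm{SW}\!\left(f(\rho(u)),\,u\right)\right]},
  \qquad
  \mathrm{SW}(S,u)=\sum_{i} u_i(S).
\]
```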


A Survey of Current Practice and Teaching of AI

AAAI Conferences

The field of AI has changed significantly in the past couple of years and will likely continue to do so. Driven by a desire to expose our students to relevant and modern materials, we conducted two surveys, one of AI instructors and one of AI practitioners. The surveys were aimed at gathering information about the current state of the art of introducing AI as well as gathering input from practitioners in the field on techniques used in practice. In this paper, we present and briefly discuss the responses to those two surveys.


Indefinite Scalability for Living Computation

AAAI Conferences

In a question-and-answer format, this summary paper presents background material for the AAAI-16 Senior Member Presentation Track “Blue Sky Ideas” talk of the same name.


Leveraging Domain Knowledge in Multitask Bayesian Network Structure Learning

AAAI Conferences

Network structure learning algorithms have aided network discovery in fields such as bioinformatics, neuroscience, ecology and social science. However, challenges remain in learning informative networks for related sets of tasks because the search space of Bayesian network structures is characterized by large basins of approximately equivalent solutions. Multitask algorithms select a set of networks that are near each other in the search space, rather than a score-equivalent set of networks chosen from independent regions of the space. This selection preference allows a domain expert to see only differences supported by the data. However, the usefulness of these algorithms for scientific datasets is limited because existing algorithms naively assume that all pairs of tasks are equally related. We introduce a framework that relaxes this assumption by incorporating domain knowledge about task-relatedness into the learning objective. Using our framework, we introduce the first multitask Bayesian network algorithm that leverages domain knowledge about the relatedness of tasks. We use our algorithm to explore the effect of task-relatedness on network discovery and show that it learns networks closer to ground truth than naive algorithms do, and that it discovers interesting patterns.
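One way to read "incorporating domain knowledge about task-relatedness into the learning objective" is as a relatedness-weighted penalty on structural disagreement between the per-task networks. The expression below is a hedged sketch of that idea; the symbols λ, r_{jk}, and Δ are our notation, not the paper's exact objective.

```latex
% Sketch only: a relatedness-weighted multitask structure score.
% G_j is the network for task j, D_j its data, r_{jk} the expert-supplied
% relatedness of tasks j and k, Delta a structural-difference measure
% (e.g., edge symmetric difference), and lambda trades data fit
% against cross-task agreement.
\[
  \mathrm{score}(G_1,\dots,G_T \mid D) \;=\;
  \sum_{j=1}^{T} \mathrm{score}(G_j \mid D_j)
  \;-\; \lambda \sum_{j<k} r_{jk}\,\Delta(G_j, G_k).
\]
```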


Enriching Chatter Bots With Semantic Conversation Control

AAAI Conferences

Businesses deploy chatter bots to engage in text-based conversations with customers that are intended to resolve their issues. However, these chatter bots are only effective in exchanges consisting of question-answer pairs, where the context may switch with every pair. I am designing a semantic architecture that enables chatter bots to hold short conversations, where context is maintained throughout the exchange. I leverage specific ideas from conversation theory, speech act theory, and knowledge representation. My architecture models a conversation as a stochastic process that flows through a set of states. The main contribution of this work is that it analyses and models the semantics of conversations as entities, instead of lower-level grammatical and linguistic forms. I evaluate the performance of the architecture in accordance with Grice's cooperative maxims, which form the central idea in the theory of pragmatics.
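To make "a conversation as a stochastic process that flows through a set of states" concrete, here is a minimal Markov-chain sketch. The state names and transition probabilities are invented for illustration and are not taken from the described architecture.

```python
import random

# Minimal illustrative sketch: a conversation modeled as a stochastic process
# over dialogue states. States and probabilities are invented, not the paper's.
TRANSITIONS = {
    "greeting":         [("elicit_issue", 0.9), ("closing", 0.1)],
    "elicit_issue":     [("clarify", 0.4), ("propose_solution", 0.6)],
    "clarify":          [("propose_solution", 0.8), ("elicit_issue", 0.2)],
    "propose_solution": [("confirm", 0.7), ("clarify", 0.3)],
    "confirm":          [("closing", 1.0)],
    "closing":          [],
}

def sample_conversation(start: str = "greeting") -> list[str]:
    """Sample one path through the conversation states."""
    state, path = start, [start]
    while TRANSITIONS[state]:
        states, probs = zip(*TRANSITIONS[state])
        state = random.choices(states, weights=probs, k=1)[0]
        path.append(state)
    return path

print(sample_conversation())
```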


Frugal Coordinate Descent for Large-Scale NNLS

AAAI Conferences

The Nonnegative Least Squares (NNLS) formulation arises in many important regression problems. We present a novel coordinate descent method which differs from previous approaches in that we do not explicitly maintain complete gradient information. Empirical evidence shows that our approach outperforms a state-of-the-art NNLS solver in computation time for calculating radiation dosage for cancer treatment problems.
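For context, a textbook coordinate-descent update for NNLS takes an exact one-dimensional step on each coordinate and clips it at zero, typically maintaining the residual incrementally. The sketch below shows that baseline only; it is not the paper's "frugal" variant.

```python
import numpy as np

def nnls_cd(A: np.ndarray, b: np.ndarray, n_iters: int = 100) -> np.ndarray:
    """Baseline coordinate descent for min_x ||Ax - b||^2 s.t. x >= 0.
    Illustrative background only; not the 'frugal' method from the paper."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                      # residual, updated incrementally
    col_norms = (A ** 2).sum(axis=0)   # diagonal of A^T A
    for _ in range(n_iters):
        for j in range(n):
            if col_norms[j] == 0:
                continue
            g_j = A[:, j] @ r                      # partial derivative (up to a factor of 2)
            new_xj = max(0.0, x[j] - g_j / col_norms[j])
            r += (new_xj - x[j]) * A[:, j]         # keep residual consistent
            x[j] = new_xj
    return x

A = np.abs(np.random.randn(50, 10))
b = np.random.rand(50)
print(nnls_cd(A, b)[:5])
```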


The Importance of Selective Knowledge Transfer for Lifelong Learning

AAAI Conferences

Versatile agents situated in rich, dynamic environments must be capable of continually learning and refining their knowledge through experience. These agents will face a variety of learning tasks, and can transfer knowledge between tasks to improve performance and accelerate learning. In this context, a learning task can be as simple as discovering the effects of an operator on the environment, or as complex as accomplishing a specific goal -- anything that can be learned can be considered a task. As the agent experiences and learns each task, it gains access to new data and knowledge. It is not necessarily possible to select the source knowledge to transfer to a new target task by examining only the surface similarities between the tasks. The selection must support the process of knowledge transfer by choosing source knowledge based on whether it will transfer well to the target task. In our previous work, we developed methods that identify the source knowledge to transfer based on this concept of transferability to the target task. Intuitively, transferability is the amount that the transferred information is a model for the target task.
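As one rough reading of "transferability is the amount that the transferred information is a model for the target task", the sketch below scores each candidate source model by how well it already explains a small sample of target-task data and picks the best one. The scoring rule is our assumption, not the authors' transferability measure.

```python
# Illustrative sketch: pick the source task whose learned model best explains
# a small sample of target-task data. The scoring rule is an assumption,
# not the authors' transferability measure.
from typing import Callable, Sequence

def select_source(
    source_models: dict[str, Callable[[float], float]],
    target_sample: Sequence[tuple[float, float]],
) -> str:
    """Return the name of the source model with the lowest squared error on target data."""
    def error(model):
        return sum((model(x) - y) ** 2 for x, y in target_sample) / len(target_sample)
    return min(source_models, key=lambda name: error(source_models[name]))

models = {"task_a": lambda x: 2 * x, "task_b": lambda x: x + 1}
sample = [(1, 2.1), (2, 3.9), (3, 6.2)]
print(select_source(models, sample))   # -> "task_a"
```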


Developing a Language for Spoken Programming

AAAI Conferences

The dominant paradigm for programming a computer today is text entry via keyboard and mouse, but there are many common situations where this is not ideal. I address this through the creation of a new language that is explicitly intended for spoken programming. In addition, I describe a supporting editor that improves recognition accuracy by making use of type information and scoping to increase recognizer context.
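As a toy illustration of how scoping information could sharpen recognition, the sketch below re-ranks speech-recognizer hypotheses by whether they name an identifier currently in scope. The function, scores, and boost value are invented for illustration, not taken from the described editor.

```python
# Toy illustration: prefer recognizer hypotheses that match identifiers
# currently in scope. Names, scores, and the boost are invented.
def rerank(hypotheses: list[tuple[str, float]], in_scope: set[str]) -> list[str]:
    """Sort hypotheses by acoustic score, boosted when the word is in scope."""
    def score(h):
        word, acoustic = h
        return acoustic + (0.5 if word in in_scope else 0.0)
    return [word for word, _ in sorted(hypotheses, key=score, reverse=True)]

print(rerank([("counter", 0.6), ("cantor", 0.7)], in_scope={"counter", "total"}))
# -> ['counter', 'cantor']
```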


Leveraging Consensus and Divergence in Bayesian Belief Aggregation

AAAI Conferences

Many fields have a need to build representative or predictive models from a number of unique individuals who each can contribute their experience and beliefs to the whole. For instance, intelligence agencies may wish to build a model from a number of experts to analyze potential terrorist attacks. In addition, a sociological survey may want a model representing the beliefs of cultural or political groups. However, challenges remain that have limited the success of merging opinions to form consensus models. Our research in progress presents a new approach to combining, or aggregating, the beliefs of many individuals using graphical models. Existing Bayesian belief aggregation methods utilize an opinion pool function to find a single consensus on a given probability distribution. These opinion pool functions have many theoretical problems, including violating several assumptions of Bayesian reasoning. More practically, existing opinion pool functions do not represent reality well, especially in cases of diverse opinions.
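For background, the most common opinion pool in the belief-aggregation literature is the linear (weighted-average) pool shown below; it is standard material rather than this work's method. The example also hints at the "diverse opinions" problem the abstract mentions: averaging two confident but opposed experts yields an uncommitted consensus.

```python
import numpy as np

def linear_opinion_pool(distributions: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Standard linear opinion pool: a weighted average of expert distributions.
    Rows of `distributions` are per-expert probability vectors over the same outcomes."""
    pooled = weights @ distributions
    return pooled / pooled.sum()

# Two confident but opposed experts over outcomes (A, B).
experts = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
print(linear_opinion_pool(experts, np.array([0.5, 0.5])))  # -> [0.5 0.5]
```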