Level Set Estimation from Compressive Measurements using Box Constrained Total Variation Regularization
Estimating the level set of a signal from measurements is a task that arises in a variety of fields, including medical imaging, astronomy, and digital elevation mapping. Motivated by scenarios where accurate and complete measurements of the signal may not be available, we examine here a simple procedure for estimating the level set of a signal from highly incomplete measurements, which may additionally be corrupted by additive noise. The proposed procedure is based on box-constrained Total Variation (TV) regularization. We demonstrate the performance of our approach, relative to existing state-of-the-art techniques for level set estimation from compressive measurements, via several simulation examples.
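A typical way to instantiate such a procedure (a sketch under assumptions: the abstract does not give the exact objective, so the box bound B, the regularization weight lambda, and the thresholding rule below are illustrative) is to solve a box-constrained TV-regularized least-squares problem and then threshold the reconstruction at the level of interest gamma:

    \hat{x} \;=\; \arg\min_{0 \le x \le B} \tfrac{1}{2}\,\|y - Ax\|_2^2 + \lambda\,\mathrm{TV}(x),
    \qquad
    \hat{S}_\gamma \;=\; \{\, i : \hat{x}_i \ge \gamma \,\},

where y collects the (possibly noisy) compressive measurements, A is the measurement operator, and \hat{S}_\gamma is the estimated level set. The box constraint 0 \le x \le B encodes known amplitude bounds on the signal and is what distinguishes this formulation from plain TV-regularized recovery.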
Group Model Selection Using Marginal Correlations: The Good, the Bad and the Ugly
Bajwa, Waheed U., Mixon, Dustin G.
Group model selection is the problem of determining a small subset of groups of predictors (e.g., the expression data of genes) that are responsible for the majority of the variation in a response variable (e.g., the malignancy of a tumor). This paper focuses on group model selection in high-dimensional linear models, in which the number of predictors far exceeds the number of samples of the response variable. Existing works on high-dimensional group model selection either require the number of samples of the response variable to be significantly larger than the total number of predictors contributing to the response or impose restrictive statistical priors on the predictors and/or nonzero regression coefficients. This paper provides a comprehensive understanding of a low-complexity approach to group model selection that avoids some of these limitations. The proposed approach, termed Group Thresholding (GroTh), is based on thresholding of marginal correlations of groups of predictors with the response variable and is reminiscent of existing thresholding-based approaches in the literature. The most important contribution of the paper in this regard is relating the performance of GroTh to a polynomial-time verifiable property of the predictors for the general case of arbitrary (random or deterministic) predictors and arbitrary nonzero regression coefficients.
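A minimal sketch of the thresholding idea, assuming groups are scored by the Euclidean norm of their marginal correlations with the response and the k highest-scoring groups are retained (the paper's exact scoring and threshold rule may differ):

    import numpy as np

    def group_thresholding(X, y, groups, k):
        # GroTh-style selector (illustrative): rank groups of predictors by the
        # norm of their marginal correlations with the response, keep the top k.
        scores = {}
        for name, idx in groups.items():
            corr = X[:, idx].T @ y          # marginal correlations of group members with y
            scores[name] = np.linalg.norm(corr)
        return sorted(scores, key=scores.get, reverse=True)[:k]

    # Toy usage: three groups of two predictors; the response is driven by group "g0".
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 6))
    y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(50)
    groups = {"g0": [0, 1], "g1": [2, 3], "g2": [4, 5]}
    print(group_thresholding(X, y, groups, k=1))  # expected: ['g0']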
Annotating Answer-Set Programs in LANA
De Vos, Marina, Kısa, Doğa Gizem, Oetsch, Johannes, Pührer, Jörg, Tompits, Hans
While past research in answer-set programming (ASP) mainly focused on theory, ASP solver technology, and applications, the present work situates itself in the context of a quite recent research trend: development support for ASP. In particular, we propose to augment answer-set programs with additional meta-information formulated in a dedicated annotation language, called LANA. This language allows grouping rules into coherent blocks and specifying language signatures, types, pre- and postconditions, as well as unit tests for such blocks. While these annotations are invisible to an ASP solver, as they take the form of program comments, they can be interpreted by tools for documentation, testing, and verification purposes, as well as to eliminate sources of common programming errors by realising syntax checking or code completion features. To demonstrate its versatility, we introduce two such tools, viz. (i) ASPDOC, for generating HTML documentation for a program based on the annotated information, and (ii) ASPUNIT, for running and monitoring unit tests on program blocks. LANA is also exploited in the SeaLion system, an integrated development environment for ASP based on Eclipse. To appear in Theory and Practice of Logic Programming.
Disjunctive Datalog with Existential Quantifiers: Semantics, Decidability, and Complexity Issues
Alviano, Mario, Faber, Wolfgang, Leone, Nicola, Manna, Marco
Datalog is one of the best-known rule-based languages, and extensions of it are used in a wide context of applications. An important Datalog extension is Disjunctive Datalog, which significantly increases the expressivity of the basic language. Disjunctive Datalog is useful in a wide range of applications, ranging from Databases (e.g., Data Integration) to Artificial Intelligence (e.g., diagnosis and planning under incomplete knowledge). However, in recent years an important shortcoming of Datalog-based languages became evident, e.g. in the context of data integration (consistent query-answering, ontology-based data access) and Semantic Web applications: the language does not permit any generation of and reasoning with unnamed individuals in an obvious way. In general, it is weak in supporting many cases of existential quantification. To overcome this problem, Datalog^∃ has recently been proposed, which extends traditional Datalog by existential quantification in rule heads. In this work, we propose a natural extension of Disjunctive Datalog and Datalog^∃, called Datalog^{∃,∨}, which allows both disjunctions and existential quantification in rule heads and is therefore an attractive language for knowledge representation and reasoning, especially in domains where ontology-based reasoning is needed. We formally define syntax and semantics of the language Datalog^{∃,∨}, and provide a notion of instantiation, which we prove to be adequate for Datalog^{∃,∨}. A main issue of Datalog^∃, and hence also of Datalog^{∃,∨}, is that decidability is no longer guaranteed for typical reasoning tasks. In order to address this issue, we identify many decidable fragments of the language, which extend, in a natural way, analogous classes defined in the non-disjunctive case. Moreover, we carry out an in-depth complexity analysis, deriving interesting results which range from Logarithmic Space to Exponential Time.
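As an illustration (the predicate names here are made up, not taken from the paper), a single Datalog^{∃,∨} rule can combine a disjunctive head with an existentially quantified, unnamed individual:

    \exists Y\, \mathrm{inDept}(X, Y) \;\lor\; \mathrm{freelancer}(X) \;\leftarrow\; \mathrm{worker}(X).

Read declaratively: every worker either belongs to some department, possibly one not named by any constant in the database, or is a freelancer. Plain Disjunctive Datalog can express the disjunction but not the unnamed department, while Datalog^∃ can express the unnamed department but not the disjunction.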
Mining Permission Request Patterns from Android and Facebook Applications (extended author version)
Frank, Mario, Dong, Ben, Felt, Adrienne Porter, Song, Dawn
Android and Facebook provide third-party applications with access to users' private data and the ability to perform potentially sensitive operations (e.g., post to a user's wall or place phone calls). As a security measure, these platforms restrict applications' privileges with permission systems: users must approve the permissions requested by applications before the applications can make privacy- or security-relevant API calls. However, recent studies have shown that users often do not understand permission requests and lack a notion of typicality of requests. As a first step towards simplifying permission systems, we cluster a corpus of 188,389 Android applications and 27,029 Facebook applications to find patterns in permission requests. Using a method for Boolean matrix factorization for finding overlapping clusters, we find that Facebook permission requests follow a clear structure that exhibits high stability when fitted with only five clusters, whereas Android applications demonstrate more complex permission requests. We also find that low-reputation applications often deviate from the permission request patterns that we identified for high-reputation applications, suggesting that permission request patterns are indicative of user satisfaction or application quality.
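The clustering input is simply a binary application-by-permission matrix. The sketch below is not the Boolean matrix factorization method used in the paper; as a rough stand-in it binarizes an ordinary scikit-learn NMF to show how overlapping permission request patterns, and each application's membership in them, can be read off the two factors:

    import numpy as np
    from sklearn.decomposition import NMF

    def approximate_boolean_factorization(R, k, thresh=0.5):
        # R is the binary app x permission matrix. Rows of B are (overlapping)
        # permission request patterns; rows of A say which patterns each app uses.
        model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(R)      # app x pattern weights
        H = model.components_           # pattern x permission weights
        A = (W > thresh * W.max(axis=0)).astype(int)
        B = (H > thresh * H.max(axis=1, keepdims=True)).astype(int)
        return A, B

    # Toy matrix: 6 apps x 4 permissions with two underlying request patterns.
    R = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0],
                  [0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 1, 1]])
    A, B = approximate_boolean_factorization(R, k=2)
    print(A)  # per-app pattern membership (the last app should pick up both patterns)
    print(B)  # the two recovered permission patterns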
Simulated Tom Thumb, the Rule Of Thumb for Autonomous Robots
El-Dosuky, M. A., Rashad, M. Z., Hamza, T. T., EL-Bassiouny, A. H.
For a mobile robot to be truly autonomous, it must solve the simultaneous localization and mapping (SLAM) problem. We develop a new metaheuristic algorithm called Simulated Tom Thumb (STT), based on the detailed adventure of the clever Tom Thumb and on advances in research on path planning with potential functions. Investigations show that it is very promising and can be seen as an optimization of a powerful SLAM solution with data association and learning capabilities. STT outperforms JCBB, achieving a 100% match.
On the Sample Complexity of Predictive Sparse Coding
Mehta, Nishant A., Gray, Alexander G.
The goal of predictive sparse coding is to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding algorithms recently have demonstrated impressive performance on a variety of supervised tasks, but their generalization properties have not been studied. We establish the first generalization error bounds for predictive sparse coding, covering two settings: 1) the overcomplete setting, where the number of features k exceeds the original dimensionality d; and 2) the high or infinite-dimensional setting, where only dimension-free bounds are useful. Both learning bounds intimately depend on stability properties of the learned sparse encoder, as measured on the training sample. Consequently, we first present a fundamental stability result for the LASSO, a result characterizing the stability of the sparse codes with respect to perturbations to the dictionary. In the overcomplete setting, we present an estimation error bound that decays as \tilde{O}(\sqrt{dk/m}) with respect to d and k. In the high or infinite-dimensional setting, we show a dimension-free bound that is \tilde{O}(\sqrt{k^2 s/m}) with respect to k and s, where s is an upper bound on the number of non-zeros in the sparse code for any training data point.
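In the standard formulation consistent with this abstract (the loss and the regularization parameter lambda are not specified there, so treat the details as assumptions), the sparse encoder is the LASSO map and the hypothesis is linear in the resulting code:

    \varphi_{D,\lambda}(x) \;=\; \arg\min_{z \in \mathbb{R}^k} \tfrac{1}{2}\,\|x - Dz\|_2^2 + \lambda\,\|z\|_1,
    \qquad
    \hat{y}(x) \;=\; \langle w, \varphi_{D,\lambda}(x) \rangle,

with dictionary D \in \mathbb{R}^{d \times k} and predictor w \in \mathbb{R}^k both learned from the m training examples. The stability property referred to above controls how much the code \varphi_{D,\lambda}(x) can move when D is perturbed, which is what the generalization bounds are built on.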
Probability Bracket Notation, Multivariable Systems and Static Bayesian Networks
Probability Bracket Notation (PBN) is applied to systems of multiple random variables for a preliminary study of static Bayesian Networks (BN) and Probabilistic Graphical Models (PGM). The famous Student BN example is explored to show the local independences and reasoning power of a BN. The software package Elvira is used to graphically display the Student BN. Our investigation shows that PBN provides a consistent and convenient alternative for manipulating many expressions related to joint, marginal, and conditional probability distributions in static BNs.
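For reference, the Student network meant here is the standard five-variable example (course Difficulty D, student Intelligence I, Grade G, SAT score S, recommendation Letter L), whose joint distribution factorizes along the graph as

    P(D, I, G, S, L) \;=\; P(D)\, P(I)\, P(G \mid D, I)\, P(S \mid I)\, P(L \mid G).

Local independences such as S \perp \{D, G, L\} \mid I can be read directly off this factorization, and these are exactly the manipulations that PBN is used to carry out in bracket form.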
Telling Interactive Player-specific Stories and Planning for It: ASD + PaSSAGE = PAST
Ramirez, Alejandro Jose (University of Alberta) | Bulitko, Vadim (University of Alberta)
From Shakespeare's "Romeo and Juliet" to George Lucas' "Star Wars" to BioWare's "Jade Empire", to campfire stories, to baseball commentary, story-telling is a fundamental part of entertainment. A strong narrative resonates with our minds, hearts and souls and keeps us engaged. We remember the stories of our childhood and retell them to our own children. Story-telling has delighted and saddened the human race since the beginning of time and shows no signs of slowing down. But can it be improved with technology?

Around the same time, a system called Player-Specific Stories via Automatically Generated Events (PaSSAGE) (Thue et al. 2007) was proposed, which used AI techniques to model the player as he/she experiences a narrative-rich video game. Such a continuously updated player model was used to dynamically adapt the story, tailoring it to the current player. Unlike ASD, PaSSAGE did not have any automation at the design stage and relied on a human designer to foresee all possible ways of a player breaking the story and ...
Spatial Game Signatures for Bot Detection in Social Games
Barik, Titus (North Carolina State University) | Harrison, Brent (North Carolina State University) | Roberts, David L. (North Carolina State University) | Jiang, Xuxian (North Carolina State University)
Bot detection is an emerging problem in social games that requires different approaches from those used in massively multi-player online games (MMOGs). We focus on mouse selections as a key element of bot detection. We hypothesize that certain interface elements result in predictable differences in mouse selections, which we call spatial game signatures, and that those signatures can be used to model player interactions that are specific to the game mechanics and game interface. We performed a study in which users played a game representative of social games. We collected in-game actions, from which we empirically identified these signatures, and show that these signatures result in a viable approach to bot detection. We make three contributions. First, we introduce the idea of spatial game signatures. Second, we show that the assumption that mouse clicks are normally distributed about the center of buttons is not true for every interface element. Finally, we provide methodologies for using spatial game signatures for bot detection.
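A minimal sketch of the idea, not the paper's actual models: fit a per-interface-element distribution of human click offsets around the element's center and score new sessions against it. A single Gaussian is used here only for brevity; as noted above, the paper shows that assumption fails for some interface elements.

    import numpy as np

    def fit_signature(human_clicks):
        # Spatial signature for one interface element: mean and covariance of
        # human click offsets relative to the element's center.
        return human_clicks.mean(axis=0), np.cov(human_clicks, rowvar=False)

    def click_scores(clicks, mu, cov):
        # Mahalanobis distance of each click to the human signature. Sessions
        # whose scores barely vary (e.g. a bot hitting the exact same pixel) or
        # sit far from the human mass are flagged for review.
        d = clicks - mu
        inv = np.linalg.inv(cov)
        return np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d))

    rng = np.random.default_rng(1)
    human = rng.multivariate_normal([2.0, -1.0], [[9.0, 2.0], [2.0, 6.0]], size=500)
    bot = np.tile([0.0, 0.0], (50, 1))   # naive bot: always clicks the exact center
    mu, cov = fit_signature(human)
    for name, clicks in (("human", human), ("bot", bot)):
        s = click_scores(clicks, mu, cov)
        print(name, round(s.mean(), 2), round(s.std(), 2))  # the bot's spread is ~0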