Multi-Agent Subset Space Logic

AAAI Conferences

Subset space logics have been introduced and studied as a framework for reasoning about a notion of effort in epistemic logic. The seminal Subset Space Logic (SSL) by Moss and Parikh modeled a single agent, and most work in this area has focused on different extensions of the language, or different model classes resulting from restrictions on subset spaces, while still keeping the single-agent assumption. In this paper we argue that the few existing attempts at multi-agent versions of SSL are unsatisfactory, and propose a new multi-agent subset space logic which is a natural extension of single-agent SSL. The main results are a sound and complete axiomatization of this logic, as well as an alternative and equivalent relational semantics.
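Since the abstract presupposes the single-agent semantics it extends, a minimal sketch of how SSL evaluates the knowledge and effort modalities on a subset space may help. The encoding, the function name holds, and the toy model below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of Moss-Parikh subset space semantics (single-agent SSL).
# Formulas: an atom is a string like 'p'; compound formulas are tuples such as
# ('not', f), ('and', f, g), ('K', f) for knowledge, ('box', f) for effort.

def holds(model, x, U, phi):
    """Truth at an epistemic scenario (x, U), with x a point in the open U."""
    points, opens, val = model                  # X, O, valuation p -> set of points
    if isinstance(phi, str):                    # atomic proposition
        return x in val[phi]
    op = phi[0]
    if op == 'not':
        return not holds(model, x, U, phi[1])
    if op == 'and':
        return holds(model, x, U, phi[1]) and holds(model, x, U, phi[2])
    if op == 'K':                               # knowledge: true at every point of U
        return all(holds(model, y, U, phi[1]) for y in U)
    if op == 'box':                             # effort: true in every open shrinking around x
        return all(holds(model, x, V, phi[1])
                   for V in opens if x in V and V <= U)
    raise ValueError(f'unknown operator {op!r}')

# Toy model: two points, opens {a, b} and {a}; p holds exactly at a.
X = frozenset({'a', 'b'})
opens = [frozenset({'a', 'b'}), frozenset({'a'})]
val = {'p': {'a'}}
model = (X, opens, val)
print(holds(model, 'a', opens[0], ('K', 'p')))     # False: b refutes p
print(holds(model, 'a', opens[0], ('box', ('not', ('K', ('not', 'p'))))))  # True
```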


Better subset regression

arXiv.org Machine Learning

To find efficient screening methods for high-dimensional linear regression models, this paper studies the relationship between model fitting and screening performance. Under a sparsity assumption, we show that, in a general asymptotic setting, any subset that includes the true submodel yields a smaller residual sum of squares (i.e., fits the model better) than any subset that does not. This suggests that, for screening important variables, we can follow a "better fitting, better screening" rule: pick a "better" subset, i.e., one with better model fit. To find such a subset, we consider the optimization problem associated with best subset regression. An EM algorithm, called orthogonalizing subset screening, and an accelerated version of it are proposed to search for the best subset. Although neither algorithm can guarantee that the subset it yields is the best, their monotonicity property ensures that the subset fits the model better than the initial subsets generated by popular screening methods, and hence it can have better screening performance asymptotically. Simulation results show that our methods are highly competitive in high-dimensional variable screening, even for finite sample sizes.
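The "better fitting, better screening" rule amounts to comparing residual sums of squares across candidate subsets and keeping the smallest. The sketch below illustrates that selection rule only; it is not the paper's orthogonalizing subset screening EM algorithm, and all names and the toy data are assumptions.

```python
import numpy as np

def rss(X, y, subset):
    """Residual sum of squares of least squares on the given columns."""
    Xs = X[:, list(subset)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ beta
    return float(r @ r)

def better_subset(X, y, candidates):
    """Pick the candidate subset with the best (smallest) model fit."""
    return min(candidates, key=lambda s: rss(X, y, s))

# Toy data: y depends on columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 3.0 * X[:, 2] + 0.1 * rng.normal(size=200)
print(better_subset(X, y, [(0, 1), (0, 2), (3, 4)]))   # expect (0, 2)
```

Asymptotically, per the result above, the subset containing the true submodel wins this comparison, so better fit translates into better screening.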


Weighted A* Algorithms for Unsupervised Feature Selection with Provable Bounds on Suboptimality

AAAI Conferences

Identifying a small subset of features that can represent the data well is believed to be NP-hard. Previous approaches exploit algebraic structure and use randomization. We propose an algorithm based on ideas similar to the Weighted A* algorithm in heuristic search. Our experiments show that this new algorithm is more accurate than the current state of the art.
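For readers unfamiliar with Weighted A*, the sketch below shows the generic search with inflated heuristic f(n) = g(n) + w * h(n), w >= 1, which yields a solution of cost at most w times optimal when h is admissible. The paper's search over feature subsets is not reproduced; the graph, costs, and heuristic here are placeholders.

```python
import heapq

def weighted_astar(start, goal, neighbors, h, w=1.5):
    """Search from start to goal; neighbors(n) yields (next_node, edge_cost)."""
    frontier = [(w * h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                      # cost <= w * optimal
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + w * h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy example: move right/up on a 4x4 grid with Manhattan heuristic.
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x, y + 1)):
        if q[0] < 4 and q[1] < 4:
            yield q, 1.0

h = lambda p: (3 - p[0]) + (3 - p[1])           # admissible on this grid
print(weighted_astar((0, 0), (3, 3), neighbors, h))
```

The weight w trades search effort for solution quality, which is what makes the suboptimality bound in the title provable.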


Data Science Simplified Part 6: Model Selection Methods

@machinelearnbot

In the previous article in this series, we discussed the multivariate linear regression model. Fernando creates a model that estimates the price of a car from five input parameters. Fernando indeed has a better model, yet he wants to select the best set of input variables. The idea behind model selection methods is intuitive.
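As one concrete instance of such a method, the sketch below implements forward stepwise selection: greedily add the variable that most reduces the residual sum of squares. This is a standard method of the kind the article surveys; the article's actual car-price data and procedure are not reproduced here, and the toy data are an assumption.

```python
import numpy as np

def rss(X, y, cols):
    """RSS of an intercept-plus-columns least squares fit."""
    Xs = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ beta
    return float(r @ r)

def forward_select(X, y, k):
    """Greedily add the variable that most reduces RSS, up to k variables."""
    chosen = []
    for _ in range(k):
        rest = [c for c in range(X.shape[1]) if c not in chosen]
        best = min(rest, key=lambda c: rss(X, y, chosen + [c]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))                   # five candidate inputs
y = 4.0 * X[:, 1] + 2.0 * X[:, 3] + rng.normal(size=150)
print(forward_select(X, y, 2))                  # expect [1, 3]
```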

