 Computational Learning Theory


Compressive Feature Learning

Neural Information Processing Systems

This paper addresses the problem of unsupervised feature learning for text data. Our method is grounded in the principle of minimum description length and uses a dictionary-based compression scheme to extract a succinct feature set. Specifically, our method finds a set of word $k$-grams that minimizes the cost of reconstructing the text losslessly. We formulate document compression as a binary optimization task and show how to solve it approximately via a sequence of reweighted linear programs that are efficient to solve and parallelizable. As our method is unsupervised, features may be extracted once and subsequently used in a variety of tasks. We demonstrate the performance of these features over a range of scenarios including unsupervised exploratory analysis and supervised text categorization. Our compressed feature space is two orders of magnitude smaller than the full $k$-gram space and matches the text categorization accuracy achieved in the full feature space. This dimensionality reduction not only results in faster training times, but it can also help elucidate structure in unsupervised learning tasks and reduce the amount of training data necessary for supervised learning.
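As a rough illustration of the binary-optimization view (not the paper's exact formulation), the sketch below sets up a toy lossless $k$-gram compression as a linear program with scipy: each word position must be covered by exactly one selected $k$-gram occurrence, each occurrence pays a pointer cost, and each distinct $k$-gram pays a dictionary cost. The costs and the simple 0.5 threshold, standing in for the paper's reweighting scheme, are my assumptions.

```python
# Toy LP relaxation of lossless k-gram compression (hypothetical costs;
# the paper rounds via reweighted LPs, here we simply threshold).
import numpy as np
from scipy.optimize import linprog

words = "the cat sat on the mat".split()
n, K = len(words), 2

# Occurrences (start, k) and the distinct k-grams they instantiate.
occs = [(i, k) for k in range(1, K + 1) for i in range(n - k + 1)]
grams = sorted({" ".join(words[i:i + k]) for i, k in occs})
g_idx = {g: j for j, g in enumerate(grams)}
n_occ, n_gram = len(occs), len(grams)

ptr_cost, dict_cost = 1.0, 2.0  # assumed per-pointer / per-entry costs

# Variables: x (use occurrence) then y (put gram in dictionary).
c = np.concatenate([np.full(n_occ, ptr_cost), np.full(n_gram, dict_cost)])

# Each word position is reconstructed by exactly one occurrence.
A_eq = np.zeros((n, n_occ + n_gram))
for j, (i, k) in enumerate(occs):
    A_eq[i:i + k, j] = 1.0
b_eq = np.ones(n)

# Linking: x_j <= y_g, an occurrence needs its gram in the dictionary.
A_ub = np.zeros((n_occ, n_occ + n_gram))
for j, (i, k) in enumerate(occs):
    A_ub[j, j] = 1.0
    A_ub[j, n_occ + g_idx[" ".join(words[i:i + k])]] = -1.0

res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_occ), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n_occ + n_gram))
print([g for j, g in enumerate(grams) if res.x[n_occ + j] > 0.5])
```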


Auditing: Active Learning with Outcome-Dependent Query Costs

Neural Information Processing Systems

We propose a learning setting in which unlabeled data is free, and the cost of a label depends on its value, which is not known in advance. We study binary classification in an extreme case, where the algorithm only pays for negative labels. Our motivation is applications such as fraud detection, in which investigating an honest transaction should be avoided if possible. We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative points it labels in order to learn a hypothesis with low relative error. We design auditing algorithms for thresholds on the line and for axis-aligned rectangles, and show that with these algorithms, the auditing complexity can be significantly lower than the active label complexity. We discuss a general approach to auditing for an arbitrary hypothesis class, and describe several interesting directions for future work.
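For intuition about why auditing complexity can fall far below active label complexity, here is a minimal sketch for thresholds on the line; it is my simplified illustration, not the paper's algorithm: querying points from the side expected to be positive means the scan pays for at most one negative label, however many labels it requests.

```python
# My simplified illustration for thresholds (not the paper's algorithm):
# scan from the side expected to be positive and stop at the first
# negative, so at most one negative label is ever paid for.
def audit_threshold(points, query_label):
    """query_label(x) -> +1/-1; only -1 answers cost money."""
    paid = 0
    for x in sorted(points, reverse=True):   # descending: positives first
        if query_label(x) == -1:             # first negative ends the scan
            paid += 1
            return x, paid                   # true threshold is just above x
    return min(points), paid                 # everything was positive

theta = 0.37                                 # hidden threshold: +1 iff x >= theta
xs = [i / 10 for i in range(11)]
print(audit_threshold(xs, lambda x: 1 if x >= theta else -1))  # (0.3, 1)
```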


Predictive PAC Learning and Process Decompositions

arXiv.org Machine Learning

We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example). A mixture of learnable processes need not be learnable itself, and certainly its generalization error need not decay at the same rate. In this paper, we argue that it is natural in predictive PAC to condition not on the past observations but on the mixture component of the sample path. This definition not only matches what a realistic learner might demand, but also allows us to sidestep several otherwise grave problems in learning from dependent data. In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. We also provide a characterization of mixtures of absolutely regular ($\beta$-mixing) processes, of independent probability-theoretic interest.
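To make the conditioning argument concrete, here is a schematic version (my paraphrase, with $\delta(n,\epsilon)$ as an assumed uniform component-wise rate): if the data law is a mixture $\rho = \int \mu_\theta \, \pi(d\theta)$ and every component satisfies $\mu_\theta\big(\sup_{h \in \mathcal{H}} |\hat{R}_n(h) - R_\theta(h)| > \epsilon\big) \le \delta(n, \epsilon)$, then integrating over the component $\Theta$ that generated the sample path gives $\rho\big(\sup_{h \in \mathcal{H}} |\hat{R}_n(h) - R_\Theta(h)| > \epsilon\big) = \int \mu_\theta\big(\sup_{h \in \mathcal{H}} |\hat{R}_n(h) - R_\theta(h)| > \epsilon\big) \, \pi(d\theta) \le \delta(n, \epsilon)$, so the mixture inherits the component rate, provided the risk is measured against the component-conditional distribution $R_\Theta$ rather than the mixture.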


The Fractal Dimension of SAT Formulas

arXiv.org Artificial Intelligence

Modern SAT solvers have made remarkable progress on solving industrial instances. Most of the techniques have been developed after an intensive experimental testing process. Recently, there have been some attempts to analyze the structure of these formulas in terms of complex networks, with the long-term aim of explaining the success of these SAT solving techniques, and possibly improving them. We study the fractal dimension of SAT formulas and show that most industrial families of formulas are self-similar, with a small fractal dimension. We also show that this dimension is not affected by the addition of learnt clauses. We explore how the dimension of a formula, together with other graph properties, can be used to characterize SAT instances. Finally, we give empirical evidence that these graph properties can be used in state-of-the-art portfolios.
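The fractal dimension in question can be estimated on any graph representation of a formula by box covering; the sketch below is a generic greedy "burning" variant run on a stand-in graph (networkx's random geometric graph), since the exact formula-graph construction and covering method are not specified in the abstract.

```python
# Greedy box covering ("burning") to estimate a graph's fractal
# dimension; the formula graph itself is replaced by a toy stand-in.
import networkx as nx
import numpy as np

def box_count(G, r):
    """Cover G greedily with balls of radius r; return the box count."""
    uncovered = set(G.nodes)
    boxes = 0
    while uncovered:
        center = next(iter(uncovered))
        ball = nx.single_source_shortest_path_length(G, center, cutoff=r)
        uncovered -= set(ball)
        boxes += 1
    return boxes

G = nx.random_geometric_graph(300, 0.1, seed=0)  # stand-in for a formula graph
radii = np.array([1, 2, 3, 4])
counts = np.array([box_count(G, r) for r in radii])
# Self-similarity means N(r) ~ r^(-d); the fitted slope estimates -d.
d = -np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(round(d, 2))
```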


Extended Distributed Learning Automata: A New Method for Solving Stochastic Graph Optimization Problems

arXiv.org Artificial Intelligence

In this paper, a new structure of cooperative learning automata, the extended distributed learning automaton (eDLA), is introduced. Based on the proposed structure, a new iterative randomized heuristic algorithm for finding an optimal subgraph in a stochastic edge-weighted graph through sampling is proposed. Stochastic graphs are graphs whose edge weights are random variables with unknown distributions. We show that the proposed algorithm, built on this networked structure, can solve optimization problems on stochastic graphs with fewer samples than standard sampling. The algorithm uses an eDLA to find a policy that leads to an induced subgraph satisfying restrictions such as minimum or maximum weight (length). At each stage, the eDLA determines which edges to sample; this eDLA-based sampling may avoid unnecessary samples and hence reduce the time the algorithm needs to find the optimal subgraph. We show that the proposed method converges to the optimal solution, and that the probability of this convergence can be made arbitrarily close to 1 by using a sufficiently small learning rate. We also propose a new variance-aware threshold value that can significantly improve the convergence rate of the eDLA-based algorithm. Experiments show that the proposed algorithm is competitive in terms of solution quality.
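For readers unfamiliar with learning automata, the sketch below shows the classic linear reward-inaction ($L_{R-I}$) update that distributed structures like the eDLA build on; it is the generic single automaton, not the eDLA itself, and it illustrates the learning-rate claim above: a smaller rate $a$ concentrates probability on the best action more reliably, at the cost of more samples.

```python
# Generic linear reward-inaction (L_RI) automaton, the basic update
# that distributed structures like the eDLA build on (illustration
# only; this is not the eDLA itself).
import random

def l_ri(reward_prob, steps=20000, a=0.01, seed=0):
    """Smaller learning rate a -> convergence to the best action with
    probability arbitrarily close to 1, at the cost of more samples."""
    rng = random.Random(seed)
    n = len(reward_prob)
    p = [1.0 / n] * n
    for _ in range(steps):
        i = rng.choices(range(n), weights=p)[0]      # sample an action
        if rng.random() < reward_prob[i]:            # environment rewarded it
            p = [pj + a * (1 - pj) if j == i else (1 - a) * pj
                 for j, pj in enumerate(p)]          # reward-inaction update
    return p

print(l_ri([0.2, 0.8, 0.5]))   # probability mass concentrates on action 1
```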


Resolution and Parallelizability: Barriers to the Efficient Parallelization of SAT Solvers

AAAI Conferences

Recent attempts to create versions of Satisfiability (SAT) solvers that exploit parallel hardware and information sharing have met with limited success. In fact, the most successful parallel solvers in recent competitions were based on portfolio approaches with little to no exchange of information between processors. This experience contradicts the apparent parallelizability of exploring a combinatorial search space. We present evidence that this discrepancy can be explained by studying SAT solvers through a proof complexity lens, as resolution refutation engines. Starting with the observation that a recently studied measure of resolution proofs, namely depth, provides a (weak) upper bound on the best possible speedup achievable by such solvers, we empirically show the existence of bottlenecks to parallelizability that the resolution proofs typically generated by SAT solvers exhibit. Further, we propose a new measure of parallelizability based on the best-case makespan of an offline resource-constrained scheduling problem. This measure explicitly accounts for a bounded number of parallel processors and appears to empirically correlate with parallel speedups observed in practice. Our findings suggest that efficient parallelization of SAT solvers is not simply a matter of designing the right clause-sharing heuristics; even in the best case, it can be, and indeed is, hindered by the structure of the resolution proofs current SAT solvers typically produce.
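To make the scheduling view concrete, here is a hedged sketch: given a resolution proof as a DAG of unit-time derivation steps, greedy level-by-level list scheduling on $p$ processors yields a makespan in the spirit of, though not identical to, the paper's best-case measure.

```python
# Greedy level scheduling of a resolution DAG on p processors
# (hypothetical example; an illustration, not the paper's measure).
from collections import deque

def makespan(deps, p):
    """deps: clause -> clauses it is resolved from (unit-time steps)."""
    indeg = {c: len(ds) for c, ds in deps.items()}
    children = {c: [] for c in deps}
    for c, ds in deps.items():
        for d in ds:
            children[d].append(c)
    ready = deque(c for c, d in indeg.items() if d == 0)
    t = 0
    while ready:
        t += 1                                    # one parallel round
        for _ in range(min(p, len(ready))):       # run up to p steps
            c = ready.popleft()
            for ch in children[c]:
                indeg[ch] -= 1
                if indeg[ch] == 0:
                    ready.append(ch)
    return t

# Axioms c1, c2 resolve into c3, c4, which resolve into c5.
deps = {"c1": [], "c2": [], "c3": ["c1", "c2"],
        "c4": ["c1", "c2"], "c5": ["c3", "c4"]}
print(makespan(deps, 1), makespan(deps, 2))       # 5 rounds serial, 3 with p=2
```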


A Simple, but NP-Hard, Motion Planning Problem

AAAI Conferences

Determining the existence of a collision-free path between two points is one of the most fundamental questions in robotics. However, in situations where crossing an obstacle is costly but not impossible, it may be more appropriate to ask for the path that crosses the fewest obstacles. This can arise in autonomous outdoor navigation (where the obstacles are rough but not completely impassable terrain) as well as indoor navigation (where the obstacles are doors that can be opened if necessary). This problem, the minimum constraint removal problem, is at least as hard as the underlying path existence problem. In this paper, we demonstrate that the minimum constraint removal problem is NP-hard for navigation in the plane even when the obstacles are all convex polygons, a case where the path existence problem is very easy.
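Since the problem is NP-hard, exact solutions on small instances can simply enumerate obstacle subsets; the grid-world brute force below is my illustration of the problem statement, not an algorithm from the paper.

```python
# Brute-force minimum constraint removal on a grid: find the fewest
# obstacles whose removal leaves a free start-goal path (exponential,
# as expected for an NP-hard problem; fine only for tiny instances).
from itertools import combinations
from collections import deque

def reachable(free, start, goal):
    """BFS over free grid cells with 4-connectivity."""
    seen, q = {start}, deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            return True
        for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if c in free and c not in seen:
                seen.add(c)
                q.append(c)
    return False

def min_constraint_removal(obstacles, start, goal, w, h):
    """Try removing r = 0, 1, ... obstacles until a free path exists."""
    cells = {(x, y) for x in range(w) for y in range(h)}
    for r in range(len(obstacles) + 1):
        for removed in combinations(range(len(obstacles)), r):
            blocked = set()
            for i, obs in enumerate(obstacles):
                if i not in removed:
                    blocked |= obs
            free = cells - blocked
            if start in free and goal in free and reachable(free, start, goal):
                return r, removed
    return None

# Two overlapping walls in column x=1; removing wall 0 frees cell (1, 0).
walls = [{(1, y) for y in range(3)}, {(1, y) for y in range(1, 4)}]
print(min_constraint_removal(walls, (0, 0), (3, 0), 4, 4))  # -> (1, (0,))
```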


Measurements of collective machine intelligence

arXiv.org Artificial Intelligence

Independently of the ongoing research on measuring individual intelligence, we anticipate and provide a framework for measuring collective intelligence. Collective intelligence refers to the idea that several individuals can collaborate in order to achieve high levels of intelligence. We thus present some ideas on how the intelligence of a group can be measured, and we simulate such tests. Here, however, we focus on groups of artificial intelligence agents (i.e., machines). We explore how a group of agents is able to choose the appropriate problem and to specialize for a variety of tasks, an ability that is an important contributor to the increase of intelligence in a group (apart from the addition of more agents and the improvement due to common decision making). Our experiments offer some interesting insights into how (collective) intelligence can be modeled, how collective intelligence tests can be designed, and the underlying dynamics of collective intelligence. As it is useful for our simulations, we also provide some improvements to the threshold allocation model originally used in the area of swarm intelligence, which we generalize further here.
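As one concrete reference point, the sketch below implements the classic response-threshold allocation model from swarm intelligence that the abstract says it generalizes; the engagement probability $s^2/(s^2+\theta_i^2)$ is the standard form, while the stimulus dynamics and constants are simplified assumptions of mine.

```python
# Response-threshold task allocation: agent i engages in a task with
# stimulus s with probability s^2 / (s^2 + theta_i^2). The stimulus
# update constants here are illustrative assumptions.
import random

def run(thresholds, steps=500, seed=0):
    rng = random.Random(seed)
    s, counts = 1.0, [0] * len(thresholds)
    for _ in range(steps):
        engaged = [rng.random() < s * s / (s * s + th * th)
                   for th in thresholds]
        for i, e in enumerate(engaged):
            counts[i] += e
        # Undone work raises the stimulus; each engaged agent lowers it.
        s = max(0.0, s + 1.0 - 0.5 * sum(engaged))
    return s, counts

stim, counts = run([1.0, 5.0, 10.0])   # one low-threshold specialist
print(round(stim, 2), counts)          # the low-threshold agent works most
```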


Loss-Proportional Subsampling for Subsequent ERM

arXiv.org Machine Learning

We propose a sampling scheme suitable for reducing a data set prior to selecting a hypothesis with minimum empirical risk. The sampling only considers a subset of the ultimate (unknown) hypothesis set, but can nonetheless guarantee that the final excess risk will compare favorably with utilizing the entire original data set. We demonstrate the practical benefits of our approach on a large dataset which we subsample and subsequently fit with boosted trees.
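A hedged sketch of the general idea follows, with the pilot model, the probability floor, and the importance weights all being my assumptions rather than the paper's exact scheme: fit a cheap pilot, keep each example with probability proportional to its pilot loss, and reweight the kept examples for the final fit.

```python
# Assumed scheme: pilot fit -> per-example loss -> keep with probability
# proportional to loss (floored) -> importance-weighted final fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
     + rng.normal(size=20000) > 0).astype(int)

pilot = LogisticRegression().fit(X[:1000], y[:1000])   # cheap pilot model
p = pilot.predict_proba(X)[:, 1]
loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

keep = np.clip(loss / loss.max(), 0.05, 1.0)           # floor caps the weights
mask = rng.random(len(X)) < keep
final = LogisticRegression().fit(X[mask], y[mask],
                                 sample_weight=1.0 / keep[mask])
print(mask.mean(), final.score(X, y))                  # small subsample, similar accuracy
```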


Extending Modern SAT Solvers for Enumerating All Models

arXiv.org Artificial Intelligence

In this paper, we address the problem of enumerating all models of a Boolean formula in conjunctive normal form (CNF). We propose an extension of CDCL-based SAT solvers to deal with this fundamental problem. We then provide an experimental evaluation of our proposed SAT model enumeration algorithms, both on satisfiable instances taken from the last SAT challenge and on instances from SAT-based encodings of sequence mining problems.
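The baseline that such extensions improve on is the textbook blocking-clause loop, sketched below on top of a CDCL solver via the python-sat package (assuming it is installed); the paper's proposed algorithms refine this, and this sketch is only the naive baseline.

```python
# Naive blocking-clause enumeration on top of a CDCL solver,
# assuming the python-sat (pysat) package is available.
from pysat.solvers import Glucose3

clauses = [[1, 2], [-1, -2, 3]]                 # small CNF over vars 1..3
with Glucose3(bootstrap_with=clauses) as solver:
    while solver.solve():
        model = solver.get_model()              # one total satisfying assignment
        print(model)
        solver.add_clause([-lit for lit in model])  # block exactly this model
```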