Computational Learning Theory


Data Science & Machine Learning (Theory + Projects) A-Z 90 HOURS

#artificialintelligence

Electrification was, without a doubt, the greatest engineering marvel of the 20th century. The electric motor was invented as far back as 1821, and the electrical circuit was mathematically analyzed in 1827. Yet factory electrification, household electrification, and railway electrification all took off slowly, several decades later. The field of AI was formally founded in 1956. But it is only now, more than six decades later, that AI is expected to revolutionize the way humanity will live and work in the coming decades.


Computing Complexity-aware Plans Using Kolmogorov Complexity

arXiv.org Artificial Intelligence

In this paper, we introduce complexity-aware planning for finite-horizon deterministic finite automata with rewards as outputs, based on Kolmogorov complexity. Kolmogorov complexity is considered since it can detect computational regularities of deterministic optimal policies. We present a planning objective yielding an explicit trade-off between a policy's performance and complexity. It is proven that maximising this objective is non-trivial in the sense that dynamic programming is infeasible. We present two algorithms obtaining low-complexity policies, where the first algorithm obtains a low-complexity optimal policy, and the second algorithm finds a policy maximising performance while maintaining local (stage-wise) complexity constraints. We evaluate the algorithms on a simple navigation task for a mobile robot, where our algorithms yield low-complexity policies that concur with intuition.
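
Since Kolmogorov complexity is uncomputable, any implementation has to work with a proxy, and compressed length is the usual stand-in. Below is a minimal sketch along those lines; the zlib proxy, the action-sequence encoding, and the `lam` trade-off weight are illustrative assumptions, not the paper's actual construction.

```python
import zlib

def complexity_proxy(actions):
    """Approximate Kolmogorov complexity of a policy's action sequence.

    K(x) is uncomputable, so we use the length of a zlib-compressed
    encoding as a crude upper-bound proxy (a common substitute).
    """
    return len(zlib.compress(bytes(actions)))

def complexity_aware_score(reward, actions, lam=0.1):
    # Illustrative trade-off: performance minus lam * complexity proxy.
    # The paper's actual objective differs; lam is a hypothetical weight.
    return reward - lam * complexity_proxy(actions)

# A repetitive (regular) policy compresses better than a random-looking one:
regular = [0, 1] * 50    # alternating actions
irregular = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1] * 5
print(complexity_proxy(regular), complexity_proxy(irregular))
```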


Discovering Useful Compact Sets of Sequential Rules in a Long Sequence

arXiv.org Artificial Intelligence

We are interested in understanding the underlying generation process for long sequences of symbolic events. To do so, we propose COSSU, an algorithm to mine small and meaningful sets of sequential rules. The rules are selected using an MDL-inspired criterion that favors compactness and relies on a novel rule-based encoding scheme for sequences. Our evaluation shows that COSSU can successfully retrieve relevant sets of closed sequential rules from a long sequence. Such rules constitute an interpretable model that exhibits competitive accuracy for the tasks of next-element prediction and classification.
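
The MDL idea behind such criteria is compact: pick the rule set minimizing L(rules) + L(sequence | rules), so that model size and explanatory power are traded off in bits. Here is a toy two-part score under uniform codes; COSSU's actual rule-based encoding scheme is more refined, and the helper names below are hypothetical.

```python
import math

def bits(n):
    # Toy code length (in bits) for one symbol from an alphabet of size n.
    return math.log2(n + 1)

def model_length(rules, alphabet_size):
    # Cost of describing each rule: its symbols under a uniform code.
    return sum((len(lhs) + len(rhs)) * bits(alphabet_size)
               for lhs, rhs in rules)

def data_length(sequence, covered, alphabet_size):
    # Symbols explained by rules are cheap (one "rule fired" flag each);
    # uncovered symbols are encoded literally.
    residual = len(sequence) - covered
    return covered * 1.0 + residual * bits(alphabet_size)

def mdl_score(rules, sequence, covered, alphabet_size):
    # Two-part MDL: a shorter total description means a better rule set.
    return (model_length(rules, alphabet_size)
            + data_length(sequence, covered, alphabet_size))

rules = [((1, 2), (3,))]   # toy rule "1 2 -> 3"
print(mdl_score(rules, sequence=[1, 2, 3] * 40, covered=40, alphabet_size=10))
```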


Learning and Decision-Making with Data: Optimal Formulations and Phase Transitions

arXiv.org Machine Learning

We study the problem of designing optimal learning and decision-making formulations when only historical data is available. Prior work typically commits to a particular class of data-driven formulation and subsequently tries to establish out-of-sample performance guarantees. We take here the opposite approach. We first define a sensible yardstick with which to measure the quality of any data-driven formulation and subsequently seek an optimal such formulation. Informally, any data-driven formulation can be seen to balance a measure of proximity of the estimated cost to the actual cost against a guaranteed level of out-of-sample performance. Given an acceptable level of out-of-sample performance, we explicitly construct a data-driven formulation that is uniformly closer to the true cost than any other formulation enjoying the same out-of-sample performance. We show the existence of three distinct out-of-sample performance regimes (a superexponential regime, an exponential regime and a subexponential regime) between which the nature of the optimal data-driven formulation experiences a phase transition. The optimal data-driven formulations can be interpreted as a classically robust formulation in the superexponential regime, an entropic distributionally robust formulation in the exponential regime and finally a variance-penalized formulation in the subexponential regime. This final observation unveils a surprising connection between these three, at first glance seemingly unrelated, data-driven formulations, which until now remained hidden.
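
For a concrete feel for the exponential regime, the entropic (KL-ball) distributionally robust formulation admits a standard one-dimensional dual, a classical duality result not specific to this paper. The sketch below evaluates that dual numerically; the radius and the loss sample are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def kl_dro_value(losses, radius):
    """Worst-case expected loss over {Q : KL(Q || empirical) <= radius}.

    Uses the standard Donsker-Varadhan dual:
        inf_{a > 0}  a * radius + a * log( mean(exp(losses / a)) )
    """
    losses = np.asarray(losses, dtype=float)
    n = len(losses)

    def dual(a):
        return a * radius + a * (logsumexp(losses / a) - np.log(n))

    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded")
    return res.fun

losses = np.random.default_rng(0).normal(1.0, 0.5, size=200)
print(kl_dro_value(losses, radius=0.05))   # slightly above the sample mean
```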


On Tilted Losses in Machine Learning: Theory and Applications

arXiv.org Machine Learning

Exponential tilting is a technique commonly used in fields such as statistics, probability, information theory, and optimization to create parametric distribution shifts. Despite its prevalence in related fields, tilting has not seen widespread use in machine learning. In this work, we aim to bridge this gap by exploring the use of tilting in risk minimization. We study a simple extension to ERM -- tilted empirical risk minimization (TERM) -- which uses exponential tilting to flexibly tune the impact of individual losses. The resulting framework has several useful properties: we show that TERM can increase or decrease the influence of outliers to enable fairness or robustness, respectively; that it has variance-reduction properties that can benefit generalization; and that it can be viewed as a smooth approximation to a superquantile method. Our work makes rigorous connections between TERM and related objectives, such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization (DRO). We develop batch and stochastic first-order optimization methods for solving TERM, provide convergence guarantees for the solvers, and show that the framework can be solved efficiently relative to common alternatives. Finally, we demonstrate that TERM can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance. Despite the straightforward modification TERM makes to traditional ERM objectives, we find that the framework can consistently outperform ERM and deliver competitive performance with state-of-the-art, problem-specific approaches.
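
The TERM objective itself is compact enough to state and compute: it replaces the ERM average with (1/t) log of the mean of exp(t * loss). A minimal sketch, using scipy's logsumexp for numerical stability; the toy loss vector is illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def tilted_loss(losses, t):
    """Tilted empirical risk: (1/t) * log( mean( exp(t * losses) ) ).

    t -> 0 recovers the ERM average; t > 0 emphasizes the largest
    losses (approaching max), t < 0 downweights them (approaching min).
    """
    losses = np.asarray(losses, dtype=float)
    if abs(t) < 1e-12:
        return losses.mean()
    return (logsumexp(t * losses) - np.log(len(losses))) / t

losses = np.array([0.1, 0.2, 0.15, 5.0])   # one outlier
print(tilted_loss(losses, t=-2.0))   # close to the typical losses
print(tilted_loss(losses, t=+2.0))   # dominated by the outlier
```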


On Dedicated CDCL Strategies for PB Solvers

arXiv.org Artificial Intelligence

Current implementations of pseudo-Boolean (PB) solvers working on native PB constraints are based on the CDCL architecture which powers highly efficient modern SAT solvers. In particular, such PB solvers not only implement a (cutting-planes-based) conflict analysis procedure, but also complementary strategies for components that are crucial for the efficiency of CDCL, namely branching heuristics, learned constraint deletion and restarts. However, these strategies are mostly reused by PB solvers without considering the particular form of the PB constraints they deal with. In this paper, we present and evaluate different ways of adapting CDCL strategies to take the specificities of PB constraints into account while preserving the behavior they have in the clausal setting. We implemented these strategies in two different solvers, namely Sat4j (for which we consider three configurations) and RoundingSat. Our experiments show that these dedicated strategies improve, sometimes significantly, the performance of these solvers on both decision and optimization problems.
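
As a toy illustration of what a coefficient-aware CDCL strategy might look like, the sketch below adapts a VSIDS-style activity bump so that literals with larger coefficients in a learned PB constraint receive larger bumps. This weighting is a hypothetical example, not the actual scheme implemented in Sat4j or RoundingSat.

```python
def bump_activities(activity, conflict_constraint, degree, decay=0.95):
    """Toy VSIDS-style bump adapted to PB constraints.

    conflict_constraint: dict literal -> coefficient from a learned
    PB constraint a_1*x_1 + ... + a_n*x_n >= degree.
    Instead of a uniform bump (as in clausal CDCL), each literal is
    bumped in proportion to its coefficient's share of the degree,
    a hypothetical weighting used here purely for illustration.
    """
    for lit in activity:
        activity[lit] *= decay                       # age all variables
    for lit, coeff in conflict_constraint.items():
        activity[lit] = activity.get(lit, 0.0) + min(coeff / degree, 1.0)
    return activity

activity = {1: 0.2, 2: 0.1, 3: 0.0}
learned = {1: 3, 3: 1}    # learned constraint 3*x1 + 1*x3 >= 2
print(bump_activities(activity, learned, degree=2))
```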


UW to lead new NSF institute for using artificial intelligence to understand dynamic systems

#artificialintelligence

The U.S. National Science Foundation today announced 11 new artificial-intelligence research institutes, including one led by the University of Washington. These institutes are part of a $220 million investment spanning seven research areas in AI. Each institute will receive about $20 million over five years. The UW-led AI Institute for Dynamic Systems will focus on fundamental AI and machine learning theory, algorithms and applications for real-time learning and control of complex dynamic systems, which describe chaotic situations where conditions are constantly shifting and hard to predict. "The engineering sciences are undergoing a revolution that is aided by machine learning and AI algorithms," said institute director J. Nathan Kutz, a UW professor of applied mathematics.


Efficient Algorithms for Learning from Coarse Labels

arXiv.org Machine Learning

For many learning problems one may not have access to fine grained label information; e.g., an image can be labeled as husky, dog, or even animal depending on the expertise of the annotator. In this work, we formalize these settings and study the problem of learning from such coarse data. Instead of observing the actual labels from a set $\mathcal{Z}$, we observe coarse labels corresponding to a partition of $\mathcal{Z}$ (or a mixture of partitions). Our main algorithmic result is that essentially any problem learnable from fine grained labels can also be learned efficiently when the coarse data are sufficiently informative. We obtain our result through a generic reduction for answering Statistical Queries (SQ) over fine grained labels given only coarse labels. The number of coarse labels required depends polynomially on the information distortion due to coarsening and the number of fine labels $|\mathcal{Z}|$. We also investigate the case of (infinitely many) real valued labels focusing on a central problem in censored and truncated statistics: Gaussian mean estimation from coarse data. We provide an efficient algorithm when the sets in the partition are convex and establish that the problem is NP-hard even for very simple non-convex sets.
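
To see why sufficiently informative coarse labels suffice, note that each coarse part's probability is a known linear combination of fine-label probabilities, so observing several partitions yields a solvable linear system. Below is a minimal sketch under that simplification (known partitions, moment matching via nonnegative least squares); the paper's actual SQ reduction is more general, and the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def recover_fine_distribution(partitions, coarse_freqs, n_fine):
    """Estimate fine-label probabilities from coarse observations.

    partitions: list of partitions, each a list of parts (sets of fine labels).
    coarse_freqs: matching list of observed frequency vectors, one per partition.
    Each part's probability is the sum of its fine labels' probabilities,
    giving a linear system A p = q solved under p >= 0. This is a toy
    moment-matching stand-in for the paper's SQ reduction.
    """
    rows, rhs = [], []
    for parts, freqs in zip(partitions, coarse_freqs):
        for part, f in zip(parts, freqs):
            row = np.zeros(n_fine)
            row[list(part)] = 1.0
            rows.append(row)
            rhs.append(f)
    p, _ = nnls(np.array(rows), np.array(rhs))
    return p / p.sum()   # renormalize to a distribution

# Two partitions of labels {0, 1, 2}: {{0}, {1,2}} and {{0,1}, {2}}
partitions = [[{0}, {1, 2}], [{0, 1}, {2}]]
coarse_freqs = [[0.5, 0.5], [0.8, 0.2]]
print(recover_fine_distribution(partitions, coarse_freqs, n_fine=3))
# recovers the fine distribution (0.5, 0.3, 0.2)
```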


Evolving Digital Circuits for the Knapsack Problem

arXiv.org Artificial Intelligence

Multi Expression Programming (MEP) is a Genetic Programming variant that uses linear chromosomes for solution encoding. A unique feature of MEP is its ability of encoding multiple solutions of a problem in a single chromosome. In this paper we use Multi Expression Programming for evolving digital circuits for a well-known NP-Complete problem: the knapsack (subset sum) problem. Numerical experiments show that Multi Expression Programming performs well on the considered test problems.
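
The multiple-solutions-per-chromosome feature is easy to illustrate: every gene of an MEP chromosome is either a terminal or a gate applied to earlier genes, so every gene's value is a candidate circuit output and fitness can be taken over all of them. A minimal sketch with a hypothetical gene encoding and gate set; for the knapsack setting, fitness would score each gene's output against the target predicate and keep the best gene.

```python
import operator

OPS = {"and": operator.and_, "or": operator.or_, "xor": operator.xor}

def evaluate_chromosome(genes, inputs):
    """Evaluate an MEP linear chromosome on boolean inputs.

    genes: list where each gene is either ("in", k), reading input bit k,
    or (op, i, j), applying a gate to the values of earlier genes i, j
    (i, j < current index, which keeps the encoding acyclic).
    Returns the value of every gene: each one is a candidate circuit
    output, which is how one chromosome encodes multiple solutions.
    """
    values = []
    for gene in genes:
        if gene[0] == "in":
            values.append(inputs[gene[1]])
        else:
            op, i, j = gene
            values.append(OPS[op](values[i], values[j]))
    return values

# Toy circuit over bits (x0, x1): gene 2 = x0 AND x1, gene 3 = x0 XOR x1
genes = [("in", 0), ("in", 1), ("and", 0, 1), ("xor", 0, 1)]
print(evaluate_chromosome(genes, inputs=[1, 0]))   # [1, 0, 0, 1]
```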


Bounding the Sample Size of a Machine Learning Algorithm

#artificialintelligence

One common problem with machine learning algorithms is that we don't know how much training data we need. A common workaround is the familiar strategy: keep training until the training error stops decreasing. However, this still leaves open questions. How do we know we're not stuck in a local minimum? What if the training error behaves strangely, sometimes staying flat over training iterations and sometimes decreasing sharply?
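
Computational learning theory has a direct answer for the finite-hypothesis-class case: the classic PAC bound. Below is a minimal sketch of the standard sample-size bound for a consistent learner in the realizable setting; the example numbers are illustrative.

```python
import math

def pac_sample_size(h_size, eps, delta):
    """Classic PAC bound for a finite hypothesis class (realizable case).

    Any consistent learner that sees at least
        m >= (1/eps) * (ln|H| + ln(1/delta))
    i.i.d. examples outputs, with probability >= 1 - delta, a hypothesis
    with true error at most eps. No local-minimum caveats needed:
    the bound depends only on |H|, eps and delta.
    """
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# e.g. |H| = 2**20 hypotheses, 5% error, 99% confidence:
print(pac_sample_size(2**20, eps=0.05, delta=0.01))   # -> 370 examples
```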