More data means less inference: A pseudo-max approach to structured learning

Neural Information Processing Systems

The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference in this setting are intractable. Here we show that it is possible to circumvent this difficulty when the input distribution is rich enough, via a method similar in spirit to pseudo-likelihood. We show how our new method achieves consistency, and illustrate empirically that it indeed performs as well as exact methods when sufficiently large training sets are used.
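
To make the decomposition concrete, here is a minimal NumPy sketch of a pseudo-likelihood-style surrogate for structured prediction: instead of the intractable global max-margin constraint, each node's gold label must beat every alternative label when its neighbors are clamped to their gold labels. The function name, the shared symmetric pairwise potential, and the Hamming margin are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def pseudomax_loss(unary, pairwise, edges, y):
    """Per-node hinge loss with neighbors clamped to gold labels.

    unary    : (n, k) array, unary[i, l] = score of label l at node i
    pairwise : (k, k) symmetric array, shared pairwise potential (assumed)
    edges    : list of (i, j) node pairs of an arbitrary graph
    y        : (n,) gold labeling
    """
    n, k = unary.shape
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    total = 0.0
    for i in range(n):
        # Conditional score of each candidate label at node i,
        # with all neighbors fixed to their gold labels.
        scores = unary[i].copy()
        for j in nbrs[i]:
            scores = scores + pairwise[:, y[j]]
        margin = (np.arange(k) != y[i]).astype(float)  # 0-1 Hamming margin
        total += max(0.0, np.max(scores + margin) - scores[y[i]])
    return total
```

Because each term touches only one node and its clamped neighbors, the loss is computable without any global inference over the graph, which is what makes a consistency argument possible as the training set grows.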


New Encoding for Translating Pseudo-Boolean Constraints into SAT

AAAI Conferences

A Pseudo-Boolean (PB) constraint is a linear arithmetic constraint over Boolean variables. PB constraints are convenient and widely used in expressing NP-complete problems. In this paper, we introduce a new two-step method for transforming PB constraints into propositional CNF formulas. The first step rewrites each PB constraint as a conjunction of PB-Mod constraints, which are easier to transform to CNF. In the second step, we translate each PB-Mod constraint obtained in the previous step into CNF. The resulting CNF formulas are small, and unit propagation can derive facts on them that it cannot derive on the CNF formulas obtained by other commonly used transformations. The Number Partitioning Problem (NPP) asks whether a given set of integers can be partitioned into two subsets S and T such that the sum of the numbers in S equals the sum of the numbers in T. Expressing an instance of NPP as a PB constraint is straightforward, so we used NPP as the benchmark in our experiments. The results show that our proposed encoding outperforms the other SAT-based encodings when the coefficients are large enough.
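
As a concrete illustration of the reduction the abstract describes, the sketch below (helper names are ours) expresses an NPP instance as a single PB equality over Boolean variables x_i, where x_i = 1 places the i-th number in subset S:

```python
def npp_as_pb(nums):
    """Express a Number Partitioning instance as one PB equality.

    The instance is a yes-instance iff sum(nums[i] * x_i) == total / 2
    has a Boolean solution. Returns (coeffs, target), or None when the
    total is odd (trivially unsatisfiable).
    """
    total = sum(nums)
    if total % 2:
        return None
    return list(nums), total // 2

def satisfies(nums, assignment):
    """Check a candidate Boolean assignment against the PB equality."""
    pb = npp_as_pb(nums)
    if pb is None:
        return False
    coeffs, target = pb
    return sum(a for a, x in zip(coeffs, assignment) if x) == target

print(satisfies([3, 1, 1, 2, 2, 1], [1, 0, 0, 0, 1, 0]))  # True: 3 + 2 == 5
```

A SAT encoding such as the two-step PB-Mod translation then turns this single equality into CNF clauses; note that it is the size of the coefficients, not the number of variables, that makes that step challenging.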


A New Look at BDDs for Pseudo-Boolean Constraints

Journal of Artificial Intelligence Research

Pseudo-Boolean constraints are omnipresent in practical applications, and thus a significant effort has been devoted to the development of good SAT encoding techniques for them. Some of these encodings first construct a Binary Decision Diagram (BDD) for the constraint, and then encode the BDD into a propositional formula. These BDD-based approaches have some important advantages, such as not being dependent on the size of the coefficients, or being able to share the same BDD for representing many constraints. We first focus on the size of the resulting BDDs, which was considered to be an open problem in our research community. We report on previous work where it was proved that there are Pseudo-Boolean constraints for which no polynomial BDD exists. We also give an alternative and simpler proof assuming that NP is different from Co-NP. More interestingly, here we also show how to overcome the possible exponential blowup of BDDs by coefficient decomposition. This allows us to give the first polynomial generalized arc-consistent ROBDD-based encoding for Pseudo-Boolean constraints. Finally, we focus on practical issues: we show how to efficiently construct such ROBDDs, how to encode them into SAT with only 2 clauses per node, and present experimental results that confirm that our approach is competitive with other encodings and state-of-the-art Pseudo-Boolean solvers.
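
The core of the BDD-based approach is that many partial assignments lead to the same residual subproblem, so isomorphic subtrees can be merged. The sketch below is a simplification we wrote for illustration, not the paper's ROBDD construction or its 2-clauses-per-node encoding: it counts the nodes of a decision diagram for $\sum_i a_i x_i \le K$ by memoizing on (level, remaining slack).

```python
def bdd_size(coeffs, bound):
    """Count nodes of a decision diagram for sum(coeffs[i] * x_i) <= bound.

    Memoizing on (level, remaining slack) merges identical subproblems,
    the dynamic-programming view of BDD construction for PB constraints.
    Real ROBDD encoders additionally merge slack values that induce the
    same subfunction (interval reduction), so this sketch may over-count.
    """
    suffix = [0] * (len(coeffs) + 1)
    for i in range(len(coeffs) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + coeffs[i]

    nodes = set()

    def build(i, slack):
        if slack < 0:
            return          # constraint already violated: False terminal
        if suffix[i] <= slack:
            return          # always satisfied from here on: True terminal
        if (i, slack) in nodes:
            return          # subtree already built and shared
        nodes.add((i, slack))
        build(i + 1, slack)                  # branch x_i = 0
        build(i + 1, slack - coeffs[i])      # branch x_i = 1

    build(0, bound)
    return len(nodes)

print(bdd_size([3, 5, 7, 9], 12))  # node count for a tiny example
```

Roughly speaking, the exponential blowup happens when the number of distinct reachable slack values explodes; the coefficient decomposition mentioned in the abstract is, as we understand it, a way of splitting large coefficients so that this number stays polynomially bounded.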


Online convex optimization for cumulative constraints

Neural Information Processing Systems

We propose algorithms for online convex optimization that lead to cumulative squared constraint violations of the form $\sum\limits_{t=1}^T\big([g(x_t)]_+\big)^2=O(T^{1-\beta})$, where $\beta\in(0,1)$. Previous literature has focused on long-term constraints of the form $\sum\limits_{t=1}^Tg(x_t)$, where strictly feasible solutions can cancel out the effects of violated constraints. In contrast, the new form heavily penalizes large constraint violations, and such cancellation effects cannot occur. Furthermore, useful bounds on the single-step constraint violation $[g(x_t)]_+$ are derived. For convex objectives, our regret bounds generalize existing bounds, and for strongly convex objectives we give improved regret bounds. In numerical experiments, we show that our algorithm closely follows the constraint boundary, leading to low cumulative violation.
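
The distinction between the two violation measures is easy to see numerically. In this small sketch (the names are ours), one large violation can be hidden by strictly feasible steps under the long-term sum, but not under the clipped squared form:

```python
import numpy as np

def violation_metrics(g_vals):
    """Compare the two notions of cumulative violation from the abstract.

    g_vals[t] = g(x_t): positive means the constraint is violated at
    step t, negative means the iterate is strictly feasible.
    """
    g = np.asarray(g_vals, dtype=float)
    long_term = g.sum()                   # sum_t g(x_t): violations can cancel
    clipped = np.maximum(g, 0.0)          # [g(x_t)]_+
    cumulative_sq = (clipped ** 2).sum()  # sum_t ([g(x_t)]_+)^2: no cancellation
    return long_term, cumulative_sq

# One large violation offset by feasible steps: the long-term sum is 0,
# yet the squared cumulative form still charges the full violation.
print(violation_metrics([10.0, -5.0, -5.0]))  # (0.0, 100.0)
```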


Online Convex Optimization with Stochastic Constraints

Neural Information Processing Systems

This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. across rounds. This formulation arises naturally when decisions are restricted by stochastic environments or by deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long-term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves $O(\sqrt{T})$ expected regret and constraint violations and $O(\sqrt{T}\log(T))$ high-probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.
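
One standard template for this class of problem is a primal-dual scheme with a virtual queue that tracks accumulated constraint violation. The sketch below is our own illustration of that template on a toy one-dimensional instance, not necessarily the paper's algorithm, and every name and parameter in it is an assumption:

```python
import numpy as np

def virtual_queue_oco(loss_grad, cons, cons_grad, x0, T, eta=0.05, V=1.0):
    """Drift-plus-penalty style sketch for OCO with a stochastic constraint.

    Each round takes a gradient step on V * f_t(x) + Q_t * g_t(x) and then
    updates the virtual queue Q_{t+1} = max(Q_t + g_t(x_{t+1}), 0), so Q_t
    grows with accumulated violation and pushes iterates back to feasibility.
    """
    x, Q = np.asarray(x0, dtype=float), 0.0
    for t in range(T):
        grad = V * loss_grad(x, t) + Q * cons_grad(x, t)
        x = np.clip(x - eta * grad, -1.0, 1.0)  # projection onto a box
        Q = max(Q + cons(x, t), 0.0)            # queue tracks violation
    return x

# Toy instance: minimize (x - 1)^2 subject to E[x] <= 0.3, where the
# constraint is observed with i.i.d. noise each round.
rng = np.random.default_rng(0)
x_final = virtual_queue_oco(
    loss_grad=lambda x, t: 2.0 * (x - 1.0),
    cons=lambda x, t: x[0] - 0.3 + 0.1 * rng.standard_normal(),
    cons_grad=lambda x, t: np.ones_like(x),
    x0=[0.0], T=2000)
print(x_final)  # expected to hover near the constraint boundary x = 0.3
```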