

Recovering from Selection Bias in Causal and Statistical Inference

AAAI Conferences

Selection bias is caused by preferential exclusion of units from the samples and represents a major obstacle to valid causal and statistical inferences; it cannot be removed by randomized experiments and can rarely be detected in either experimental or observational studies. In this paper, we provide complete graphical and algorithmic conditions for recovering conditional probabilities from selection-biased data. We also provide graphical conditions for recoverability when unbiased data is available over a subset of the variables. Finally, we provide a graphical condition that generalizes the backdoor criterion and serves to recover causal effects when the data is collected under preferential selection.
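
The simplest case of the recoverability conditions described above can be made concrete: if the selection mechanism S depends only on the conditioning set X, so that Y is independent of S given X, then P(y | x) estimated from the biased sample coincides with the population value. The following sketch (not the authors' code; the simulated mechanism and variable names are assumptions) illustrates this on synthetic data.

# Minimal sketch: when Y is independent of the selection indicator S given X,
# the conditional P(y | x) computed from selected units only is unbiased.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

x = rng.binomial(1, 0.5, n)                       # covariate
y = rng.binomial(1, np.where(x == 1, 0.8, 0.3))   # outcome depends on x
s = rng.binomial(1, np.where(x == 1, 0.9, 0.2))   # selection depends on x only (assumption)

def p_y1_given_x1(xs, ys):
    mask = xs == 1
    return ys[mask].mean()

population_est = p_y1_given_x1(x, y)                    # all units
selected_est   = p_y1_given_x1(x[s == 1], y[s == 1])    # selection-biased sample
print(population_est, selected_est)                     # both close to 0.8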


Optimal Rectangle Packing on Non-Square Benchmarks

AAAI Conferences

The rectangle packing problem consists of finding an enclosing rectangle of smallest area that can contain a given set of rectangles without overlap. We propose two new benchmarks, one where the orientation of the rectangles is fixed and one where it is free, that include rectangles of various aspect ratios. The new benchmarks avoid certain properties of easy instances, which we identify as instances where rectangles have dimensions in common or where a few rectangles occupy most of the area. Our benchmarks are much more difficult for the previous state-of-the-art solver, requiring orders of magnitude more time than similar-sized instances from a popular benchmark consisting only of squares. On the new benchmarks, we improve upon the previous strategy used to handle dominance conditions, define a variable order over non-square rectangles that generalizes previous strategies, and present a way to adjust the sizes of the intervals of values for each rectangle's x-coordinate. Using these techniques together, we can solve the new oriented benchmark about 500 times faster, and the new unoriented benchmark about 40 times faster, than the previous state of the art.
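
To make the underlying containment subproblem concrete, here is a minimal backtracking sketch (not the authors' solver): it only checks whether a given set of oriented, integer-sized rectangles fits into a candidate W x H enclosing rectangle without overlap, and it omits the dominance conditions, the variable order over non-square rectangles, and the interval adjustments described above.

# Backtracking feasibility check: place rectangles one at a time on a W x H grid.
def fits(rects, W, H):
    grid = [[False] * W for _ in range(H)]

    def free(x, y, w, h):
        return all(not grid[y + dy][x + dx] for dy in range(h) for dx in range(w))

    def mark(x, y, w, h, value):
        for dy in range(h):
            for dx in range(w):
                grid[y + dy][x + dx] = value

    def place(i):
        if i == len(rects):
            return True                       # every rectangle has been placed
        w, h = rects[i]
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                if free(x, y, w, h):
                    mark(x, y, w, h, True)
                    if place(i + 1):
                        return True
                    mark(x, y, w, h, False)   # undo and try the next position
        return False

    return place(0)

print(fits([(3, 2), (2, 2), (1, 4)], 4, 4))   # True: the three rectangles fit in a 4 x 4 square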


1.6-Bit Pattern Databases

AAAI Conferences

We present a new technique to compress pattern databases to provide consistent heuristics without loss of information. We store the heuristic estimate modulo three, requiring only two bits per entry, or only 1.6 bits in a more compact representation. This allows us to store a pattern database with more entries in the same amount of memory as an uncompressed pattern database. These compression techniques are most useful where lossy compression using cliques or their generalization is not possible, or where adjacent entries in the pattern database are not highly correlated. We compare both techniques to the best existing compression methods for the Top-Spin puzzle, Rubik's cube, the 4-peg Towers of Hanoi problem, and the 24 puzzle. Under certain conditions, our best implementations for the Top-Spin puzzle and Rubik's cube outperform the respective state-of-the-art solvers by a factor of four.
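
The two ideas in the abstract can be sketched briefly, assuming a unit-cost domain and a plain list of exact heuristic values as the uncompressed table (the pattern-database indexing and the search code are omitted): five base-3 digits fit in one byte because 3^5 = 243 <= 256, which gives 8/5 = 1.6 bits per entry, and a child's true heuristic value is recovered from its stored residue and the parent's known value, since a consistent heuristic changes by at most one across a unit-cost edge.

def pack(full_pdb):
    # Pack five (h mod 3) digits per byte: 3^5 = 243 <= 256.
    packed = bytearray((len(full_pdb) + 4) // 5)
    for i, h in enumerate(full_pdb):
        byte, pos = divmod(i, 5)
        packed[byte] += (h % 3) * 3 ** pos
    return packed

def residue(packed, i):
    # Extract the stored value of entry i modulo 3.
    byte, pos = divmod(i, 5)
    return (packed[byte] // 3 ** pos) % 3

def child_h(parent_h, child_residue):
    # Consistency on a unit-cost edge: the child's h is the unique value in
    # {parent_h - 1, parent_h, parent_h + 1} congruent to the stored residue.
    for h in (parent_h - 1, parent_h, parent_h + 1):
        if h % 3 == child_residue:
            return h

pdb = [0, 1, 2, 3, 2, 1, 2, 3, 4, 3]          # toy table of exact heuristic values
packed = pack(pdb)
assert all(residue(packed, i) == h % 3 for i, h in enumerate(pdb))
assert child_h(parent_h=3, child_residue=residue(packed, 8)) == 4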


A Lower Bound on the Size of Decomposable Negation Normal Form

AAAI Conferences

We consider in this paper the size of a Decomposable Negation Normal Form (DNNF) that respects a given vtree (known as structured DNNF). This representation of propositional knowledge bases was introduced recently and shown to include OBDD as a special case (an OBDD variable ordering is a special type of vtree). We provide a lower bound on the size of any structured DNNF and discuss three particular instances of this bound, which correspond to three distinct subsets of structured DNNF (including OBDD). We show that our lower bound subsumes the influential lower bound of Sieling and Wegener for OBDDs, which is typically used for identifying variable orderings that will cause a blowup in OBDD size. We show that our lower bound allows for similar usage, but with respect to vtrees, which provide structure for DNNFs in the same way that variable orderings provide structure for OBDDs. Finally, we discuss some of the theoretical and practical implications of our lower bound.
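
For reference, the Sieling and Wegener bound mentioned above can be stated as follows (our paraphrase; the notation is not taken from the paper). For a Boolean function $f$ and the variable ordering $\pi = x_1 < \dots < x_n$, the reduced OBDD contains exactly one node labeled $x_{i+1}$ for each distinct subfunction obtained by fixing $x_1, \dots, x_i$ that still depends on $x_{i+1}$, so in particular

\[
  \bigl|\mathrm{OBDD}_{\pi}(f)\bigr| \;\ge\; \max_{0 \le i < n}
  \Bigl|\bigl\{\, f|_{x_1 = a_1, \dots, x_i = a_i} \;:\; (a_1, \dots, a_i) \in \{0,1\}^i
  \text{ and } f|_{x_1 = a_1, \dots, x_i = a_i} \text{ depends on } x_{i+1} \,\bigr\}\Bigr|.
\]

The structured DNNF bound plays the analogous role, but with the counts taken with respect to the nodes of a vtree rather than the prefixes of a variable ordering.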


Independent Additive Heuristics Reduce Search Multiplicatively

AAAI Conferences

This paper analyzes the performance of IDA* using additive heuristics. We show that the reduction in the number of nodes expanded using multiple independent additive heuristics is the product of the reductions achieved by the individual heuristics. First, we formally state and prove this result on unit edge-cost undirected graphs with a uniform branching factor. Then, we empirically verify it on a model of the 4-peg Towers of Hanoi problem. We also run experiments on the multiple sequence alignment problem, showing more general applicability to non-unit edge-cost directed graphs. Then, we extend an existing model for predicting the performance of IDA* with a single pattern database to the case of independent additive disjoint pattern databases. This is the first analysis of the performance of independent additive heuristics.
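
The multiplicative claim can be made concrete with a small sketch (all names are hypothetical, not from the paper). An additive disjoint pattern-database heuristic sums lookups over disjoint groups of state variables, and the result above says that if one database alone reduces the nodes IDA* expands by a factor r1 and an independent second database reduces them by r2, their sum reduces them by roughly r1 * r2.

def additive_h(state, pdbs):
    # `pdbs` is a list of (project, table) pairs: `project` abstracts the state to the
    # variable group handled by `table`; disjointness makes the summed lookups admissible.
    return sum(table[project(state)] for project, table in pdbs)

# The prediction in numbers: if brute-force IDA* would expand N nodes, the first heuristic
# alone reduces that to N / r1, the second to N / r2, and their sum to about N / (r1 * r2).
N, r1, r2 = 10**9, 1_000, 100
print(N // r1, N // r2, N // (r1 * r2))   # 1000000  10000000  10000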