On the Sample Complexity of Privately Learning Axis-Aligned Rectangles
We revisit the fundamental problem of learning Axis-Aligned Rectangles over a finite grid $X^d \subseteq \mathbb{R}^d$ with differential privacy. Existing constructions either require sample complexity that grows linearly with $\log|X|$, or else it grows super-linearly with the dimension $d$. We present a novel algorithm that reduces the sample complexity to only $\tilde{O}\left(d \cdot \left(\log^* |X|\right)^{1.5}\right)$, attaining an optimal dependency on the dimension without requiring the sample complexity to grow with $\log|X|$. The technique used to attain this improvement involves deleting "exposed" data points on the fly, in a fashion designed to avoid the cost of the adaptive composition theorems. The core of this technique may be of independent interest, as it introduces a new method for constructing statistically efficient private algorithms.
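For context, below is a minimal sketch of the standard non-private learner for this class, which simply returns the tightest axis-aligned rectangle containing the positively labeled examples. This is not the paper's algorithm; the function names (`fit_rectangle`, `predict`) are illustrative, and the sketch is only meant to make the learning problem concrete.

```python
import numpy as np

def fit_rectangle(points, labels):
    """Non-private ERM for axis-aligned rectangles: return the
    tightest rectangle containing all positively labeled points,
    as a (lower corner, upper corner) pair."""
    pos = points[labels == 1]
    if len(pos) == 0:
        return None  # no positives: the empty hypothesis
    return pos.min(axis=0), pos.max(axis=0)

def predict(rect, x):
    """Label 1 iff x lies inside the rectangle."""
    if rect is None:
        return 0
    lo, hi = rect
    return int(np.all(lo <= x) and np.all(x <= hi))

# Usage example on a small 2-dimensional grid.
X = np.array([[1, 2], [3, 4], [0, 5], [2, 3]])
y = np.array([1, 1, 0, 1])
rect = fit_rectangle(X, y)              # lower = (1, 2), upper = (3, 4)
print(predict(rect, np.array([2, 3])))  # 1
```

The difficulty the paper addresses arises when privatizing such a learner: each of the $2d$ rectangle boundaries is, in effect, a private threshold estimation problem, and running them under adaptive composition inflates the sample complexity by roughly a $\sqrt{d}$ factor. The abstract's "deletion of exposed data points" technique is designed precisely to sidestep that composition cost.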