
Near-Linear Time Algorithm for the Chamfer Distance

Neural Information Processing Systems

For any two point sets $A,B \subset \mathbb{R}^d$ of size up to $n$, the Chamfer distance from $A$ to $B$ is defined as $\texttt{CH}(A,B)=\sum_{a \in A} \min_{b \in B} d_X(a,b)$, where $d_X$ is the underlying distance measure (e.g., the Euclidean or Manhattan distance). The Chamfer distance is a popular measure of dissimilarity between point clouds, used in many machine learning, computer vision, and graphics applications, and admits a straightforward $O(d n^2)$-time brute-force algorithm. Further, the Chamfer distance is often used as a proxy for the more computationally demanding Earth-Mover (Optimal Transport) Distance. However, the \emph{quadratic} dependence on $n$ in the running time makes the naive approach intractable for large datasets. We overcome this bottleneck and present the first $(1+\epsilon)$-approximate algorithm for estimating the Chamfer distance with a near-linear running time. Specifically, our algorithm runs in time $O(nd \log (n)/\epsilon^2)$ and is simple to implement. Our experiments demonstrate that it is both accurate and fast on large high-dimensional datasets. We believe that our algorithm will open new avenues for analyzing large high-dimensional point clouds. We also give evidence that if the goal is to report a $(1+\epsilon)$-approximate mapping from $A$ to $B$ (as opposed to just its value), then any sub-quadratic time algorithm is unlikely to exist.
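As a concrete baseline, the quadratic brute-force computation described above can be sketched as follows. This is a minimal illustration of the definition only, not the paper's near-linear algorithm; function and variable names are ours:

```python
import math

def chamfer_distance(A, B):
    """Brute-force Chamfer distance CH(A, B) = sum over a in A of min over b in B of d(a, b).

    Uses the Euclidean distance as d_X. Runs in O(d * |A| * |B|) time,
    i.e., the O(d n^2) baseline that the paper's algorithm improves on.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # For each point of A, find its nearest neighbor in B and sum the distances.
    return sum(min(dist(a, b) for b in B) for a in A)

# Note the asymmetry: CH(A, B) != CH(B, A) in general.
print(chamfer_distance([(0.0, 0.0)], [(3.0, 4.0), (0.0, 1.0)]))  # 1.0
print(chamfer_distance([(3.0, 4.0), (0.0, 1.0)], [(0.0, 0.0)]))  # 6.0
```

The two printed values illustrate that the Chamfer distance is directed: swapping the roles of $A$ and $B$ changes the result.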


Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization

Neural Information Processing Systems

We study differentially private (DP) algorithms for smooth stochastic minimax optimization, with stochastic minimization as a byproduct. The holy grail of these settings is to guarantee the optimal trade-off between the privacy and the excess population loss, using an algorithm with a linear time-complexity in the number of training samples. We provide a general framework for solving differentially private stochastic minimax optimization (DP-SMO) problems, which enables practitioners to bring their own base optimization algorithm and use it as a black box to obtain the near-optimal privacy-loss trade-off. Our framework is inspired by the recently proposed Phased-ERM method [22] for nonsmooth differentially private stochastic convex optimization (DP-SCO), which exploits the stability of empirical risk minimization (ERM) for the privacy guarantee. The flexibility of our approach enables us to sidestep the requirement that the base algorithm have bounded sensitivity, and allows the use of sophisticated variance-reduced accelerated methods to achieve near-linear time-complexity.
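For intuition, the classical bounded-sensitivity route that this framework sidesteps is output perturbation: release an ERM solution after adding Gaussian noise calibrated to its sensitivity. The sketch below is a standard textbook construction, not the paper's method; the function name and the noise calibration are illustrative:

```python
import math
import random

def privatize_erm_solution(theta, sensitivity, epsilon, delta):
    """Gaussian-mechanism output perturbation (a standard construction,
    NOT the Phased-ERM framework of the paper).

    theta       : ERM solution as a list of floats
    sensitivity : bound on the L2 change of theta when one sample is swapped
    epsilon, delta : target (epsilon, delta)-DP parameters

    The noise scale below is the usual Gaussian-mechanism calibration.
    """
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return [t + random.gauss(0.0, sigma) for t in theta]
```

The key point of the abstract is precisely that such a sensitivity bound on the base algorithm is not required by their framework, which instead leverages the stability of ERM itself.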


A Differentially Private Stochastic Convex Optimization

In this section, we provide analyses of our near-linear time algorithms for DP-SCO with near-optimal …

Neural Information Processing Systems

A.1 Supporting Lemmas

In the phased algorithms for both convex minimization and convex-concave minimax problems, … By Definition 1 of $(\varepsilon, \delta)$-differential privacy, the proof is complete. … can be used, e.g., see the methods in [45, 20]. Here we only give a detailed proof of the regularized version. The proof of Lemma A.2 can be derived … It is worth mentioning that Lemma A.3 does not require the Lipschitzness … For the generalization error, we follow the standard results on stability and generalization. The proof of Theorem 3.3, which gives its guarantees, is provided below. We first prove the privacy guarantee. By Definition 2, Algorithm 1 is $(\varepsilon, \delta)$-DP when setting …
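The proofs above repeatedly invoke the definition of $(\varepsilon, \delta)$-differential privacy; for reference, the standard form (which the fragment's Definition 1 presumably matches) is:

```latex
A randomized mechanism $\mathcal{M}$ is $(\varepsilon,\delta)$-differentially
private if for all neighboring datasets $D, D'$ (differing in a single sample)
and every measurable set $S$ of outputs,
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta .
\]
```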

