A Appendix: Proofs and Algorithms. A.1 Proofs of results in Section 4. Proof of Proposition 4.1: Plug B …

Neural Information Processing Systems

(Bertsekas, 1999). Algorithm 1. Furthermore, we call $\hat f(\cdot)$ … We can show that $|f(\cdot) - \hat f(\cdot)|$ is uniformly bounded for all arguments in the stated interval. Besides, computing the upper bound claimed in Proposition 4.2 requires finding … The second equality follows from the fact that the objective function is affine with respect to its argument. Finally, we verify the remaining two components. This finishes the proof of our claim.






Dear reviewers

Neural Information Processing Systems

We do this by using a new Lyapunov-style analysis. The setting of our paper is strongly convex learning problems; we will make sure to clarify this in the final submission. We will also include the new experiments and the update on the improved theory in the final submission.
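
For context, the generic shape of a Lyapunov-style linear-rate argument for strongly convex problems looks as follows; the potential V_t and contraction factor rho here are illustrative placeholders, not the authors' exact quantities.

```latex
% Illustrative Lyapunov-style recursion (placeholder potential, not the paper's):
% a per-epoch contraction of a nonnegative potential V_t yields a linear rate.
\begin{align*}
  \mathbb{E}[V_{t+1}] \le (1-\rho)\,\mathbb{E}[V_t], \quad \rho \in (0,1]
  \quad\Longrightarrow\quad \mathbb{E}[V_T] \le (1-\rho)^T V_0 .
\end{align*}
```

In SARAH-type analyses the potential typically combines the optimality gap with the gradient-estimator error, for example $V_t = F(w_t) - F(w^\star) + c\,\lVert v_t - \nabla F(w_t)\rVert^2$ for some constant $c > 0$.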


Adjusted Shuffling SARAH: Advancing Complexity Analysis via Dynamic Gradient Weighting

Nguyen, Duc Toan, Tran, Trang H., Nguyen, Lam M.

arXiv.org Artificial Intelligence

In this paper, we propose Adjusted Shuffling SARAH, a novel algorithm that integrates shuffling techniques with the well-known variance-reduced algorithm SARAH while dynamically adjusting the stochastic gradient weights in each update to enhance exploration. Our method achieves the best-known gradient complexity for shuffling variance-reduction methods in the strongly convex setting. This result holds for any shuffling scheme, narrowing the gap in the complexity analysis of variance-reduction methods between uniform sampling and data shuffling. Furthermore, we introduce Inexact Adjusted Reshuffling SARAH, an inexact variant of Adjusted Shuffling SARAH that eliminates the need for full-batch gradient computations. This algorithm retains the same linear convergence rate as Adjusted Shuffling SARAH while offering an advantage in total complexity when the sample size is very large.
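
The abstract gives no pseudocode, so the following is a minimal Python sketch of a SARAH-style reshuffling loop with a dynamic weight on the stochastic gradient difference. The names grad_i and weight_fn, and the exact placement of the weight in the recursion, are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def adjusted_shuffling_sarah(grad_i, w0, n, epochs, lr, weight_fn, rng=None):
    """Sketch of a SARAH-style recursion under reshuffling.

    grad_i(w, i) -- gradient of the i-th component function at w.
    weight_fn(t) -- hypothetical dynamic weight gamma_t; the paper's
                    actual weighting schedule is not reproduced here.
    """
    rng = rng or np.random.default_rng()
    w_prev, w = w0.copy(), w0.copy()
    for _ in range(epochs):
        perm = rng.permutation(n)  # any shuffling scheme can be used here
        # Full-batch anchor gradient; the inexact variant replaces this
        # pass with an approximation to avoid the O(n) cost.
        v = sum(grad_i(w, i) for i in range(n)) / n
        w_prev, w = w, w - lr * v
        for t, i in enumerate(perm, start=1):
            # SARAH recursion with a dynamic weight on the gradient
            # difference (where the weight enters is an assumption).
            v = weight_fn(t) * (grad_i(w, i) - grad_i(w_prev, i)) + v
            w_prev, w = w, w - lr * v
    return w
```

With weight_fn returning 1 for every t, the inner loop reduces to the standard SARAH estimator; the dynamic weights are what distinguish the adjusted variant.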


Exact discovery is polynomial for sparse causal Bayesian networks

Rios, Felix L., Moffa, Giusi, Kuipers, Jack

arXiv.org Machine Learning

Causal Bayesian networks are widely used tools for summarising the dependencies between variables and elucidating their putative causal relationships. Learning networks from data is computationally hard in general. The current state-of-the-art approaches for exact causal discovery are integer linear programming over the underlying space of directed acyclic graphs, dynamic programming and shortest-path searches over the space of topological orders, and constraint programming combining both. For dynamic programming over orders, the computational complexity is known to be exponential with base 2 in the number of variables in the network. We demonstrate how to use properties of Bayesian networks to prune the search space and lower the computational cost, while still guaranteeing exact discovery. By adding new path-search and divide-and-conquer criteria, we prove optimality in quadratic time for matchings, and in polynomial time for any network class whose largest connected components are logarithmically bounded in size. In simulation studies we observe the polynomial dependence for sparse networks and find that, beyond some critical value, the logarithm of the base grows with the network density. Our approach then out-competes the state of the art at lower densities. These results therefore pave the way for faster exact causal discovery in larger and sparser networks.
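
To make the baseline concrete, here is a minimal Python sketch of the classic O(2^n) dynamic program over orders (in the style of Silander and Myllymäki) that such pruning improves upon; local_score and the exhaustive parent-set search are simplifying assumptions, and the paper's criteria restrict this search rather than running it in full.

```python
from itertools import combinations

def best_parent_score(v, candidates, local_score):
    """Best local score for variable v over all parent sets drawn from
    candidates; real solvers prune this exhaustive search heavily."""
    return max(
        local_score(v, frozenset(P))
        for k in range(len(candidates) + 1)
        for P in combinations(sorted(candidates), k)
    )

def exact_dp_over_orders(n, local_score):
    """O(2^n) dynamic programming over topological orders:
    best[S] is the optimal DAG score on the variable subset S,
    obtained by choosing the last variable (a sink) of S."""
    best = {frozenset(): 0.0}
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            S = frozenset(subset)
            best[S] = max(
                best[S - {v}] + best_parent_score(v, S - {v}, local_score)
                for v in S
            )
    return best[frozenset(range(n))]
```

One way to see the claimed polynomial bound: variables in different connected components can be scored independently, so the dynamic program only needs to run per component. If the largest component has O(log n) variables, each run costs 2^{O(log n)}, which is polynomial in n.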