Ryzhakov, Gleb
Black-Box Approximation and Optimization with Hierarchical Tucker Decomposition
Ryzhakov, Gleb, Chertkov, Andrei, Basharin, Artem, Oseledets, Ivan
We develop a new method HTBB for the multidimensional black-box approximation and gradient-free optimization, which is based on the low-rank hierarchical Tucker decomposition with the use of the MaxVol indices selection procedure. Numerical experiments for 14 complex model problems demonstrate the robustness of the proposed method for dimensions up to 1000, while it shows significantly more accurate results than classical gradient-free optimization methods, as well as approximation and optimization methods based on the popular tensor train decomposition, which represents a simpler case of a tensor network.
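For intuition, the MaxVol index selection mentioned in the abstract can be sketched as a greedy search for a square submatrix of (locally) maximal volume inside a tall matrix. The snippet below is a minimal NumPy/SciPy illustration of the classical maxvol idea in a simple recompute-based variant; the function name, tolerance, and initialization are assumptions for the sketch, and this is not the HTBB implementation from the paper.

```python
import numpy as np
from scipy.linalg import qr

def maxvol(A, tol=1.05, max_iters=100):
    """Greedily select r row indices of an (n x r) matrix A (n >= r)
    whose r x r submatrix has locally maximal volume (|det|)."""
    n, r = A.shape
    # Initial guess: dominant rows of A from a column-pivoted QR of A.T.
    _, _, piv = qr(A.T, pivoting=True)
    rows = list(piv[:r])
    for _ in range(max_iters):
        # Coefficients of all rows of A in the basis of the selected rows.
        B = A @ np.linalg.inv(A[rows])
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= tol:
            break  # no remaining swap increases the volume enough
        # Replacing the j-th selected row with row i multiplies |det| by |B[i, j]|.
        rows[j] = i
    return np.array(rows)

# Usage: pick the 5 "most representative" rows of a random 100 x 5 matrix.
A = np.random.randn(100, 5)
print(maxvol(A))
```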
Smart Flow Matching: On The Theory of Flow Matching Algorithms with Applications
Ryzhakov, Gleb, Pavlova, Svetlana, Sevriugov, Egor, Oseledets, Ivan
The paper presents the exact formula for the vector field that minimizes the standard flow matching loss. This formula depends analytically on the given distribution \rho_0 and the unknown distribution \rho_1. Based on the presented formula, a new loss and training algorithm for a vector field model in the style of Conditional Flow Matching are provided. Compared to the standard Conditional Flow Matching approach, our loss exhibits smaller variance when estimated via Monte Carlo sampling. Numerical experiments on synthetic models and on high-dimensional tabular data demonstrate better learning results with the presented algorithm.
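For context, the standard Conditional Flow Matching objective that the paper compares against can be written as below. This is the generic formulation from the flow matching literature, with assumed notation p_t(\cdot \mid x_1) for an interpolating path and u_t(\cdot \mid x_1) for its conditional velocity; the paper's specific closed-form expression for the minimizer in terms of \rho_0 and \rho_1 is not reproduced here.

```latex
% Generic Conditional Flow Matching loss for a learned vector field v_\theta:
\mathcal{L}_{\mathrm{CFM}}(\theta) =
  \mathbb{E}_{\,t \sim \mathcal{U}[0,1],\; x_1 \sim \rho_1,\; x_t \sim p_t(\cdot \mid x_1)}
  \bigl\| v_\theta(t, x_t) - u_t(x_t \mid x_1) \bigr\|^2 .

% The minimizer of the corresponding (marginal) flow matching loss is the
% conditional expectation of the conditional velocity given the current state:
u_t^{\ast}(x) = \mathbb{E}\bigl[\, u_t(x_t \mid x_1) \bigm| x_t = x \bigr].
```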