On the Approximation of Bi-Lipschitz Maps by Invertible Neural Networks

Jin, Bangti, Zhou, Zehui, Zou, Jun

arXiv.org Artificial Intelligence 

Invertible neural networks (INNs) are a class of neural network (NN) architectures that are invertible by design, via special invertible layers called flow layers. INNs often enjoy tractable numerical algorithms to compute the inverse map and Jacobian determinant, e.g., with explicit inversion formulas. These distinct features have made them very attractive for a variety of machine learning tasks, e.g., generative modeling [16, 31, 29], probabilistic modeling [38, 17, 23, 6], solving inverse problems [2, 1, 3], modeling nonlinear dynamics [9], and point cloud generation [44]. There are several different classes of INNs, including invertible residual networks (iResNets) [7, 43], neural ordinary differential equations (NODEs) [11, 13, 18], and coupling-based neural networks [16, 17, 25, 31, 2]. For iResNets, Behrmann et al. [7] leveraged the viewpoint of ResNets as an Euler discretization of ODEs and proved that the standard ResNet architecture can be made invertible by adding a simple normalization step to control the Lipschitz constant of the NN during training. The inverse is not available in closed form but can be obtained through a fixed-point iteration. Chen et al. [13] proposed using black-box ODE solvers as a model component and developed a new class of models, NODEs, for time-series modeling, supervised learning, density estimation, and other tasks. A NODE indirectly models an invertible function by transforming an input vector through an ordinary differential equation (ODE). Dupont and Doucet [18] introduced a class of more expressive and empirically stable models, augmented neural ODEs (ANODEs), which have a lower computational cost.
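The iResNet mechanism described above can be sketched in a few lines: if a residual layer has the form y = x + g(x) with the Lipschitz constant of g strictly below 1 (enforced here by a simple spectral rescaling of the weight matrix), then the layer is invertible, and the inverse is recovered by the fixed-point iteration x_{k+1} = y - g(x_k), which converges by the Banach fixed-point theorem. This is a minimal NumPy illustration of the idea, not the normalization scheme of [7]; all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_contractive_weight(dim, target_lip=0.9):
    """Rescale a random weight matrix so its spectral norm equals target_lip < 1."""
    W = rng.standard_normal((dim, dim))
    W *= target_lip / np.linalg.norm(W, 2)  # divide by the largest singular value
    return W

dim = 4
W = make_contractive_weight(dim)

def g(x):
    # Residual branch: a 1-Lipschitz nonlinearity (tanh) composed with a
    # contractive linear map, so Lip(g) <= 0.9 < 1.
    return np.tanh(W @ x)

def forward(x):
    # Invertible residual layer y = x + g(x).
    return x + g(x)

def inverse(y, n_iter=200):
    # Fixed-point iteration x_{k+1} = y - g(x_k); the error contracts by a
    # factor Lip(g) per step, so it converges geometrically.
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.standard_normal(dim)
y = forward(x)
x_rec = inverse(y)
err = np.max(np.abs(x - x_rec))
```

With 200 iterations the reconstruction error is driven below 1e-6, illustrating why the inverse, though not in closed form, is cheap to compute in practice.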
