l-cnn
NSD-DIL: Null-Shot Deblurring Using Deep Identity Learning
S, Sree Rama Vamsidhar, Gorthi, Rama Krishna
In this paper, we propose to reformulate the blind image deblurring task as directly learning an inverse of the degradation model using a deep linear network. We introduce Deep Identity Learning (DIL), a novel learning strategy that includes a dedicated regularization term based on the properties of linear systems, to exploit the identity relation between the degradation and inverse-degradation models. The salient aspect of our proposed framework is that it relies neither on a deblurring dataset nor on a single input blurred image (as Polyblur, a self-supervised method, does). Since it is purely image-data-independent, we term our model Null-Shot Deblurring using Deep Identity Learning (NSD-DIL). We also provide an explicit matrix representation of the learned deep linear network, called the Deep Restoration Kernel (DRK), for the deblurring task. By introducing our Random Kernel Gallery (RKG) dataset, the proposed framework bypasses the degradation-kernel estimation step involved in most existing blind deblurring solutions. In this work, we focus on restoring mildly blurred images, produced by small out-of-focus blur, lens blur, or slight camera motion, which often occur in real images. Our experiments show that the proposed method outperforms both traditional and deep-learning-based deblurring methods while using at least 100x fewer computational resources. The proposed NSD-DIL method can be effortlessly extended to the Image Super-Resolution (ISR) task as well, to restore low-resolution images with fine details. The NSD-DIL model and its kernel-form representation (DRK) are lightweight yet robust and restore a mildly blurred input in a fraction of a second, making them well suited for a wide range of real-time applications.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Greece (0.04)
- Asia > India (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
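The training idea in DIL can be sketched without any image data: draw small blur kernels from a random gallery and fit a single restoration kernel so that blur followed by restoration approximates the identity map (a centred delta kernel). The NumPy sketch below is a minimal illustration under our own simplifying assumptions — isotropic Gaussian kernels as a stand-in for the RKG, plain SGD, and a single effective kernel (the DRK) rather than a deep linear network; all function names are ours, not the paper's.

```python
import numpy as np

def conv2d_full(a, b):
    """Linear (full) 2-D convolution via FFT."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(a, s) * np.fft.fft2(b, s)))

def random_blur_kernel(size=5, sigma=None, rng=None):
    """Small isotropic Gaussian blur with (optionally random) width,
    a stand-in for sampling the Random Kernel Gallery."""
    if rng is None:
        rng = np.random.default_rng()
    if sigma is None:
        sigma = rng.uniform(0.3, 1.0)
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def learn_drk(drk_size=9, kernel_size=5, steps=600, lr=0.5, seed=0):
    """Fit a restoration kernel K so that k * K approximates a centred delta
    (the identity relation) for blur kernels k drawn from the gallery."""
    rng = np.random.default_rng(seed)
    K = np.zeros((drk_size, drk_size))
    K[drk_size // 2, drk_size // 2] = 1.0        # initialise at the identity
    out = kernel_size + drk_size - 1
    delta = np.zeros((out, out))
    delta[out // 2, out // 2] = 1.0
    for _ in range(steps):
        k = random_blur_kernel(kernel_size, rng=rng)
        r = conv2d_full(k, K) - delta            # identity-relation residual
        # gradient of ||k * K - delta||^2 wrt K: valid correlation of r with k
        g = conv2d_full(r, k[::-1, ::-1])[
            kernel_size - 1: kernel_size - 1 + drk_size,
            kernel_size - 1: kernel_size - 1 + drk_size]
        K -= lr * g
    return K
```

Once trained, the learned kernel deblurs any mildly blurred image with a single convolution, which is where the claimed speed advantage comes from.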
Fully Differentiable Lagrangian Convolutional Neural Network for Continuity-Consistent Physics-Informed Precipitation Nowcasting
Pavlík, Peter, Výboh, Martin, Ezzeddine, Anna Bou, Rozinajová, Viera
This paper presents a convolutional neural network model for precipitation nowcasting that combines data-driven learning with physics-informed domain knowledge. We propose LUPIN, a Lagrangian Double U-Net for Physics-Informed Nowcasting, that draws from existing extrapolation-based nowcasting methods and implements the Lagrangian coordinate system transformation of the data in a fully differentiable and GPU-accelerated manner to allow for real-time end-to-end training and inference. Based on our evaluation, LUPIN matches and exceeds the performance of the chosen benchmark, opening the door for other Lagrangian machine learning models.
- Europe > Czechia > South Moravian Region > Brno (0.04)
- Europe > Switzerland (0.04)
- Europe > Slovakia > Bratislava > Bratislava (0.04)
- Asia > Middle East > Jordan (0.04)
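The building block that LUPIN makes differentiable, the Lagrangian coordinate transformation, is at heart a backward semi-Lagrangian warp: each output pixel pulls its value from upstream along a motion field, with bilinear interpolation keeping the operation smooth (and hence differentiable in an autodiff framework). Below is a minimal NumPy sketch of one such advection step, with our own function names; the paper's actual implementation is GPU-accelerated and embedded end-to-end in the network.

```python
import numpy as np

def semi_lagrangian_step(field, u, v):
    """One backward semi-Lagrangian advection step: each output pixel pulls
    its value from where the motion field (u, v) says it came from.
    Bilinear interpolation keeps the operation differentiable."""
    h, w = field.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Trace each pixel back along the motion field, clamped to the grid.
    src_y = np.clip(ys - v, 0, h - 1)
    src_x = np.clip(xs - u, 0, w - 1)
    y0 = np.floor(src_y).astype(int)
    x0 = np.floor(src_x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = src_y - y0
    wx = src_x - x0
    # Bilinear blend of the four neighbouring source pixels.
    top = field[y0, x0] * (1 - wx) + field[y0, x1] * wx
    bot = field[y1, x0] * (1 - wx) + field[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a constant motion field of one pixel per step in x, an impulse in the field is simply shifted one pixel to the right per application.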
Fixed point actions from convolutional neural networks
Holland, Kieran, Ipp, Andreas, Müller, David I., Wenger, Urs
Lattice gauge-equivariant convolutional neural networks (L-CNNs) can be used to form arbitrarily shaped Wilson loops and can approximate any gauge-covariant or gauge-invariant function on the lattice. Here we use L-CNNs to describe fixed point (FP) actions which are based on renormalization group transformations. FP actions are classically perfect, i.e., they have no lattice artifacts on classical gauge-field configurations satisfying the equations of motion, and therefore possess scale invariant instanton solutions. FP actions are tree-level Symanzik-improved to all orders in the lattice spacing and can produce physical predictions with very small lattice artifacts even on coarse lattices. We find that L-CNNs are much more accurate at parametrizing the FP action compared to older approaches. They may therefore provide a way to circumvent critical slowing down and topological freezing towards the continuum limit.
- Europe > Austria > Vienna (0.14)
- North America > United States > California > San Joaquin County > Stockton (0.04)
- Europe > Switzerland > Bern > Bern (0.04)
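The gauge structure these networks preserve can be checked in a few lines: the trace of a plaquette (the smallest Wilson loop) is invariant under local gauge transformations, which is exactly the kind of quantity L-CNN layers are built to form. The toy code below, with our own naming, verifies this for SU(2) links on a small 2-D periodic lattice; real L-CNNs work with SU($N$) links and learned layers.

```python
import numpy as np

def random_su2(rng):
    """SU(2) matrix from a random unit quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

def plaquette_action(U, L):
    """Sum of Re tr over the elementary plaquettes on an L x L periodic
    lattice. U[mu, x, y] is the link leaving site (x, y) in direction mu."""
    s = 0.0
    for x in range(L):
        for y in range(L):
            xp, yp = (x + 1) % L, (y + 1) % L
            P = (U[0, x, y] @ U[1, xp, y]
                 @ U[0, x, yp].conj().T @ U[1, x, y].conj().T)
            s += np.real(np.trace(P))
    return s

def gauge_transform(U, Omega, L):
    """Local gauge transformation: U_mu(x) -> Omega(x) U_mu(x) Omega(x+mu)^dag."""
    V = np.empty_like(U)
    for x in range(L):
        for y in range(L):
            V[0, x, y] = Omega[x, y] @ U[0, x, y] @ Omega[(x + 1) % L, y].conj().T
            V[1, x, y] = Omega[x, y] @ U[1, x, y] @ Omega[x, (y + 1) % L].conj().T
    return V
```

A plain CNN acting on the link entries directly would not leave such quantities unchanged under `gauge_transform`, which is the motivation for building equivariance into the layers.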
Forward and Inverse Approximation Theory for Linear Temporal Convolutional Networks
We present a theoretical analysis of the approximation properties of convolutional architectures when applied to the modeling of temporal sequences. Specifically, we prove an approximation rate estimate (Jackson-type result) and an inverse approximation theorem (Bernstein-type result), which together provide a comprehensive characterization of the types of sequential relationships that can be efficiently captured by a temporal convolutional architecture. The rate estimate improves upon a previous result via the introduction of a refined complexity measure, whereas the inverse approximation theorem is new.
- Asia > Singapore (0.04)
- North America > United States > New York (0.04)
Geometrical aspects of lattice gauge equivariant convolutional neural networks
Aronsson, Jimmy, Müller, David I., Schuh, Daniel
Lattice gauge equivariant convolutional neural networks (L-CNNs) are a framework for convolutional neural networks that can be applied to non-Abelian lattice gauge theories without violating gauge symmetry. We demonstrate how L-CNNs can be equipped with global group equivariance. This allows us to extend the formulation to be equivariant not just under translations but under global lattice symmetries such as rotations and reflections. Additionally, we provide a geometric formulation of L-CNNs and show how convolutions in L-CNNs arise as a special case of gauge equivariant neural networks on SU($N$) principal bundles.
- Europe > Austria > Vienna (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > Sweden > Vaestra Goetaland > Gothenburg (0.04)
- Research Report (0.50)
- Overview (0.46)
Applications of Lattice Gauge Equivariant Neural Networks
Favoni, Matteo, Ipp, Andreas, Müller, David I.
The introduction of relevant physical information into neural network architectures has become a widely used and successful strategy for improving their performance. In lattice gauge theories, such information can be identified with gauge symmetries, which are incorporated into the network layers of our recently proposed Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs). L-CNNs can generalize better to differently sized lattices than traditional neural networks and are by construction equivariant under lattice gauge transformations. In these proceedings, we present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow. Our methods are based on neural ordinary differential equations which allow us to modify link configurations in a gauge equivariant manner. For simplicity, we focus on simple toy models to test these ideas in practice.
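The neural-ODE idea described here rests on updating links by left-multiplying with group elements, so that configurations never leave the gauge group during the flow. A toy SU(2) version of one integration step is sketched below, using the closed-form su(2) exponential; this is our own minimal code, not the authors' implementation.

```python
import numpy as np

# Pauli matrices, the generators of su(2).
PAULI = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def su2_exp(c):
    """exp(i c . sigma) in closed form (cos/sin), so the result lies exactly
    in SU(2) up to rounding and no matrix-exponential library is needed."""
    theta = np.linalg.norm(c)
    if theta < 1e-12:
        return np.eye(2, dtype=complex)
    n = c / theta
    return (np.cos(theta) * np.eye(2)
            + 1j * np.sin(theta) * np.einsum("k,kab->ab", n, PAULI))

def flow_step(U, c, eps):
    """One explicit integration step of dU/dt = i H U with H = c . sigma.
    Left-multiplying by a group element keeps U on the SU(2) manifold,
    which is what makes the update compatible with gauge covariance."""
    return su2_exp(eps * c) @ U
```

Repeating such steps drives a flow on the link configuration while preserving unitarity and unit determinant, in contrast to an unconstrained additive update U + eps * dU.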
Preserving gauge invariance in neural networks
Favoni, Matteo, Ipp, Andreas, Müller, David I., Schuh, Daniel
In these proceedings we present lattice gauge equivariant convolutional neural networks (L-CNNs) which are able to process data from lattice gauge theory simulations while exactly preserving gauge symmetry. We review aspects of the architecture and show how L-CNNs can represent a large class of gauge invariant and equivariant functions on the lattice. We compare the performance of L-CNNs and non-equivariant networks using a non-linear regression problem and demonstrate how gauge invariance is broken for non-equivariant models.
- Overview (0.48)
- Research Report (0.40)
Lattice gauge symmetry in neural networks
Favoni, Matteo, Ipp, Andreas, Müller, David I., Schuh, Daniel
The concept of symmetry or equivariance under symmetry transformations is at the theoretical foundation of modern physics, and it is hard to overstate its importance. Noether's first theorem establishes a clear relationship between invariance of Lagrangians under continuous global symmetries and the existence of conserved quantities and conserved currents [1]. Global symmetries, as the name implies, are transformations that are applied the same way at every point in spacetime. In mechanical systems and field theories, energy and momentum conservation laws follow from invariance under spacetime translations, whereas rotational invariance implies the conservation of angular momentum. More generally, global symmetry under the Poincaré group, which includes translations, rotations and boosts, is the foundation of special relativity.
- North America > United States > Massachusetts (0.04)
- Europe > Germany > Lower Saxony > Gottingen (0.04)
- Europe > Austria (0.04)
Lattice gauge equivariant convolutional neural networks
Favoni, Matteo, Ipp, Andreas, Müller, David I., Schuh, Daniel
Institute for Theoretical Physics, TU Wien, Austria (Dated: December 25, 2020)
We propose Lattice gauge equivariant Convolutional Neural Networks (L-CNNs) for generic machine learning applications on lattice gauge theoretical problems. At the heart of this network structure is a novel convolutional layer that preserves gauge equivariance while forming arbitrarily shaped Wilson loops in successive bilinear layers. We demonstrate that L-CNNs can learn and generalize gauge invariant quantities that traditional convolutional neural networks are incapable of finding.