Wahlström, Niklas
Physics-informed neural networks with unknown measurement noise
Pilar, Philipp, Wahlström, Niklas
Physics-informed neural networks (PINNs) constitute a flexible approach to both finding solutions and identifying parameters of partial differential equations. Most works on the topic assume noiseless data, or data contaminated with weak Gaussian noise. We show that the standard PINN framework breaks down in the case of non-Gaussian noise. We offer a way of resolving this fundamental issue by proposing to jointly train an energy-based model (EBM) to learn the correct noise distribution. We illustrate the improved performance of our approach using multiple examples.
Probabilistic matching of real and generated data statistics in generative adversarial networks
Pilar, Philipp, Wahlström, Niklas
Generative adversarial networks constitute a powerful approach to generative modeling. While generated samples are often indistinguishable from real data, there is no guarantee that they will follow the true data distribution. In this work, we propose a method to ensure that the distributions of certain generated data statistics coincide with the respective distributions of the real data. In order to achieve this, we add a Kullback-Leibler term to the generator loss function: the KL divergence is taken between the true distributions as represented by a conditional energy-based model, and the corresponding generated distributions obtained from minibatch values at each iteration. We evaluate the method on a synthetic dataset and two real-world datasets and demonstrate improved performance of our method.
Invertible Kernel PCA with Random Fourier Features
Gedon, Daniel, Ribeiro, Antônio H., Wahlström, Niklas, Schön, Thomas B.
Kernel principal component analysis (kPCA) is a widely studied method to construct a low-dimensional data representation after a nonlinear transformation. The prevailing method to reconstruct the original input signal from kPCA -- an important task for denoising -- requires us to solve a supervised learning problem. In this paper, we present an alternative method where the reconstruction follows naturally from the compression step. We first approximate the kernel with random Fourier features. Then, we exploit the fact that the nonlinear transformation is invertible in a certain subdomain. Hence, the name \emph{invertible kernel PCA (ikPCA)}. We experiment with different data modalities and show that ikPCA performs similarly to kPCA with supervised reconstruction on denoising tasks, making it a strong alternative.
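The compress-then-invert idea can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: the data sizes, the RBF-style feature map, and the least-squares inversion step are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup; sizes are illustrative assumptions.
n, d, D = 200, 2, 50
x = rng.normal(size=(n, d))

# Random Fourier features approximating a Gaussian (RBF) kernel.
W = rng.normal(size=(D, d))          # random frequencies
b = rng.uniform(0, 2 * np.pi, D)     # random phases

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W.T + b)

# Compression step: linear PCA in the random feature space.
Z = phi(x)
Zc = Z - Z.mean(0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
k = 10
codes = Zc @ Vt[:k].T                # low-dimensional representation

# Reconstruction follows from the compression step: map codes back to
# features, invert the cosine (one-to-one only on the subdomain where
# W x + b lies in [0, pi]), and solve a linear least-squares problem.
Z_hat = codes @ Vt[:k] + Z.mean(0)
angles = np.arccos(np.clip(Z_hat / np.sqrt(2.0 / D), -1.0, 1.0))
x_hat, *_ = np.linalg.lstsq(W, (angles - b[None, :]).T, rcond=None)
x_hat = x_hat.T                      # reconstructed inputs, shape (n, d)
```

The `arccos` step is where invertibility enters: outside the stated subdomain the cosine is not one-to-one, which is exactly the restriction the abstract refers to.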
Incorporating Sum Constraints into Multitask Gaussian Processes
Pilar, Philipp, Jidling, Carl, Schön, Thomas B., Wahlström, Niklas
Machine learning models can be improved by adapting them to respect existing background knowledge. In this paper we consider multitask Gaussian processes, with background knowledge in the form of constraints that require a specific sum of the outputs to be constant. This is achieved by conditioning the prior distribution on the constraint fulfillment. The approach allows for both linear and nonlinear constraints. We demonstrate that the constraints are fulfilled with high precision and that the construction can improve the overall prediction accuracy as compared to the standard Gaussian process.
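The conditioning step can be illustrated on a toy two-output prior. The kernel, lengthscale, and the specific constraint f1(x) + f2(x) = 1 below are assumptions for illustration, not the paper's general construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two outputs with independent GP priors on a shared input grid.
x = np.linspace(0, 1, 25)
n = len(x)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2 ** 2)  # RBF kernel
S = np.block([[K, np.zeros((n, n))], [np.zeros((n, n)), K]])

# Sum constraint A f = c with A = [I, I]: the outputs sum to one.
A = np.hstack([np.eye(n), np.eye(n)])
c = np.ones(n)

# Condition the joint Gaussian prior on the (noise-free) constraint:
#   f | Af = c  ~  N(S A^T (A S A^T)^{-1} c,  S - S A^T (A S A^T)^{-1} A S)
SA = S @ A.T
M = A @ SA + 1e-9 * np.eye(n)        # small jitter for numerical stability
mean = SA @ np.linalg.solve(M, c)
cov = S - SA @ np.linalg.solve(M, SA.T)

# Every draw from the conditioned prior fulfils the constraint.
f = rng.multivariate_normal(mean, cov)
f1, f2 = f[:n], f[n:]
```

The same conditioning formula applies whenever the constraint is linear in the outputs; nonlinear constraints require the additional machinery described in the paper.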
Deep Convolutional Networks in System Identification
Andersson, Carl, Ribeiro, Antônio H., Tiels, Koen, Wahlström, Niklas, Schön, Thomas B.
Recent developments within deep learning are relevant for nonlinear system identification problems. In this paper, we establish connections between the deep learning and the system identification communities. It has recently been shown that convolutional architectures are at least as capable as recurrent architectures when it comes to sequence modeling tasks. Inspired by these results we explore the explicit relationships between the recently proposed temporal convolutional network (TCN) and two classic system identification model structures: Volterra series and block-oriented models. We end the paper with an experimental study where we provide results on two real-world problems, the well-known Silverbox dataset and a newer dataset originating from ground vibration experiments on an F-16 fighter aircraft.
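The core TCN operation is a causal, dilated one-dimensional convolution; with a single layer and a linear activation it reduces to an FIR filter, the first-order term of a Volterra series. A minimal sketch (not tied to any particular library; the filter coefficients are arbitrary):

```python
import numpy as np

def causal_dilated_conv(u, w, dilation=1):
    """y[t] = sum_k w[k] * u[t - k*dilation], with zero padding (causality)."""
    y = np.zeros(len(u))
    for k, wk in enumerate(w):
        shift = k * dilation
        if shift < len(u):
            y[shift:] += wk * u[:len(u) - shift]
    return y

u = np.arange(6, dtype=float)                       # input sequence
y = causal_dilated_conv(u, np.array([1.0, 0.5]), dilation=2)
# y[t] depends only on u[t] and u[t-2]: y = [0, 1, 2, 3.5, 5, 6.5]
```

Stacking such layers with nonlinear activations between them yields the block-oriented (Wiener/Hammerstein-like) structures the paper relates the TCN to.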
Probabilistic approach to limited-data computed tomography reconstruction
Purisha, Zenith, Jidling, Carl, Wahlström, Niklas, Särkkä, Simo, Schön, Thomas B.
We consider the problem of reconstructing the internal structure of an object from limited x-ray projections. In this work, we use a Gaussian process to model the target function. In contrast to other established methods, this comes with the advantage of not requiring any manual parameter tuning, which usually arises in classical regularization strategies. Gaussian processes are, however, well known to suffer from heavy computations due to the inversion of a covariance matrix, and in this work, by employing an approximate spectral-based technique, we reduce the computational complexity and avoid the need for numerical integration. Results from simulated and real data indicate that this approach is less sensitive to streak artifacts than the commonly used method of filtered back projection, an analytic reconstruction algorithm based on the Radon inversion formula.
Modeling and interpolation of the ambient magnetic field by Gaussian processes
Solin, Arno, Kok, Manon, Wahlström, Niklas, Schön, Thomas B., Särkkä, Simo
Anomalies in the ambient magnetic field can be used as features in indoor positioning and navigation. By using Maxwell's equations, we derive and present a Bayesian non-parametric probabilistic modeling approach for interpolation and extrapolation of the magnetic field. We model the magnetic field components jointly by imposing a Gaussian process (GP) prior on the latent scalar potential of the magnetic field. By rewriting the GP model in terms of a Hilbert space representation, we circumvent the computational pitfalls associated with GP modeling and provide a computationally efficient and physically justified modeling tool for the ambient magnetic field. The model allows for sequential updating of the estimate and time-dependent changes in the magnetic field. The model is shown to work well in practice in different applications: we demonstrate mapping of the magnetic field both with an inexpensive Raspberry Pi powered robot and on foot using a standard smartphone.
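The Hilbert space representation can be sketched in one dimension for a squared-exponential kernel. The paper applies the idea to the three-dimensional scalar potential; the kernel choice and all parameter values below are illustrative assumptions:

```python
import numpy as np

# Reduced-rank approximation of an RBF kernel on [-L, L] using the
# eigenfunctions of the Laplacian with Dirichlet boundary conditions.
ell, sigma2, L, m = 0.3, 1.0, 2.0, 64
x = np.linspace(-1.0, 1.0, 100)      # data well inside the domain

j = np.arange(1, m + 1)
omega = np.pi * j / (2.0 * L)                              # sqrt(eigenvalues)
Phi = np.sqrt(1.0 / L) * np.sin(omega * (x[:, None] + L))  # eigenfunctions
Sd = sigma2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * omega) ** 2)

# k(x, x') ~= sum_j S(omega_j) phi_j(x) phi_j(x'): a rank-m factorization,
# so GP regression costs O(n m^2) instead of O(n^3).
K_approx = (Phi * Sd) @ Phi.T
K_exact = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
```

Because the basis functions do not depend on the hyperparameters, only the spectral-density weights `Sd` change during learning, which is what makes sequential updating cheap.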
Data-Driven Impulse Response Regularization via Deep Learning
Andersson, Carl, Wahlström, Niklas, Schön, Thomas B.
We consider the problem of impulse response estimation for stable linear single-input single-output systems. It is a well-studied problem where flexible non-parametric models recently offered a leap in performance compared to the classical finite-dimensional model structures. Inspired by this development and the success of deep learning we propose a new flexible data-driven model. Our experiments indicate that the new model is capable of exploiting even more of the hidden patterns that are present in the input-output data as compared to the non-parametric models.
Linearly constrained Gaussian processes
Jidling, Carl, Wahlström, Niklas, Wills, Adrian, Schön, Thomas B.
We consider a modification of the covariance function in Gaussian processes to correctly account for known linear operator constraints. By modeling the target function as a transformation of an underlying function, the constraints are explicitly incorporated in the model such that they are guaranteed to be fulfilled by any sample drawn or prediction made. We also propose a constructive procedure for designing the transformation operator and illustrate the result on both simulated and real-data examples.
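The construction can be sketched for a simple algebraic constraint (the paper handles general linear operator constraints, e.g. differential operators; the concrete numbers here are assumptions): to enforce F f = f1 + 2 f2 = 0 at every input, pick a transformation G with F G = 0 and model f = G g for an underlying scalar GP g.

```python
import numpy as np

rng = np.random.default_rng(2)

# Constraint F f = f1 + 2 f2 = 0; G = (2, -1)^T satisfies F G = 0.
# Modeling f = G g induces Cov(f_i(x), f_j(y)) = G_i G_j k(x, y),
# so the constraint holds for every sample by construction.
x = np.linspace(0, 1, 30)
n = len(x)
k = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2 ** 2)  # kernel of g

G = np.array([2.0, -1.0])
K = np.kron(np.outer(G, G), k)       # joint covariance of (f1, f2), stacked

f = rng.multivariate_normal(np.zeros(2 * n), K + 1e-10 * np.eye(2 * n))
f1, f2 = f[:n], f[n:]
```

The interesting part of the paper is the converse direction: given F, constructing a suitable operator G, for which the authors propose a constructive procedure.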