Automatic discrete differentiation and its applications

arXiv.org Artificial Intelligence

In this paper, a method for automatically deriving energy-preserving numerical methods for the Euler-Lagrange equation and the Hamilton equation is proposed. The derived energy-preserving scheme is based on the discrete gradient method. In the proposed approach, the discrete gradient, which is a key tool for designing the scheme, is computed automatically by an algorithm similar to automatic differentiation. Moreover, the discrete gradient coincides with the usual gradient when its two arguments are the same. Hence the proposed method extends automatic differentiation in the sense that it derives not only the discrete gradient but also the usual gradient. Owing to this feature, both energy-preserving integrators and variational (and hence symplectic) integrators can be implemented in the same programming code simultaneously. This allows users to switch freely between the energy-preserving numerical method and the symplectic numerical method according to the problem setting and other requirements. As applications, an energy-preserving numerical scheme for a nonlinear wave equation and a training algorithm for artificial neural networks derived from an energy-dissipative numerical scheme are shown.
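To make the key property concrete, here is a minimal sketch of one classical discrete gradient (the Gonzalez midpoint construction, not necessarily the paper's own algorithm): it satisfies the identity discrete_gradient(H, x, x') . (x' - x) = H(x') - H(x) exactly, which is what makes discrete-gradient schemes energy-preserving, and it reduces to the ordinary gradient when the two arguments coincide. The helper names and the finite-difference gradient are illustrative assumptions.

```python
import numpy as np

def grad(H, x, eps=1e-6):
    # Central-difference gradient of a scalar function H at x
    # (a stand-in for the automatic differentiation the paper uses).
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (H(x + e) - H(x - e)) / (2 * eps)
    return g

def discrete_gradient(H, x, x_new, tol=1e-12):
    # Gonzalez (midpoint) discrete gradient. By construction it satisfies
    #   discrete_gradient(H, x, x') @ (x' - x) == H(x') - H(x),
    # and it falls back to the ordinary gradient when x' == x.
    d = x_new - x
    g_mid = grad(H, 0.5 * (x + x_new))
    nrm2 = d @ d
    if nrm2 < tol:  # coincident arguments: ordinary gradient
        return g_mid
    correction = (H(x_new) - H(x) - g_mid @ d) / nrm2
    return g_mid + correction * d

# Example energy: H(q) = 0.5*|q|^2 + 0.25*|q|^4 (illustrative choice).
H = lambda q: 0.5 * (q @ q) + 0.25 * (q @ q) ** 2
x, x_new = np.array([1.0, 0.0]), np.array([0.8, 0.3])
dg = discrete_gradient(H, x, x_new)
# Discrete-gradient identity holds to machine precision:
print(abs(dg @ (x_new - x) - (H(x_new) - H(x))))
```

A scheme such as x' = x - h * discrete_gradient(H, x, x') (solved implicitly for x') then dissipates or preserves H exactly by the identity above, which is the mechanism the abstract refers to.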


Rejoinder for "Probabilistic Integration: A Role in Statistical Computation?"

arXiv.org Machine Learning

This article is the rejoinder for the paper "Probabilistic Integration: A Role in Statistical Computation?", to appear in Statistical Science with discussion [Briol et al., 2015]. We would first like to thank the reviewers and the many colleagues who helped shape this paper, the editor for selecting our paper for discussion, and of course all of the discussants for their thoughtful, insightful and constructive comments. In this rejoinder, we respond to some of the points raised by the discussants and comment further on the fundamental question underlying the paper: should Bayesian ideas be used in numerical analysis? Numerical analysis is concerned with the approximation of typically high- or infinite-dimensional mathematical quantities using discretisations of the space on which these are defined. Different discretisation schemes lead to different numerical algorithms, whose stability and convergence properties need to be carefully assessed.


Seven Strategies for Optimizing Numerical Code

@machinelearnbot

Abstract: Python provides a powerful platform for working with data, but even the most straightforward data analysis can often be painfully slow. Used effectively, though, Python can be as fast as compiled languages like C. This talk presents an overview of how to approach optimization of numerical code in Python effectively, touching on tools like numpy, pandas, scipy, cython, numba, and more.
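The simplest of the strategies the talk covers, replacing Python-level loops with numpy vectorization, can be sketched as follows (a toy example, not taken from the talk itself):

```python
import numpy as np

def slow_norm(x):
    # Pure-Python loop: interpreter overhead on every element.
    total = 0.0
    for v in x:
        total += v * v
    return total ** 0.5

def fast_norm(x):
    # numpy pushes the loop into compiled C code, typically
    # orders of magnitude faster on large arrays.
    return float(np.sqrt(np.dot(x, x)))

x = np.linspace(0.0, 1.0, 10_000)
print(abs(slow_norm(x) - fast_norm(x)))  # same result up to rounding
```

Tools like cython and numba apply the same idea at the function level, compiling the loop itself instead of rewriting it in array operations.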


A Note on Numerical Modality in Large Datasets

#artificialintelligence

The mode is one of the basic statistics; it is defined as the most common value in an array. When the values of the array are categorical, the mode is easy to detect: simply select the value with the most occurrences. Identifying the modes of a numerical array is harder, since the values can be continuous, so counting occurrences by value is not enough; instead, the distribution of the values must be examined to identify the most probable ones. Moreover, a numerical array can be multi-modal, which turns the problem into finding local maxima of the distribution rather than the single global maximum that exists when only one mode is present. Building a histogram is one of the easiest ways to estimate the distribution of a numerical array.
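The histogram approach described above can be sketched as follows; the function name and the choice of bin count are illustrative assumptions, and real data would need smoothing or bin-width selection to suppress spurious local maxima.

```python
import numpy as np

def histogram_modes(x, bins=20):
    # Estimate the modes of a numerical array as the local maxima
    # of its histogram, returning the centers of the peak bins.
    counts, edges = np.histogram(x, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    modes = []
    for i in range(len(counts)):
        left = counts[i - 1] if i > 0 else -1
        right = counts[i + 1] if i < len(counts) - 1 else -1
        if counts[i] > left and counts[i] > right:
            modes.append(float(centers[i]))
    return modes

# Bimodal sample: two Gaussian bumps around 0 and 5.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.5, 1000), rng.normal(5, 0.5, 1000)])
print(histogram_modes(x, bins=30))  # modes should include values near 0 and near 5
```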


A Bulirsch-Stoer algorithm using Gaussian processes

arXiv.org Machine Learning

In this paper, we treat the problem of evaluating the asymptotic error in a numerical integration scheme as one with inherent uncertainty. Adding to the growing field of probabilistic numerics, we show that Gaussian process regression (GPR) can be embedded into a numerical integration scheme to allow for (i) robust selection of the adaptive step-size parameter, and (ii) uncertainty quantification in predictions of putatively converged numerical solutions. We present two examples of our approach using Richardson's extrapolation technique and the Bulirsch-Stoer algorithm. In scenarios where the error surface is smooth and bounded, our proposed approach can match the results of the traditional polynomial (parametric) extrapolation methods. In scenarios where the error surface is not well approximated by a finite-order polynomial, e.g. in the vicinity of a pole or in the assessment of a chaotic system, traditional methods can fail; the non-parametric GPR approach, however, demonstrates the potential to continue furnishing reasonable solutions in these situations.
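For context, the classical polynomial extrapolation that the paper's GPR approach generalises can be sketched as a Romberg-style Richardson table over the trapezoid rule (this is the standard textbook construction, not the paper's own code):

```python
import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n subintervals.
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def richardson(f, a, b, levels=4):
    # Richardson (Romberg) extrapolation: combine estimates at halved
    # step sizes to cancel successive terms of the polynomial error
    # expansion. The GPR approach replaces this finite-order polynomial
    # error model with a non-parametric one.
    T = [[trapezoid(f, a, b, 2 ** k) for k in range(levels)]]
    for j in range(1, levels):
        prev = T[-1]
        T.append([(4 ** j * prev[i + 1] - prev[i]) / (4 ** j - 1)
                  for i in range(len(prev) - 1)])
    return T[-1][0]

approx = richardson(np.sin, 0.0, np.pi, levels=5)
print(abs(approx - 2.0))  # integral of sin on [0, pi] is 2
```

When the error expansion really is polynomial in the step size, this table converges extremely quickly; near a pole or for a chaotic system that assumption breaks down, which is precisely the regime the paper targets with GPR.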