In this paper, a method for automatically deriving energy-preserving numerical methods for the Euler-Lagrange equation and the Hamilton equation is proposed. The derived energy-preserving schemes are based on the discrete gradient method. In the proposed approach, the discrete gradient, the key tool for designing such schemes, is computed automatically by an algorithm similar to automatic differentiation. Moreover, the discrete gradient coincides with the usual gradient when its two arguments are the same; hence the proposed method extends automatic differentiation in the sense that it derives not only the discrete gradient but also the usual gradient. Owing to this feature, both energy-preserving integrators and variational (and hence symplectic) integrators can be implemented in the same code simultaneously. This allows users to switch freely between the energy-preserving numerical method and the symplectic numerical method according to the problem setting and other requirements. As applications, an energy-preserving numerical scheme for a nonlinear wave equation and a training algorithm for artificial neural networks derived from an energy-dissipative numerical scheme are presented.
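The abstract above does not specify which discrete gradient is used, so the following is only a minimal sketch of the general idea, using the well-known Itoh-Abe (coordinate increment) discrete gradient rather than the paper's AD-based construction. A discrete gradient dg of H satisfies H(y) - H(x) = dg(x, y) . (y - x) and reduces to the usual gradient when y = x; plugging it into x' = x + h*S*dg(x, x') with skew-symmetric S gives an energy-preserving step. The fallback to a central difference when an increment vanishes, and the fixed-point solver, are implementation choices of this sketch, not the paper's method.

```python
import numpy as np

def itoh_abe_discrete_gradient(H, x, y, eps=1e-12):
    """Itoh-Abe (coordinate increment) discrete gradient of H between x and y.

    Satisfies H(y) - H(x) = dg . (y - x) exactly (by telescoping).
    Where a coordinate increment vanishes, fall back to a central
    difference (the AD-like rules of the paper would give the exact
    partial derivative instead).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dg = np.zeros(len(x))
    z = x.copy()
    for i in range(len(x)):
        z_next = z.copy()
        z_next[i] = y[i]
        if abs(y[i] - x[i]) > eps:
            dg[i] = (H(z_next) - H(z)) / (y[i] - x[i])
        else:
            step = 1e-6
            zp, zm = z.copy(), z.copy()
            zp[i] += step
            zm[i] -= step
            dg[i] = (H(zp) - H(zm)) / (2 * step)
        z = z_next
    return dg

# Energy-preserving step x' = x + h*S*dg(x, x') for the harmonic
# oscillator H(q, p) = (q^2 + p^2)/2, with the implicit equation
# solved by fixed-point iteration.
H = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # canonical symplectic matrix
x = np.array([1.0, 0.0])
h = 0.1
for _ in range(100):
    y = x.copy()
    for _ in range(50):  # fixed-point iteration for the implicit step
        y = x + h * S @ itoh_abe_discrete_gradient(H, x, y)
    x = y
print(H(x))  # stays at the initial energy 0.5 up to solver tolerance
```

Because dg . (y - x) = h * dg . (S dg) = 0 by skew-symmetry of S, the energy H is conserved exactly at the solution of the implicit equation, independently of the step size h.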
This article is the rejoinder for the paper "Probabilistic Integration: A Role in Statistical Computation?", to appear in Statistical Science with discussion [Briol et al., 2015]. We would first like to thank the reviewers and the many colleagues who helped shape this paper, the editor for selecting our paper for discussion, and of course all of the discussants for their thoughtful, insightful and constructive comments. In this rejoinder, we respond to some of the points raised by the discussants and comment further on the fundamental question underlying the paper: should Bayesian ideas be used in numerical analysis? Numerical analysis is concerned with the approximation of typically high- or infinite-dimensional mathematical quantities using discretisations of the space on which these are defined. Different discretisation schemes lead to different numerical algorithms, whose stability and convergence properties need to be carefully assessed.
We investigate the problem of mining numerical data with Formal Concept Analysis. The usual way is to apply a scaling procedure -- transforming numerical attributes into binary ones -- leading to a loss of either information or efficiency, in particular w.r.t. the volume of extracted patterns. By contrast, we propose to work directly on numerical data in a more precise and efficient way. For that purpose, the notions of closed patterns, generators and equivalence classes are revisited in the numerical context. Moreover, two original algorithms are proposed and tested in an evaluation involving real-world data, showing the quality of the present approach.
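The abstract does not spell out how closed patterns are defined on numerical data; one standard formalisation in the FCA literature is interval patterns, sketched below under that assumption (the dataset and function names are illustrative, not the paper's). A pattern is a vector of intervals, one per attribute; the closure of a pattern is the componentwise interval hull of all objects whose values fall inside it.

```python
def extent(data, d):
    """Objects whose attribute values all lie within the intervals of d."""
    return [g for g, row in data.items()
            if all(lo <= v <= hi for v, (lo, hi) in zip(row, d))]

def intent(data, objects):
    """Componentwise interval hull of the rows of the given objects."""
    rows = [data[g] for g in objects]
    return [(min(col), max(col)) for col in zip(*rows)]

def closure(data, d):
    """Smallest closed interval pattern containing d: intent(extent(d))."""
    return intent(data, extent(data, d))

# toy numerical context: 4 objects described by 2 numerical attributes
data = {"g1": [5, 7], "g2": [6, 8], "g3": [4, 8], "g4": [4, 9]}

# [(4,7),(7,9)] covers all four objects, but their hull is tighter on
# the first attribute, so the pattern is not closed:
print(closure(data, [(4, 7), (7, 9)]))  # [(4, 6), (7, 9)]
```

Two patterns are equivalent when they have the same extent; the closed pattern is the unique maximal representative of its equivalence class, which is what makes enumerating closed patterns (rather than all patterns) a way to control the volume of extracted patterns.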
Abstract: Python provides a powerful platform for working with data, but even straightforward data analysis can often be painfully slow. When used effectively, though, Python can rival even compiled languages like C. This talk presents an overview of how to approach optimization of numerical code in Python effectively, touching on tools like numpy, pandas, scipy, cython, numba, and more.
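As a minimal illustration of the kind of optimization the talk covers (the example itself is ours, not from the talk): the usual first step is to replace pure-Python loops with a vectorized numpy expression, which pushes the inner loop into compiled code.

```python
import timeit
import numpy as np

def dist_loop(xs, ys):
    """Pairwise squared distances with pure-Python nested loops."""
    out = [[0.0] * len(ys) for _ in xs]
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            out[i][j] = (x - y) ** 2
    return out

def dist_numpy(xs, ys):
    """The same computation, pushed into numpy via broadcasting."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    return (xs[:, None] - ys[None, :]) ** 2

xs = list(range(300))
ys = list(range(300))
t_loop = timeit.timeit(lambda: dist_loop(xs, ys), number=10)
t_np = timeit.timeit(lambda: dist_numpy(xs, ys), number=10)
print(f"loop: {t_loop:.3f}s  numpy: {t_np:.3f}s")
```

When vectorization is not enough (e.g. loops with data-dependent control flow), that is where cython and numba, also mentioned above, come in.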
The mode is one of the basic statistics, defined as the most common value in an array. When the values of the array are categorical, the mode is easy to detect: simply select the value with the most occurrences. Identifying the modes of a numerical array is harder, since the values can be continuous, so counting occurrences by value is not enough; the distribution of the values must be examined in order to identify the most probable ones. Moreover, numerical arrays can be multi-modal, which turns the problem into finding local maxima of the distribution rather than the single global maximum that suffices when only one mode is present. Computing a histogram is one of the easiest ways to estimate the distribution of a numerical array.
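The histogram-based approach described above can be sketched as follows; this is a minimal illustration of ours (bin count, tie-breaking rule and peak filtering are assumptions), since a robust detector would also smooth the histogram and discard shallow noise peaks.

```python
import numpy as np

def find_modes(values, bins=30):
    """Estimate the modes of a numerical array as local maxima of its histogram.

    Returns the centres of histogram bins that are at least as high as their
    left neighbour and strictly higher than their right neighbour (so a flat
    plateau of equal counts contributes one mode, at its right end).
    """
    counts, edges = np.histogram(values, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    modes = []
    for i in range(len(counts)):
        left = counts[i - 1] if i > 0 else -1
        right = counts[i + 1] if i < len(counts) - 1 else -1
        if counts[i] >= left and counts[i] > right:
            modes.append(centres[i])
    return modes

# bimodal sample: two Gaussian bumps centred at 0 and 5
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0.0, 0.5, 1000),
                         rng.normal(5.0, 0.5, 1000)])
print(find_modes(sample))  # includes one mode near 0 and one near 5
```

Sampling noise in the bin counts can produce spurious shallow local maxima in addition to the two true peaks, which is exactly why mode detection on numerical data needs more care than the categorical case.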