EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models

Neural Information Processing Systems

Nevertheless, single-model image restoration encounters challenges related to ill-posedness, resulting in deviations between single-model predictions and ground truths. Ensemble learning, a powerful machine learning technique, aims to reduce these deviations by combining the predictions of multiple base models.
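As a minimal sketch of the ensemble idea described above (plain weighted averaging of base-model outputs, not the paper's GMM-based weighting; all names here are illustrative):

```python
import numpy as np

def ensemble_average(predictions, weights=None):
    """Combine base-model predictions by (weighted) averaging.

    predictions: list of same-shape arrays (e.g. restored images).
    weights: optional per-model weights; uniform if None.
    """
    preds = np.stack(predictions, axis=0)           # (M, H, W)
    if weights is None:
        weights = np.full(len(predictions), 1.0 / len(predictions))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()               # normalise to sum to 1
    return np.tensordot(weights, preds, axes=1)     # weighted mean over models

# Toy example: three noisy estimates of the same ground-truth image.
rng = np.random.default_rng(0)
truth = rng.random((4, 4))
estimates = [truth + 0.1 * rng.standard_normal(truth.shape) for _ in range(3)]
fused = ensemble_average(estimates)
```

By convexity of squared error, the averaged prediction's MSE is never worse than the mean of the base models' MSEs, which is the basic motivation for averaging-style ensembles.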

Consistent Interpolating Ensembles via the Manifold-Hilbert Kernel

Neural Information Processing Systems

To this end, we define the manifold-Hilbert kernel for data distributed on a Riemannian manifold. We prove that kernel smoothing regression and classification using the manifold-Hilbert kernel are weakly consistent in the setting of Devroye et al.
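The kernel smoothing estimator mentioned above can be illustrated with a small Euclidean sketch (Nadaraya-Watson regression with a Gaussian kernel on the real line; the manifold-Hilbert kernel itself is defined on a Riemannian manifold, so this only shows the estimator's form, not the paper's kernel):

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson kernel smoothing: kernel-weighted average of labels."""
    d = x_query[:, None] - x_train[None, :]       # pairwise differences
    w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)          # normalised weighted average

# Smooth noisy-free samples of sin(x) and query at the peak.
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x)
xq = np.array([np.pi / 2])
yq = kernel_smooth(x, y, xq, bandwidth=0.3)       # close to sin(pi/2) = 1
```

Consistency results of the Devroye et al. type concern exactly such estimators: as the sample size grows (with a suitable bandwidth schedule), the smoothed prediction converges to the regression function.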


Supplementary Material (Appendix): Bayesian Deep Ensembles via the Neural Tangent Kernel. A Recap of Standard and NTK Parameterisations

Neural Information Processing Systems

We see that the different parameterisations yield the same distribution for the functional output f(·, θ) at initialisation, but give different scalings to the parameter gradients in the backward pass. GP(0, Θ^L) and is independent of f_0(·) in the infinite-width limit. Let X_0 be an arbitrary test set. In fact, even with a heteroscedastic prior θ ∼ N(0, Λ) with a diagonal matrix Λ ∈ ℝ^{p×p}_+ and diagonal entries {λ_j}_{j=1}^p, it is straightforward to show that the correct setting of the regulariser is ‖θ‖²_Λ = θᵀΛ⁻¹θ in order to obtain a posterior sample of θ. For an NN in the linearised regime [23], this is related to the fact that the NTK and standard parameterisations initialise parameters differently, yet yield the same functional distribution for a randomly initialised NN.
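The claim about the regulariser ‖θ‖²_Λ = θᵀΛ⁻¹θ can be checked numerically in the linear-model case (a numpy sketch under assumed linear features Φ, not the paper's NN code): minimising ‖y − Φθ‖²/σ² + θᵀΛ⁻¹θ recovers the same posterior mean as the standard Gaussian-conditioning formula ΛΦᵀ(ΦΛΦᵀ + σ²I)⁻¹y.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 5
Phi = rng.standard_normal((n, p))          # linear features
lam = rng.uniform(0.5, 2.0, size=p)        # diagonal of prior covariance Lambda
sigma2 = 0.1                               # observation noise variance
theta_true = rng.standard_normal(p) * np.sqrt(lam)
y = Phi @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Minimiser of ||y - Phi theta||^2 / sigma^2 + theta^T Lambda^{-1} theta:
# solve (Phi^T Phi / sigma^2 + Lambda^{-1}) theta = Phi^T y / sigma^2.
A = Phi.T @ Phi / sigma2 + np.diag(1.0 / lam)
theta_reg = np.linalg.solve(A, Phi.T @ y / sigma2)

# Posterior mean from Gaussian conditioning: Lambda Phi^T (Phi Lambda Phi^T + sigma^2 I)^{-1} y.
K = (Phi * lam) @ Phi.T + sigma2 * np.eye(n)
theta_gp = lam * (Phi.T @ np.linalg.solve(K, y))
```

The two expressions agree by the Woodbury identity, which is what makes the Λ⁻¹-weighted regulariser the "correct" one for a heteroscedastic prior.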