Sequential Function-Space Variational Inference via Gaussian Mixture Approximation
Zhu, Menghao Waiyan William, Hao, Pengcheng, Kuruoğlu, Ercan Engin
Continual learning aims to learn a sequence of tasks, acquiring new tasks without forgetting old ones. Sequential function-space variational inference (SFSVI) is a continual learning method based on variational inference which uses a Gaussian variational distribution to approximate the distribution of the outputs of a finite number of selected inducing points. Since the posterior distribution of a neural network is multi-modal, a Gaussian distribution can match only one mode of the posterior, whereas a Gaussian mixture distribution can approximate the posterior more closely. We propose an SFSVI method which uses a Gaussian mixture variational distribution. We also compare different types of variational inference methods with and without a fixed pre-trained feature extractor. We find that in terms of final average accuracy, Gaussian mixture methods perform better than Gaussian methods, and likelihood-focused methods perform better than prior-focused methods.
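To make the core object concrete, the sketch below shows what a Gaussian mixture variational distribution over the outputs at a small set of inducing points looks like. This is a minimal illustration, not the paper's implementation: the component count K, the number of inducing points M, and the diagonal-covariance parameterisation are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: K mixture components over the outputs f(Z)
# at M inducing points. All sizes and names are illustrative.
K, M = 3, 5
logits = rng.normal(size=K)                       # unnormalised mixture weights
weights = np.exp(logits) / np.exp(logits).sum()   # softmax -> mixing proportions
means = rng.normal(size=(K, M))                   # per-component mean of f(Z)
scales = np.exp(0.1 * rng.normal(size=(K, M)))    # per-component diagonal std devs

def sample(n):
    """Draw n samples of the inducing-point outputs from the mixture."""
    comps = rng.choice(K, size=n, p=weights)      # pick a component per sample
    eps = rng.normal(size=(n, M))
    return means[comps] + scales[comps] * eps

def log_prob(f):
    """Mixture log-density: log sum_k w_k N(f; mu_k, diag(s_k^2))."""
    d = (f[None, :] - means) / scales             # standardised residuals, (K, M)
    comp_lp = -0.5 * (d**2 + np.log(2 * np.pi * scales**2)).sum(axis=1)
    m = comp_lp.max()                             # log-sum-exp for stability
    return m + np.log(np.sum(weights * np.exp(comp_lp - m)))

samples = sample(4)
print(samples.shape)                              # (4, 5)
print(np.isfinite(log_prob(samples[0])))          # True
```

In an SFSVI-style objective, `log_prob` would enter the KL term between the variational distribution and the (function-space) prior or previous posterior; unlike the single-Gaussian case, this KL has no closed form for mixtures and is typically estimated by sampling.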
Mar-10-2025