### Clustering of non-Gaussian data by variational Bayes for normal inverse Gaussian mixture models

Finite mixture models, typically Gaussian mixtures, are well known and widely used for model-based clustering. In practice, many data sets are non-Gaussian, exhibiting heavy tails and/or asymmetry. Normal inverse Gaussian (NIG) distributions are normal variance-mean mixtures whose mixing densities are inverse Gaussian distributions, and they can capture both heavy tails and asymmetry. For NIG mixture models, both an expectation-maximization method and variational Bayesian (VB) algorithms have been proposed. However, the existing VB algorithm for NIG mixtures has the disadvantage that the shape of the mixing density is limited. In this paper, we propose another VB algorithm for NIG mixtures that improves on this shortcoming. We also propose an extension to Dirichlet process mixture models to overcome the difficulty of determining the number of clusters in finite mixture models. We evaluated the performance on artificial data and found that the proposed method outperformed Gaussian mixtures and existing implementations for NIG mixtures, especially for highly non-Gaussian data.
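The variance-mean mixture representation described above can be sketched directly: draw the mixing variable from an inverse Gaussian distribution, then draw the observation from a normal whose mean and variance depend on it. The sketch below assumes one common NIG parameterization (`mu`, `beta`, `delta`, `gamma` with `gamma = sqrt(alpha**2 - beta**2)`); the function name and argument names are illustrative, not from the paper.

```python
import numpy as np

def sample_nig(mu, beta, delta, gamma, size, rng=None):
    """Draw samples from a normal inverse Gaussian (NIG) distribution
    via its normal variance-mean mixture representation:
      W ~ InverseGaussian(mean=delta/gamma, shape=delta**2)  (mixing density)
      X | W ~ Normal(mu + beta * W, W)
    Parameter convention is an assumption, not the paper's notation.
    """
    rng = np.random.default_rng(rng)
    # numpy's Wald distribution is the inverse Gaussian distribution
    # with the given mean and shape (scale) parameters.
    w = rng.wald(delta / gamma, delta**2, size=size)
    z = rng.standard_normal(size)
    return mu + beta * w + np.sqrt(w) * z
```

With `beta > 0` the samples are right-skewed, illustrating how a single component can model asymmetric, heavy-tailed data.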

### Deep learning for biomedical photoacoustic imaging: A review

Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extracting relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.

### Least Square Variational Bayesian Autoencoder with Regularization

In recent years, variational autoencoders have become one of the most popular approaches to unsupervised learning of complicated distributions. The variational autoencoder (VAE) provides more efficient reconstructive performance than a traditional autoencoder, and variational autoencoders make better approximations than MCMC. The VAE defines a generative process in terms of ancestral sampling through a cascade of hidden stochastic layers; it is a directed graphical model. A variational autoencoder is trained to maximize the variational lower bound: we maximize the likelihood of the data while simultaneously making a good approximation of the posterior, effectively trading off the data log-likelihood against the KL divergence from the true posterior. This paper describes the scenario in which we wish to find a point estimate of the parameters $\theta$ of some parametric model in which each observation is generated by first sampling a local latent variable and then sampling the associated observation. Here we use a least-squares loss function with regularization in the reconstruction of the image; the least-squares loss function was found to give better reconstructed images and a faster training time.
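The trade-off described above can be written down concretely: a least-squares reconstruction term plus the analytic KL divergence between a Gaussian approximate posterior and the standard normal prior. The following is a minimal numpy sketch of such an objective, not the paper's exact implementation; the function name and the `reg_weight` regularization knob are illustrative assumptions.

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var, reg_weight=1.0):
    """Least-squares VAE objective (a sketch, not the paper's code):
    squared-error reconstruction plus the closed-form KL divergence
    KL( N(mu, diag(exp(log_var))) || N(0, I) ), scaled by reg_weight.
    """
    # Least-squares reconstruction term, summed over feature dimensions.
    recon = np.sum((x - x_hat) ** 2, axis=-1)
    # Analytic KL term for a diagonal-Gaussian encoder vs. N(0, I) prior.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
    return np.mean(recon + reg_weight * kl)
```

When the reconstruction is perfect and the approximate posterior equals the prior, both terms vanish and the loss is zero, which is a quick sanity check on the formula.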

### Revisiting Bayesian Autoencoders with MCMC

Autoencoders are a family of unsupervised learning methods that use neural network architectures and learning algorithms to learn a lower-dimensional representation (encoding) of the data, which can then be used to reconstruct a representation close to the original input. They thus facilitate dimensionality reduction for prediction and classification [1, 2], and have been successfully applied to image classification [3, 4], face recognition [5, 6], geoscience and remote sensing [7], speech-based …

Bayes' theorem is used as the foundation for inference in Bayesian neural networks, and Markov chain Monte Carlo (MCMC) sampling methods [25] are used for constructing the posterior distribution. Variational inference [26] is another way to approximate the posterior distribution, which approximates an intractable posterior distribution by a tractable one. This makes it particularly suited to large data sets and models, and so it has been popular for autoencoders and neural networks [13, 27].
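The MCMC machinery referred to above can be illustrated with the simplest member of the family: a random-walk Metropolis sampler that explores a posterior given only its unnormalized log density, the same principle used to sample autoencoder weight posteriors. This is a generic sketch, not the paper's sampler; `log_post`, `theta0`, and `step` are illustrative names.

```python
import numpy as np

def metropolis_posterior(log_post, theta0, n_steps, step=0.1, rng=None):
    """Random-walk Metropolis sampler (a minimal sketch).
    log_post: function returning the unnormalized log posterior density.
    theta0:   starting parameter vector (e.g. flattened network weights).
    """
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        # Propose a Gaussian perturbation of the current parameters.
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```

For high-dimensional neural network posteriors, practical samplers use gradient information (e.g. Langevin or Hamiltonian proposals), but the accept/reject structure is the same.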

### Invertible Neural Networks for Uncertainty Quantification in Photoacoustic Imaging

Multispectral photoacoustic imaging (PAI) is an emerging imaging modality which enables the recovery of functional tissue parameters such as blood oxygenation. However, the underlying inverse problems are potentially ill-posed, meaning that radically different tissue properties may, in theory, yield comparable measurements. In this work, we present a new approach for handling this specific type of uncertainty by leveraging the concept of conditional invertible neural networks (cINNs). Specifically, we propose going beyond commonly used point estimates for tissue oxygenation and converting single-pixel initial pressure spectra to the full posterior probability density. This way, the inherent ambiguity of a problem can be encoded with multiple modes in the output. Based on the presented architecture, we demonstrate two use cases which leverage this information to not only detect and quantify but also to compensate for uncertainties: (1) photoacoustic device design and (2) optimization of photoacoustic image acquisition. Our in silico studies demonstrate the potential of the proposed methodology to become an important building block for uncertainty-aware reconstruction of physiological parameters with PAI.
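The core building block behind cINNs is the conditional affine coupling layer: half of the variables are transformed with a scale and shift computed from the other half concatenated with the conditioning input (here, a pressure spectrum), which makes the map exactly invertible. The toy numpy sketch below illustrates this mechanism only; it is not the paper's architecture, and `s_net`/`t_net` stand in for arbitrary learned subnetworks.

```python
import numpy as np

def coupling_forward(x, cond, s_net, t_net):
    """One conditional affine coupling block (toy sketch of the cINN
    primitive). x is split in half; the second half is scaled and
    shifted using the first half plus the conditioning vector."""
    x1, x2 = np.split(x, 2, axis=-1)
    h = np.concatenate([x1, cond], axis=-1)
    y2 = x2 * np.exp(s_net(h)) + t_net(h)  # invertible given x1 and cond
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, cond, s_net, t_net):
    """Exact inverse of coupling_forward; stacking such blocks lets a
    cINN map posterior samples back and forth between latent space and
    parameter space for a given measurement `cond`."""
    y1, y2 = np.split(y, 2, axis=-1)
    h = np.concatenate([y1, cond], axis=-1)
    x2 = (y2 - t_net(h)) * np.exp(-s_net(h))
    return np.concatenate([y1, x2], axis=-1)
```

Sampling the latent input repeatedly for a fixed conditioning spectrum is what yields the full, possibly multimodal, posterior over tissue oxygenation rather than a single point estimate.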