Expectation Maximization Algorithm

#artificialintelligence 

Goal: In today's summary we take a look at the expectation-maximization (EM) algorithm, which lets us optimize latent variable models when exact inference of the posterior probability of the latent variables is intractable.

Motivation: Latent variable models are interesting in their own right because they are related to variational autoencoders and to the encoder-decoder frameworks that are popular in unsupervised and semi-supervised learning. They allow us to sample from the data distribution and are believed to enhance the expressiveness of hierarchical recurrent encoder-decoder models. We can think of the latent variables as memorizing higher-level abstract information, such as emotional states, which allows the model to generate sentimental utterances.

Steps: In general we are concerned with finding good models, which means determining the parameters of a model that can explain the data.
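The alternation the summary describes can be sketched for the classic textbook case, a one-dimensional Gaussian mixture, where the latent variable is the (unobserved) component assignment of each point. The E-step computes the posterior responsibilities of the latent components, and the M-step re-estimates the parameters from them. The function name `em_gmm_1d` and all implementation details below are illustrative assumptions, not code from the original post:

```python
import numpy as np

def em_gmm_1d(x, n_components=2, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    weights = np.full(n_components, 1.0 / n_components)
    means = rng.choice(x, n_components, replace=False)   # init at random data points
    variances = np.full(n_components, np.var(x))
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = p(z_i = k | x_i, current params)
        dens = (weights / np.sqrt(2 * np.pi * variances)
                * np.exp(-0.5 * (x[:, None] - means) ** 2 / variances))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r * x[:, None]).sum(axis=0) / nk
        variances = (r * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, variances

# Usage: fit a mixture to samples drawn from two well-separated Gaussians.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, m, v = em_gmm_1d(x)
```

Each iteration is guaranteed not to decrease the observed-data log-likelihood, which is the core property that makes EM a sensible optimizer when the posterior over latent variables cannot be used analytically inside a direct maximum-likelihood fit.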
