Understanding AutoEncoders with an Example: A Step-by-Step Tutorial
This is the second (and last) article in the "Understanding AutoEncoders with an Example" series. In the first article, we generated a synthetic dataset and built a vanilla autoencoder to reconstruct images of circles. We'll be using the same dataset once again, so please revisit the section "An MNIST-like Dataset of Circles" if you need a refresher. We'll also cover what the famous reparametrization trick is, and the role of the Kullback-Leibler divergence/loss.

You're invited to read this series of articles while running its accompanying notebook, available in the "Accompanying Notebooks" repository on my GitHub, using Google Colab. Moreover, I built a Table of Contents to help you navigate the topics across the two articles, in case you'd like to use the series as a mini-course and work your way through the content one topic at a time.
Jun-7-2022, 19:45:09 GMT