Old Photo Restoration using Deep Learning


As you can see in these images, there is a big difference between synthesized old images and real old ones. The synthesized image remains in high definition even with the fake scratches and color changes, whereas the real photo contains far fewer details. The authors addressed this issue by designing a new network specifically for the task. They used two variational auto-encoders, or VAEs, to transform old (degraded) and clean (restored) photos into two separate latent spaces. This translation into latent spaces is learned on synthetic paired data but generalizes well to real photos, since the domain gap between real and synthetic degradation is much smaller in such compact latent spaces. The remaining gap between the two latent spaces produced by the VAEs is closed by jointly training an adversarial discriminator.
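The idea above can be sketched in PyTorch. This is a minimal, hypothetical illustration, not the authors' actual architecture: two VAE encoders map degraded and clean photos into compact latent codes, and a discriminator is trained to tell the two latent distributions apart, so that training the encoders against it shrinks the domain gap. All layer sizes and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Encodes a 64x64 RGB image into a compact latent code (illustrative sizes)."""
    def __init__(self, in_channels=3, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),           # 32 -> 16
            nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class LatentDiscriminator(nn.Module):
    """Predicts whether a latent code came from the old- or clean-photo encoder."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, z):
        return self.net(z)

vae_old, vae_clean = VAEEncoder(), VAEEncoder()
disc = LatentDiscriminator()

old_batch = torch.randn(4, 3, 64, 64)    # stand-in for degraded photos
clean_batch = torch.randn(4, 3, 64, 64)  # stand-in for clean photos

z_old, _, _ = vae_old(old_batch)
z_clean, _, _ = vae_clean(clean_batch)

# Adversarial objective: the discriminator learns to separate the two
# latent spaces; the encoders are then trained to fool it, which aligns
# the spaces and closes the domain gap.
bce = nn.BCEWithLogitsLoss()
d_loss = bce(disc(z_old), torch.ones(4, 1)) + bce(disc(z_clean), torch.zeros(4, 1))
```

In a full training loop the encoder update would use the reversed labels (or a gradient-reversal layer) so the two latent distributions become indistinguishable, alongside the usual VAE reconstruction and KL terms.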
