How to Train a Progressive Growing GAN in Keras for Synthesizing Faces

#artificialintelligence

Generative adversarial networks, or GANs, are effective at generating high-quality synthetic images. A limitation of GANs is that they are only capable of generating relatively small images, such as 64×64 pixels. The Progressive Growing GAN is an extension to the GAN training procedure that involves training a GAN to generate very small images, such as 4×4, and incrementally increasing the size of the generated images to 8×8, 16×16, and so on, until the desired output size is met. This has allowed the progressive growing GAN to generate photorealistic synthetic faces at 1024×1024 pixel resolution. The key innovation of the progressive growing GAN is a two-phase training procedure that involves fading in new blocks to support higher-resolution images, followed by fine-tuning. In this tutorial, you will discover how to implement and train a progressive growing generative adversarial network for generating celebrity faces. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code. Photo by Alessandro Caproni, some rights reserved. GANs are effective at generating crisp synthetic images, although they are typically limited in the size of the images that can be generated.
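To make the fade-in idea concrete, the sketch below shows one common way to implement it with the TensorFlow Keras backend: a custom merge layer blends the upsampled output of the existing block with the output of the new, higher-resolution block using a coefficient alpha that is ramped from 0 to 1 during the fade-in phase. This is a minimal sketch; the layer and helper names (WeightedSum, update_fadein) are illustrative, not from the excerpt above.

```python
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Add

class WeightedSum(Add):
    """Blend two inputs as (1 - alpha) * old + alpha * new during fade-in."""
    def __init__(self, alpha=0.0, **kwargs):
        super().__init__(**kwargs)
        # backend variable so alpha can be updated between training batches
        self.alpha = K.variable(alpha, name='ws_alpha')

    def _merge_function(self, inputs):
        # inputs[0]: upsampled output of the old block, inputs[1]: new block output
        assert len(inputs) == 2
        return (1.0 - self.alpha) * inputs[0] + self.alpha * inputs[1]

def update_fadein(models, step, n_steps):
    """Linearly ramp alpha from 0 to 1 over the fade-in training phase."""
    alpha = step / float(n_steps - 1)
    for model in models:
        for layer in model.layers:
            if isinstance(layer, WeightedSum):
                K.set_value(layer.alpha, alpha)
```

During the fade-in phase, update_fadein would be called once per batch for the growing generator and discriminator, so the new block's contribution increases gradually instead of disrupting what the smaller networks have already learned.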



How to Develop a Face Recognition System Using FaceNet in Keras

#artificialintelligence

Face recognition is a computer vision task of identifying and verifying a person based on a photograph of their face. FaceNet is a face recognition system developed in 2015 by researchers at Google that achieved then state-of-the-art results on a range of face recognition benchmark datasets. The FaceNet system can be used broadly thanks to multiple third-party open source implementations of the model and the availability of pre-trained models. The FaceNet system can be used to extract high-quality features from faces, called face embeddings, that can then be used to train a face identification system. In this tutorial, you will discover how to develop a face identification system using FaceNet and an SVM classifier to identify people from photographs. Photo by Peter Valverde, some rights reserved. Face recognition is the general task of identifying and verifying people from photographs of their face.
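As a rough illustration of the embedding-plus-classifier pipeline described above, the sketch below assumes a third-party pre-trained FaceNet model saved as facenet_keras.h5 and face crops already extracted at 160×160 pixels; the helper names and the input arrays are hypothetical.

```python
import numpy as np
from tensorflow.keras.models import load_model
from sklearn.preprocessing import LabelEncoder, Normalizer
from sklearn.svm import SVC

def get_embedding(model, face_pixels):
    """Return the FaceNet embedding for a single standardized face crop."""
    face_pixels = face_pixels.astype('float32')
    face_pixels = (face_pixels - face_pixels.mean()) / face_pixels.std()
    sample = np.expand_dims(face_pixels, axis=0)  # add a batch dimension
    return model.predict(sample)[0]

def train_identifier(model, faces, labels):
    """Embed each face, L2-normalize, and fit a linear SVM over identities."""
    embeddings = np.asarray([get_embedding(model, f) for f in faces])
    embeddings = Normalizer(norm='l2').transform(embeddings)
    encoder = LabelEncoder().fit(labels)
    classifier = SVC(kernel='linear', probability=True)
    classifier.fit(embeddings, encoder.transform(labels))
    return classifier, encoder

# usage sketch (assumed model file and face arrays):
# facenet = load_model('facenet_keras.h5')
# svm, encoder = train_identifier(facenet, train_faces, train_labels)
```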


How to Develop a Stacking Ensemble for Deep Learning Neural Networks in Python With Keras

#artificialintelligence

Model averaging is an ensemble technique where multiple sub-models contribute equally to a combined prediction. Model averaging can be improved by weighting the contribution of each sub-model to the combined prediction by the expected performance of that sub-model. This can be extended further by training an entirely new model to learn how to best combine the contributions from each sub-model. This approach is called stacked generalization, or stacking for short, and can result in better predictive performance than any single contributing model. In this tutorial, you will discover how to develop a stacked generalization ensemble for deep learning neural networks. Photo by David Law, some rights reserved. A model averaging ensemble combines the predictions from multiple trained models.
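A minimal sketch of the idea, assuming small MLP sub-models in Keras and scikit-learn's logistic regression as the meta-learner; the architecture, number of sub-models, and data splits are placeholders for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

def fit_base_model(trainX, trainy, n_classes):
    """Fit one small MLP sub-model (architecture is illustrative)."""
    model = Sequential([
        Dense(25, activation='relu', input_shape=(trainX.shape[1],)),
        Dense(n_classes, activation='softmax'),
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
    model.fit(trainX, trainy, epochs=100, verbose=0)
    return model

def stacked_dataset(members, X):
    """Concatenate each sub-model's predicted probabilities as meta-features."""
    return np.hstack([m.predict(X, verbose=0) for m in members])

# usage sketch (assumed train/validation/test splits):
# members = [fit_base_model(trainX, trainy, n_classes=3) for _ in range(5)]
# meta = LogisticRegression(max_iter=1000).fit(stacked_dataset(members, valX), valy)
# yhat = meta.predict(stacked_dataset(members, testX))
```

Note that the meta-learner is fit on predictions made over data the sub-models did not see during training, so it learns how to combine them rather than simply rewarding sub-models that overfit the training set.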


How to Improve Performance With Transfer Learning for Deep Learning Neural Networks

#artificialintelligence

An interesting benefit of deep learning neural networks is that they can be reused on related problems. Transfer learning refers to a technique where a model developed for a different but related predictive modeling problem is reused, partly or wholly, to accelerate training and improve the performance of a model on the problem of interest. In deep learning, this means reusing the weights from one or more layers of a pre-trained network model in a new model, and either keeping the weights fixed, fine-tuning them, or adapting the weights entirely when training the new model. In this tutorial, you will discover how to use transfer learning to improve the performance of deep learning neural networks in Python with Keras. Photo by Damian Gadal, some rights reserved. Transfer learning generally refers to a process where a model trained on one problem is used in some way on a second related problem.
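The sketch below shows one simple way to express this in Keras: copy the layers of a model trained on the source problem into a new model, optionally freeze the earliest layers, and add a new output layer for the target problem. The function name and the choice of how many layers to freeze are assumptions for illustration, not taken from the tutorial text above.

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

def transfer_model(source_model, n_fixed, n_classes):
    """Reuse all but the output layer of a trained source model for a new task."""
    model = Sequential()
    for i, layer in enumerate(source_model.layers[:-1]):  # drop the old output layer
        layer.trainable = i >= n_fixed                     # freeze the first n_fixed layers
        model.add(layer)
    model.add(Dense(n_classes, activation='softmax'))      # new task-specific output layer
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

# usage sketch (hypothetical source model and data):
# new_model = transfer_model(source_model, n_fixed=2, n_classes=3)
# new_model.fit(trainX, trainy, epochs=50, verbose=0)
```

Setting trainable to False before compiling keeps the reused weights fixed so they act as a feature extractor; leaving later reused layers trainable fine-tunes them on the new problem.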