How to build Ensemble Models in machine learning? (with code in R)

@machinelearnbot

Over the last 12 months, I have been participating in a number of machine learning hackathons on Analytics Vidhya and competitions on Kaggle. After each competition, I always make sure to go through the winners' solutions. These solutions usually provide me with critical insights, which have helped me immensely in later competitions.


How to Implement Progressive Growing GAN Models in Keras

#artificialintelligence

The progressive growing generative adversarial network is an approach for training a deep convolutional neural network model to generate synthetic images. It extends the more traditional GAN architecture by incrementally growing the size of the generated image during training, starting with a very small image, such as 4×4 pixels. This allows the stable training and growth of GAN models capable of generating very large, high-quality images, such as synthetic celebrity faces of 1024×1024 pixels. In this tutorial, you will discover how to develop progressive growing generative adversarial network models from scratch with Keras. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code. Photo by Diogo Santos Silva, some rights reserved.
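
To make the growing idea concrete, here is a minimal Keras sketch, assuming a toy generator with illustrative layer sizes: it builds a 4×4 generator and then "grows" it to 8×8 by inserting an upsampling block before a new to-RGB layer. The fade-in blending, equalized learning rate, and pixel normalization used in the full progressive growing method are omitted, so this is only a sketch of the structural idea, not the paper's training procedure.

```python
# Minimal sketch of growing a generator's output resolution in Keras.
# Layer widths and the latent size are illustrative assumptions.
from tensorflow.keras import layers, models

def base_generator(latent_dim=100):
    # Maps a latent vector to a 4x4 RGB image.
    z = layers.Input(shape=(latent_dim,))
    x = layers.Dense(128 * 4 * 4)(z)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Reshape((4, 4, 128))(x)
    img = layers.Conv2D(3, (3, 3), padding="same", activation="tanh")(x)  # "to-RGB" layer
    return models.Model(z, img)

def grow_generator(old_model):
    # Doubles the output resolution by inserting an upsampling + conv block
    # before a new to-RGB layer (the fade-in blending step is omitted here).
    features = old_model.layers[-2].output  # feature maps before the old to-RGB layer
    x = layers.UpSampling2D()(features)
    x = layers.Conv2D(128, (3, 3), padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    img = layers.Conv2D(3, (3, 3), padding="same", activation="tanh")(x)
    return models.Model(old_model.input, img)

gen_4x4 = base_generator()
gen_8x8 = grow_generator(gen_4x4)
print(gen_4x4.output_shape, gen_8x8.output_shape)  # (None, 4, 4, 3) (None, 8, 8, 3)
```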


How to Use the Keras Functional API for Deep Learning - Machine Learning Mastery

@machinelearnbot

The Keras Python library makes creating deep learning models fast and easy. The sequential API allows you to create models layer-by-layer for most problems, but it is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs. The functional API in Keras is an alternative way of creating models that offers a lot more flexibility, including the ability to define these more complex models. In this tutorial, you will discover how to use the more flexible functional API in Keras to define deep learning models.
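
As a quick illustration of what the functional API enables, here is a minimal sketch of a model with two inputs and a shared dense layer, something the sequential API cannot express. The layer sizes and input names are assumptions for illustration, not taken from the tutorial.

```python
# Minimal sketch of the Keras functional API: two inputs sharing one layer.
from tensorflow.keras import layers, models

input_a = layers.Input(shape=(16,), name="input_a")
input_b = layers.Input(shape=(16,), name="input_b")

shared = layers.Dense(32, activation="relu")  # one layer instance reused on both inputs
merged = layers.concatenate([shared(input_a), shared(input_b)])
output = layers.Dense(1, activation="sigmoid")(merged)

model = models.Model(inputs=[input_a, input_b], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```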


Image Compression Using Autoencoders in Keras - Paperspace Blog

#artificialintelligence

Autoencoders are deep learning models for transforming data from a high-dimensional space to a lower-dimensional space. They work by encoding the data, whatever its size, into a 1-D vector. This vector can then be decoded to reconstruct the original data (in this case, an image). The more accurate the autoencoder, the closer the generated data is to the original. In this tutorial we'll explore the autoencoder architecture and see how we can apply this model to compress images from the MNIST dataset using TensorFlow and Keras.
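
The following is a minimal sketch of such a compression pipeline, assuming a simple fully connected autoencoder with a 32-dimensional code; the tutorial's actual architecture may differ.

```python
# Minimal sketch: compress 28x28 MNIST images to a 32-dimensional code and back.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = layers.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)        # encoder: 784 -> 32
decoded = layers.Dense(784, activation="sigmoid")(code)   # decoder: 32 -> 784

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))

reconstructions = autoencoder.predict(x_test[:10])
print(reconstructions.shape)  # (10, 784)
```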


Convolutional Reservoir Computing for World Models

arXiv.org Machine Learning

Recently, reinforcement learning models have achieved great success, completing complex tasks such as mastering Go and other games with higher scores than human players. Many of these models collect considerable data on the tasks and improve accuracy by extracting visual and time-series features using convolutional neural networks (CNNs) and recurrent neural networks, respectively. However, these networks have very high computational costs because they must be trained repeatedly on a large volume of past playing data. In this study, we propose a novel practical approach called the reinforcement learning with convolutional reservoir computing (RCRC) model. The RCRC model has several desirable features: 1. it can extract visual and time-series features very quickly because it uses a random fixed-weight CNN and a reservoir computing model; 2. it does not require the training data to be stored because it extracts features without training and selects actions with an evolution strategy. Furthermore, the model achieves a state-of-the-art score on a popular reinforcement learning task. Remarkably, we find that simple fixed-random-weight networks, such as a network with only one dense layer, can also reach a high score on the RL task.
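
As a rough illustration of the fixed-random-weight feature extraction idea described in the abstract, here is a small Keras sketch of an untrained, frozen CNN used as a visual feature extractor. The layer sizes and input shape are assumptions for illustration, and the reservoir and evolution-strategy components of the RCRC model are not shown.

```python
# Minimal sketch: a random fixed-weight CNN used as a visual feature extractor.
import numpy as np
from tensorflow.keras import layers, models

def random_fixed_cnn(input_shape=(64, 64, 3)):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, (4, 4), strides=2, activation="relu")(inputs)
    x = layers.Conv2D(32, (4, 4), strides=2, activation="relu")(x)
    x = layers.Flatten()(x)
    model = models.Model(inputs, x)
    model.trainable = False  # weights stay at their random initialization
    return model

extractor = random_fixed_cnn()
frame = np.random.rand(1, 64, 64, 3).astype("float32")
features = extractor(frame, training=False)  # visual features without any training
print(features.shape)
```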