Over the last 12 months, I have participated in a number of machine learning hackathons on Analytics Vidhya and competitions on Kaggle. After each competition, I always make sure to go through the winners' solutions. These solutions usually provide critical insights, which have helped me immensely in subsequent competitions.
One thing you can do, for example, is to output an intermediate layer's activation for every data row, then train your classifier/regressor/whatever on those activations rather than on the original features. Which layer to use depends on the problem you're trying to solve, but the closer you get to the final dense layers, the more you will find features related to what the pre-trained model was trained on (at least for networks that only inject the error signal at the end). As an example in Python/Lasagne: let's say you're defining an architecture to load VGG16 and you want to get one of the intermediate layers; you can then further process those activations to get your features. Generally speaking, though, it's probably better to clip off the final dense layers, replace them with something that outputs what you need for your problem, and train for a few epochs.
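The idea of tapping an intermediate layer can be sketched without any framework at all. Below is a minimal NumPy stand-in for a pre-trained network (two dense layers with random, purely illustrative weights); the point is only to show that you stop the forward pass at the layer you care about and hand those activations to a downstream model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained network: two dense layers with ReLU.
# In practice these weights would come from something like VGG16.
W1, b1 = rng.normal(size=(20, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 10)), np.zeros(10)

def hidden_activations(X):
    # Forward pass only up to the intermediate layer we want to tap.
    return np.maximum(X @ W1 + b1, 0.0)   # ReLU activations, shape (n, 64)

X = rng.normal(size=(5, 20))              # 5 data rows, 20 raw features
features = hidden_activations(X)          # use these instead of raw X
# `features` is now the input to whatever classifier/regressor you like.
```

With a real pre-trained network, the only change is that the forward pass is the library's, not your own matrix multiplies.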
The progressive growing generative adversarial network is an approach for training a deep convolutional neural network model to generate synthetic images. It is an extension of the more traditional GAN architecture in which the size of the generated image is grown incrementally during training, starting with a very small image, such as 4×4 pixels. This allows the stable training and growth of GAN models capable of generating very large, high-quality images, such as synthetic celebrity faces at a size of 1024×1024 pixels. In this tutorial, you will discover how to develop progressive growing generative adversarial network models from scratch with Keras. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code. How to Implement Progressive Growing GAN Models in Keras. Photo by Diogo Santos Silva, some rights reserved.
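The key mechanism when a new, higher-resolution block is added is the fade-in: the old output is upsampled and blended with the new block's output by a weight alpha that ramps from 0 to 1 over training. A minimal NumPy sketch of that blend (function names are my own, illustrative only):

```python
import numpy as np

def upsample_nearest(img):
    # Double spatial resolution by nearest-neighbour repetition, (H, W, C).
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(old_output, new_output, alpha):
    # Weighted sum used while a new higher-resolution block is phased in:
    # alpha ramps from 0 to 1, gradually handing control to the new layers.
    return (1.0 - alpha) * old_output + alpha * new_output

old = np.random.rand(4, 4, 3)   # generator output at 4x4
new = np.random.rand(8, 8, 3)   # output of the newly added 8x8 block
blended = fade_in(upsample_nearest(old), new, alpha=0.25)
```

At alpha=0 the blended output is exactly the upsampled old image, so training is undisturbed when the block is first attached; at alpha=1 the new block has taken over entirely.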
We present a Hidden Markov Model-based algorithm for constructing timescales for paleoclimate records by annual layer counting. This objective, statistics-based approach has a number of major advantages over the current manual approach, beginning with speed. Manual layer counting of a single core (up to 3 km in length) can require multiple person-years of time; the StratiCounter algorithm can count up to 100 layers/min, corresponding to a full-length timescale constructed in a few days. Moreover, the algorithm gives rigorous uncertainty estimates for the resulting timescale, which are far smaller than those produced manually. We demonstrate the utility of StratiCounter by applying it to ice-core data from two cores from Greenland and Antarctica. Performance of the algorithm is comparable to a manual approach. When using all available data, false-discovery rates and miss rates are 1-1.2% and 1.2-1.6%, respectively, for the two cores. For one core, even better agreement is found when using only the chemistry series primarily employed by human experts in the manual approach.
The Keras Python library makes creating deep learning models fast and easy. The sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs. The functional API in Keras is an alternate way of creating models that offers a lot more flexibility, including creating more complex models. In this tutorial, you will discover how to use the more flexible functional API in Keras to define deep learning models.
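The difference is easiest to see in code. Below is a small sketch of a functional-API model with two inputs and a shared layer, which the sequential API cannot express (the input sizes and layer names are arbitrary, chosen only for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two inputs and one shared Dense layer: a graph, not a simple stack.
input_a = keras.Input(shape=(16,), name="input_a")
input_b = keras.Input(shape=(16,), name="input_b")

shared = layers.Dense(8, activation="relu")   # one layer object, reused twice
merged = layers.concatenate([shared(input_a), shared(input_b)])
output = layers.Dense(1, activation="sigmoid")(merged)

model = keras.Model(inputs=[input_a, input_b], outputs=output)
```

Each layer is called on a tensor and returns a tensor, so you can branch, merge, and reuse layers freely, then wrap any input/output pair into a `Model`.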