I'm certainly not in a position to comment on, throw opinions at, or take sides in great thought processes that were born decades before I was, but consider these the thoughts of someone who has closely followed the work of the field's pioneers.
Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes a fixed-size vector as input, which limits its usage in situations that involve a 'series' type of input with no predetermined size. RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask what's the big deal; can't I just call a regular NN repeatedly? Sure you can, but the 'series' part of the input means something: the order of the elements carries information, and the RNN's hidden state lets earlier inputs influence how later ones are processed.
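To make the "no predetermined size" point concrete, here is a minimal NumPy sketch of a vanilla RNN cell (not any particular library's implementation): the same small set of weights is reused at every time step, so sequences of any length flow through the same network, with the hidden state carrying information forward.

```python
import numpy as np

def rnn_step(x_t, h_prev, Wxh, Whh, b):
    # One recurrence step: the new hidden state depends on the current
    # input AND the previous hidden state, which is what makes it "recurrent".
    return np.tanh(x_t @ Wxh + h_prev @ Whh + b)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
Wxh = rng.normal(size=(input_dim, hidden_dim))   # input-to-hidden weights
Whh = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
b = np.zeros(hidden_dim)

# Sequences of different lengths are handled by the very same weights;
# nothing about the model is tied to a fixed input size.
for seq_len in (2, 5, 9):
    xs = rng.normal(size=(seq_len, input_dim))
    h = np.zeros(hidden_dim)
    for x_t in xs:
        h = rnn_step(x_t, h, Wxh, Whh, b)
    print(seq_len, h.shape)
```

The final hidden state `h` is a fixed-size summary of an arbitrary-length sequence, which is exactly what a plain feed-forward network cannot produce on its own.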
Deep learning models like the convolutional neural network (CNN) have a massive number of parameters. Beyond the weights, which training optimizes, they also have hyper-parameters: settings such as the dropout rate or weight-norm limits that the model does not learn on its own. You could grid-search the optimal values for these hyper-parameters, but you'll need a lot of hardware and time. So, does a true data scientist settle for guessing these essential settings? One of the best ways to improve your models is to build on the design and architecture of the experts who have done deep research in your domain, often with powerful hardware at their disposal. Here's how to modify dropout and limit weight sizes in Keras with MNIST:
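As a rough sketch of what those two knobs do, here are the mechanisms in plain NumPy (this is an illustration of the idea, not the post's Keras/MNIST code; in Keras these correspond to the `Dropout` layer and a `kernel_constraint` such as `keras.constraints.max_norm` on a `Dense` layer).

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate):
    # Inverted dropout, applied at training time: zero out a random
    # fraction of units and rescale the survivors so the expected
    # activation stays the same.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

def max_norm(weights, max_value):
    # Limit weight sizes: rescale any column whose L2 norm exceeds
    # max_value (one column = the incoming weights of one unit).
    norms = np.linalg.norm(weights, axis=0, keepdims=True)
    scale = np.minimum(1.0, max_value / np.maximum(norms, 1e-12))
    return weights * scale

W = rng.normal(scale=3.0, size=(8, 4))
W_clipped = max_norm(W, 2.0)
print(np.linalg.norm(W_clipped, axis=0))  # every column norm is now <= 2.0
```

Dropout fights overfitting by preventing units from co-adapting, while the max-norm constraint caps how large any unit's weights can grow; the two are often tuned together.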
Over the past several years, deep learning has become the go-to technique for most AI-type problems, overshadowing classical machine learning. The clear reason for this is that deep learning has repeatedly demonstrated superior performance on a wide variety of tasks including speech, natural language, vision, and playing games. Yet despite that high performance, there are still a few advantages to classical machine learning and a number of specific situations where you'd be much better off using something like a linear regression or decision tree rather than a big deep network. In this post we're going to compare and contrast deep learning vs classical machine learning techniques. In doing so we'll identify the pros and cons of both and where/how each is best used.
This post walks through a complete example illustrating an essential data science building block: the underfitting vs. overfitting problem. The author explores the problem through a beginner's implementation of cross-validation.

The wide growth of deep learning has complicated things a bit in the hardware department. This post will walk through the different types of computer chips, where they're available, and which ones are the best to boost your performance.

One of the most common problems in data science is dealing with missing values.
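The underfitting vs. overfitting idea mentioned above can be sketched with a beginner-style k-fold cross-validation in NumPy (a minimal illustration of the concept, not the linked post's actual code): fit polynomials of increasing degree to noisy data and compare their average held-out error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(np.pi * x) + rng.normal(scale=0.2, size=x.size)

def cv_error(degree, k=5):
    # Plain k-fold cross-validation: split the indices into k folds,
    # hold each fold out once, and average the validation MSE.
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coefs, x[val]) - y[val]) ** 2))
    return float(np.mean(errs))

# Degree 1 underfits the sine curve, degree 3 tracks it well,
# and a high degree starts chasing the noise.
for degree in (1, 3, 12):
    print(degree, round(cv_error(degree), 3))
```

The point of cross-validation here is that training error alone cannot distinguish a good fit from an overfit one; only the held-out folds reveal which model generalizes.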