Electrification was, without a doubt, the greatest engineering marvel of the 20th century. The electric motor was invented in 1821, and the electrical circuit was mathematically analyzed in 1827, yet factory, household, and railway electrification all took off only several decades later. The field of AI was formally founded in 1956, but only now, more than six decades later, is AI expected to revolutionize the way humanity will live and work in the coming decades.
We all hear terms like AI, machine learning, and deep learning thrown around and often used interchangeably: some of us tag along without knowing what they mean, some dismiss them as buzzwords, and others claim to know (and do) what these terms really entail. The distinctions between them aren't clear-cut, but this should give a sense of how the terms are generally used, how they relate to one another, and how data science threads them all together. Artificial Intelligence describes machines that can perform tasks resembling those done by humans; in other words, AI implies machines that artificially model human intelligence. AI systems help us manage, model, and analyze complex systems.
I'm certainly in no position to comment on, dismiss, or take sides in debates that began decades before I was born, but take these as the thoughts of someone who has closely followed the work of the field's pioneers.
Deep learning models like the Convolutional Neural Network (CNN) involve a massive number of design choices beyond the weights the model learns; these are called hyper-parameters precisely because the training process does not optimize them. You could grid-search the optimal values for these hyper-parameters, but you'll need a lot of hardware and time. So, does a true data scientist settle for guessing these essential settings? One of the best ways to improve your models is to build on the designs and architectures of experts who have done deep research in your domain, often with powerful hardware at their disposal. Two such expert-recommended techniques, easy to apply in Keras on MNIST, are dropout and limiting weight sizes:
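Here is a minimal sketch of what that looks like in Keras: a small CNN for MNIST-shaped input that applies `Dropout` and caps weight magnitudes with a `max_norm` kernel constraint. The specific values (a 20% drop rate, a norm limit of 3, 32 filters) are common published defaults chosen for illustration, not values prescribed by this text.

```python
# Small Keras CNN for 28x28 grayscale MNIST digits, demonstrating two
# regularization hyper-parameters: dropout rate and a max-norm weight
# constraint. The rates and sizes below are illustrative defaults.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten,
                                     Dense, Dropout)
from tensorflow.keras.constraints import max_norm

def build_model():
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1),
               kernel_constraint=max_norm(3)),   # cap each filter's weight norm at 3
        MaxPooling2D((2, 2)),
        Dropout(0.2),                            # randomly zero 20% of activations
        Flatten(),
        Dense(128, activation="relu", kernel_constraint=max_norm(3)),
        Dropout(0.2),
        Dense(10, activation="softmax"),         # one output per digit class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

You would then train with something like `model.fit(x_train, y_train, epochs=10)` on the normalized MNIST arrays; the constraint and dropout layers act only during training and add no cost at inference.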
Over the past several years, deep learning has become the go-to technique for most AI problems, overshadowing classical machine learning. The clear reason is that deep learning has repeatedly demonstrated superior performance on a wide variety of tasks, including speech, natural language, vision, and game playing. Yet despite that performance, classical machine learning retains a few advantages, and there are specific situations where you'd be much better off using something like a linear regression or a decision tree rather than a big deep network. In this post we're going to compare and contrast deep learning with classical machine learning, identifying the pros and cons of each and where and how they are best used.
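To make the classical side of that comparison concrete, here is a hedged sketch of the decision-tree alternative mentioned above: on a small tabular dataset it trains in milliseconds on a CPU, and its learned rules can be printed and inspected, which a large deep network cannot offer. The iris dataset, 80/20 split, and depth limit are illustrative choices of mine, not details from the original post.

```python
# A classical-ML baseline: a shallow, interpretable decision tree trained
# on a small tabular dataset with scikit-learn. Dataset and depth are
# illustrative choices, not from the source text.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)                 # trains in milliseconds on a CPU

print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
print(export_text(clf))                   # human-readable decision rules
```

No GPUs, no hyper-parameter sweeps, and the `export_text` output shows exactly which feature thresholds drive each prediction; that interpretability is one of the advantages the comparison below returns to.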