This is the age of artificial intelligence. Machine learning and predictive analytics are now established and integral to just about every modern business, but artificial intelligence expands the scale of what's possible within those fields. It's what makes deep learning possible. Systems with greater autonomy and complexity can now tackle correspondingly complex problems.
This is the first in a series of posts about machine learning concepts, where we'll cover everything from learning styles to new dimensions in machine learning research. What makes machine learning so successful? The answer lies in its core concept: a machine can learn from examples and experience. Before machine learning, machines were programmed with specific instructions and had no need to learn on their own. A machine (without machine learning) is born knowing exactly what it's supposed to do and how to do it, like a robot arm on an assembly line.
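To make the contrast concrete, here is a minimal sketch in pure Python. The "true" rule `y = 3x + 2` is an assumption chosen for illustration: a rule-based program would hard-code that formula, while a learning program recovers it from example (x, y) pairs alone, here via a simple least-squares fit.

```python
# Hypothetical "true" process: y = 3*x + 2.
# A programmed machine would be given this formula directly;
# a learning machine must estimate it from examples.
examples = [(x, 3 * x + 2) for x in range(10)]

# Ordinary least-squares fit of y = a*x + b from the examples.
n = len(examples)
sx = sum(x for x, _ in examples)
sy = sum(y for _, y in examples)
sxx = sum(x * x for x, _ in examples)
sxy = sum(x * y for x, y in examples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(a, b)  # the machine recovers slope 3 and intercept 2 from data alone
```

The same fitting code works unchanged if the examples come from a different linear process; only the data changes, not the program. That is the essence of learning from examples.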
I'm very excited to announce that /r/MachineLearning is trying a new AMA format in collaboration with the organizers of the Deep Learning Workshop at ICML 2016. In this year's ICML Deep Learning Workshop, we depart from previous years' formats and experiment with a completely new one. The workshop will be split into two sessions, each consisting of a set of invited talks followed by a panel discussion. By organizing the workshop in this manner, we aim to promote focused discussions that dive deep into important areas and to increase interaction between speakers and the audience. The second (afternoon) session of the workshop aims at answering the question "What does simulation-based learning bring to the table?" Under this broad theme, more specific questions may include "How transferable is the knowledge learned from a simulation to the real world?"
Thanks to cheaper and bigger storage, we have far more data than we had just a few years ago. We do owe our thanks to Big Data, no matter how much hype it has created. However, the real MVP here is faster and better computing, which has made papers from the 1980s and 90s relevant again (LSTMs were actually invented in 1997)! We are finally able to leverage the true power of neural networks and deep learning thanks to better and faster CPUs and GPUs. Whether we like it or not, traditional statistical and machine learning models have severe limitations on problems with high dimensionality, unstructured data, greater complexity, and large volumes of data.
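One well-known symptom of the high-dimensionality problem is distance concentration: as the number of features grows, the distances between random points become nearly indistinguishable, which undermines methods that rely on distance or density estimates. A small sketch in pure Python (the sample sizes and dimensions are arbitrary choices for illustration):

```python
import math
import random

random.seed(0)  # fixed seed so the experiment is reproducible

def relative_spread(dim, n=200):
    """Std/mean of distances from n random points in [0,1]^dim to the origin."""
    dists = []
    for _ in range(n):
        p = [random.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(c * c for c in p)))
    mean = sum(dists) / n
    var = sum((d - mean) ** 2 for d in dists) / n
    return math.sqrt(var) / mean

low = relative_spread(2)      # low-dimensional: distances vary a lot
high = relative_spread(1000)  # high-dimensional: distances concentrate
print(low, high)
```

The relative spread collapses as the dimension grows, so "near" and "far" lose meaning, one reason deep models that learn their own representations scale better to such data.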