Visualizing MNIST: An Exploration of Dimensionality Reduction - colah's blog

#artificialintelligence

At some fundamental level, no one understands machine learning. It isn't a matter of things being too complicated. Almost everything we do is fundamentally very simple. Unfortunately, an innate human handicap interferes with us understanding these simple things. Humans evolved to reason fluidly about two and three dimensions. With some effort, we may think in four dimensions. Machine learning often demands we work with thousands of dimensions – or tens of thousands, or millions! Even very simple things become hard to understand when you do them in very high numbers of dimensions. Reasoning directly about these high dimensional spaces is just short of hopeless.
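To make the excerpt concrete, here is a minimal sketch (not code from the post) of reducing high-dimensional digit images to two dimensions so they can be plotted. It assumes scikit-learn and matplotlib are installed, and it uses scikit-learn's small 8x8 digits set as a stand-in for MNIST to keep it quick.

# Illustrative sketch: project 64-dimensional digit images down to 2D
# with PCA and t-SNE so the data can be drawn on a flat plot.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

digits = load_digits()              # 1797 images, each 8x8 pixels = 64 dimensions
X, y = digits.data, digits.target

# Linear projection: keep the two directions of greatest variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Nonlinear projection: t-SNE tries to preserve local neighborhoods.
X_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, emb, title in [(axes[0], X_pca, "PCA"), (axes[1], X_tsne, "t-SNE")]:
    ax.scatter(emb[:, 0], emb[:, 1], c=y, cmap="tab10", s=8)
    ax.set_title(title)
plt.show()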


Deep Learning, NLP, and Representations - colah's blog

#artificialintelligence

In the last few years, deep neural networks have dominated pattern recognition. They blew the previous state of the art out of the water for many computer vision tasks. Voice recognition is also moving that way. But despite the results, we have to wonder… why do they work so well? This post reviews deep learning applied to natural language processing; in doing so, I hope to make accessible one promising answer as to why deep neural networks work. I think it's a very elegant perspective. A neural network with a hidden layer has universality: given enough hidden units, it can approximate any function. This is a frequently quoted – and even more frequently, misunderstood and applied – theorem.
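As a rough illustration of that universality claim (a sketch under my own assumptions, not code from the post), the snippet below fits a network with a single hidden layer of tanh units to samples of sin(x) using scikit-learn; using more hidden units generally tightens the fit.

# Minimal sketch of the universality idea: one hidden layer of tanh units
# learns to approximate sin(x) on [-pi, pi]. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(500, 1))   # training inputs
y = np.sin(X).ravel()                           # target function to approximate

# A single hidden layer with 50 units; more units -> closer approximation.
net = MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X, y)

X_test = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
error = np.max(np.abs(net.predict(X_test) - np.sin(X_test).ravel()))
print(f"max absolute error on [-pi, pi]: {error:.3f}")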

