Deep learning has provided the world of data science with highly effective tools that can address problems in virtually any domain, using nearly any kind of data. However, the non-intuitive features deduced and used by deep learning algorithms demand very careful experimental design, and a failure to meet that requirement can lead to severely flawed results, regardless of the quality of the data or the structure of the deep learning network.
How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into simpler building blocks. We show that participants prefer compositional over non-compositional function extrapolations, that samples from the human prior over functions are best described by a compositional model, and that people perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. We argue that the compositional nature of intuitive functions is consistent with broad principles of human cognition. Published at the Neural Information Processing Systems Conference.
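As a minimal sketch of what "compositional" means here (the building blocks and parameters below are illustrative assumptions, not the paper's actual stimuli): a complex function can be assembled from simpler parts, such as a linear trend plus a periodic component, and extrapolation then just reuses the same blocks beyond the observed range.

```python
import numpy as np

# Hypothetical building blocks for a compositional function.
def linear(x, slope=0.5, intercept=1.0):
    return slope * x + intercept

def periodic(x, amplitude=2.0, freq=1.0):
    return amplitude * np.sin(freq * x)

def compositional(x):
    # Complex structure decomposed into simpler parts: trend + oscillation.
    return linear(x) + periodic(x)

# Observed range...
x = np.linspace(0, 10, 200)
y = compositional(x)

# ...and extrapolation: the same building blocks apply unchanged.
x_new = np.linspace(10, 12, 40)
y_new = compositional(x_new)
```

A non-compositional alternative would have to model the whole curve at once, with no reusable parts to carry past the data.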
I believe almost all readers of this blog are already familiar with Deep Learning and Convolutional Neural Networks (CNNs)... so here I will give just a brief overview. A CNN is a variant of deep learning, well known for its excellent performance in image recognition. In particular, after a CNN won ILSVRC 2012, CNNs have become more and more popular in image recognition. The most recent high-profile success of CNNs, I believe, is AlphaGo. Indeed, many implementations of CNNs are already available as libraries and packages.
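To make the core idea concrete without pulling in any of those libraries, here is a sketch of the single operation a CNN layer is built on, a 2D convolution (strictly, cross-correlation), written in plain NumPy; the edge-detector kernel and the image sizes are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and take a weighted sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple vertical-edge detector kernel (illustrative choice).
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)

image = np.random.rand(8, 8)
feature_map = conv2d(image, edge_kernel)  # shape (6, 6)
```

Library implementations do exactly this, but vectorized, batched, with many kernels per layer, and with the kernel weights learned from data rather than hand-picked.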
Reinforcement learning (RL) practitioners have produced a number of excellent tutorials. Most, however, describe RL in terms of mathematical equations and abstract diagrams. We like to think of the field from a different perspective. RL itself is inspired by how animals learn, so why not translate the underlying RL machinery back into the natural phenomena it is designed to mimic? Humans learn best through stories.