The oddest thing about Artificial Neural Networks is that they actually work, despite being based on a drastically simplified, arguably false model of the biological neuron. Why Artificial Neural Networks (ANNs) work remains a mystery. Understanding that "why" can in turn inform us about why real biological neurons work. We can make progress by identifying characteristics that are universal across the biological and the synthetic. One universality we can be certain of is that both biological and artificial neurons are pattern-matching machines.
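To make the pattern-matching claim concrete, here is a minimal sketch of a single artificial neuron. All names and values are my own illustration, not from any particular model: the weight vector acts as a stored "pattern," the dot product measures how closely an input matches that pattern, and the nonlinearity converts the match score into an activation.

```python
import numpy as np

def neuron(x, w, b):
    """Fire strongly when input x resembles the stored pattern w."""
    match = np.dot(w, x) + b             # similarity of input to pattern
    return 1.0 / (1.0 + np.exp(-match))  # sigmoid squashes score to (0, 1)

pattern = np.array([1.0, -1.0, 1.0])     # the pattern this neuron "detects"

close = neuron(np.array([0.9, -0.8, 1.1]), pattern, b=-1.0)   # resembles pattern
far   = neuron(np.array([-1.0, 1.0, -1.0]), pattern, b=-1.0)  # opposite of pattern

print(close > far)  # True: the matching input produces the larger activation
```

Nothing about this mechanism requires biological fidelity; it only requires that a unit respond more strongly to inputs resembling its stored pattern, which is a property both kinds of neuron share.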
What's not apparent to many practitioners and researchers in Deep Learning is that the rich variety of methods developed over the past several years are relevant to systems with different kinds of goals. Deep Learning arose from the Machine Learning community, so it is natural to think of DL networks as systems for performing predictions. Unfortunately, "prediction" is too broad a goal, and this leads to a lack of specificity about which methods are appropriate for fine-tuning a solution. What I mean is that you can cast almost any intelligent goal as that of making a prediction. To be successful, however, one has to understand at minimum what kind of prediction is being made; this leads to a more pragmatic understanding of whether the right tools are being used for the job.
At present, artificial intelligence in the form of machine learning is making impressive progress, especially in the field of deep learning (DL). Deep learning algorithms have been inspired from the beginning by nature, specifically by the human brain, in spite of our incomplete knowledge of its function. As has been discussed elsewhere, learning from nature is a two-way process: computing is learning from neuroscience, while neuroscience is quickly adopting information-processing models. The question is, what can inspiration from computational nature contribute to deep learning at this stage of its development, and to what extent can models and experiments in machine learning motivate, justify, and lead research in neuroscience and cognitive science, as well as practical applications of artificial intelligence?
First, let's explore the latest research on "The social and cultural roots of whale and dolphin brains," published in Nature. One of the unsolved problems of AGI research is our lack of understanding of the definition of "generalization." I've pointed this out in my previous writing. Most of the definitions created for "generalization" are incomplete: they are either too narrow or, even worse, incorrect.
How can we understand progress in Deep Learning without a map? I created one such map a couple of years ago, but it now needs a drastic overhaul. In "Five Capability Levels of Deep Learning Intelligence," I proposed a hierarchy of capabilities meant to track the progress of Deep Learning development. Specifically, you begin with a feed-forward network at the first level. That is followed by memory-enhanced networks, examples of which include the LSTM and the Neural Turing Machine (NTM).
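The jump from the first level to the second can be sketched in a few lines. This is a toy contrast, with illustrative names and shapes of my own choosing (not the actual LSTM or NTM equations): a level-one feed-forward layer is stateless, while a memory-enhanced cell carries state from one call to the next.

```python
import numpy as np

rng = np.random.default_rng(0)

def feedforward(x, W):
    """Stateless: the output depends only on the current input."""
    return np.tanh(W @ x)

class MemoryCell:
    """Minimal recurrent cell: output depends on input AND past state."""
    def __init__(self, W_x, W_h):
        self.W_x, self.W_h = W_x, W_h
        self.h = np.zeros(W_h.shape[0])   # persistent internal memory

    def step(self, x):
        self.h = np.tanh(self.W_x @ x + self.W_h @ self.h)
        return self.h

W = rng.normal(size=(3, 3))
cell = MemoryCell(rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
x = np.ones(3)

# Feed-forward: the same input always yields the same output.
print(np.allclose(feedforward(x, W), feedforward(x, W)))  # True

# Memory cell: the same input yields different outputs as state evolves.
h1, h2 = cell.step(x), cell.step(x)
print(np.allclose(h1, h2))  # typically False: the memory changed
```

That persistent state is what lets the second level of the hierarchy handle sequences and temporal structure that a purely feed-forward network cannot.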