Intelligence in this "clean room" environment can be defined with respect to accumulated systematic methods of reasoning. A logic proof system, for instance, can be defined as more efficient at solving a mathematical task than a comparable human. The more civilization transitions into a virtualized world, the more likely humans are to encounter synthetic minds that are 'more intelligent'. Classical definitions of computation tend to favor sequential processes; a consequence is that more natural parallel processes, such as evolution, tend to be ignored.
Humans by their nature have many cognitive biases. This can become detrimental to real scientific progress. Research tends to be biased in favor of approaches in which many experts have invested countless years of study. The consequence is that we ignore many intrinsic characteristics found in the very system under study. Thus researchers can unfortunately consume a lifetime pursuing a wrong and pointless path. History is littered with research that in hindsight was discovered to be incorrect and therefore worthless.
A generating process can conjure up sufficient complexity that it cannot be predicted from the bulk statistics that are observed. The algorithm to conjure up this deceptive distribution is very simple. Take an existing dataset, perturb it slightly, and continue to maintain specific statistical properties. This is done by randomly selecting a point, adding a small perturbation, and then validating that the statistics remain within targeted bounds. Repeat this perturbation enough times and you can steer toward very different results.
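The loop described above can be sketched as follows. This is a minimal, hypothetical illustration (the function name, tolerances, and step counts are assumptions, not from the original text): each iteration perturbs one randomly chosen point and rejects the move if the mean or standard deviation drifts outside the targeted bounds, so the bulk statistics stay fixed while the data's actual shape wanders.

```python
import random
import statistics

def perturb_preserving_stats(data, steps=20000, tol=0.05, scale=0.1, seed=0):
    """Randomly perturb points while keeping mean/stdev within tol
    of their original values (a sketch of the algorithm in the text)."""
    rng = random.Random(seed)
    target_mean = statistics.mean(data)
    target_std = statistics.stdev(data)
    data = list(data)
    for _ in range(steps):
        i = rng.randrange(len(data))       # randomly select a point
        old = data[i]
        data[i] = old + rng.gauss(0, scale)  # add a small perturbation
        # validate: are the statistics still within the targeted bounds?
        if (abs(statistics.mean(data) - target_mean) > tol or
                abs(statistics.stdev(data) - target_std) > tol):
            data[i] = old                  # reject the move
    return data

original = [float(x) for x in range(20)]
drifted = perturb_preserving_stats(original)
```

After enough iterations `drifted` has essentially the same mean and standard deviation as `original`, yet the individual values (and hence any finer structure) have changed; by adding a bias toward some target shape in the acceptance step, one could steer the data toward "different results" as the text suggests.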
There is a natural evolution from the ideas that deep learning has empirically revealed to a theory of general intelligence. A common criticism of deep learning is its lack of good theory. Deep learning is like the supercolliders of high energy physics: it reveals the inner behavior of an artificial intuitive process, showing us patterns of what does work. To build up that theory we must walk back into the ideas of past thinkers, thinkers who never saw this empirical evidence. What would they have concluded about their ideas had they been exposed to the evidence from deep learning?
At present, artificial intelligence in the form of machine learning is making impressive progress, especially in the field of deep learning (DL). Deep learning algorithms have been inspired from the beginning by nature, specifically by the human brain, in spite of our incomplete knowledge of brain function. Learning from nature is a two-way process, as has been discussed: computing is learning from neuroscience, while neuroscience is quickly adopting information-processing models. The question is what the inspiration from computational nature, at this stage of development, can contribute to deep learning, and how much models and experiments in machine learning can motivate, justify, and lead research in neuroscience and cognitive science, as well as practical applications of artificial intelligence.