I wonder what Arthur Koestler would think of Google. The Hungarian writer's 1967 book, The Ghost in the Machine, is an elegant takedown of Cartesian philosophy. Koestler believed the feeling of dualism arises from what he termed a holon: the mind is, simultaneously, a part and a whole. The brain, he argued, is the outcome of an array of forces, including the environment, habitual patterns, and language. In other words, its operations must be guided, on the one hand, by its own fixed canon of rules and, on the other, by inputs from a variable environment.
The network is trained simply to reproduce its input, and so can be seen as a nonlinear version of Kohonen's (1977) auto-associator. However, the input must pass through a narrow channel of hidden units, so the network must extract regularities from the input vectors during learning. Empirical analysis of the trained network showed that the weight vectors span the principal subspace of the image vectors, with some noise on the first principal component due to network nonlinearity (Cottrell & Munro, 1988).
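The idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the network of Cottrell & Munro (1988): an auto-associator whose input is squeezed through a narrow tanh hidden layer and trained by gradient descent to reproduce that input; the data dimensions, learning rate, and low-rank synthetic data are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 2                  # 8 inputs squeezed through 2 hidden units
# Synthetic data with low-rank structure, so a 2-unit bottleneck can capture it.
X = rng.normal(size=(200, n_in)) @ rng.normal(size=(n_in, 2)) @ rng.normal(size=(2, n_in))
X /= X.std()                           # keep tanh out of saturation

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))  # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))  # decoder weights
lr = 0.01

def forward(X, W1, W2):
    H = np.tanh(X @ W1)                # narrow channel of hidden units
    return H, H @ W2                   # reconstruction of the input

_, X_hat = forward(X, W1, W2)
err_before = np.mean((X - X_hat) ** 2)

for _ in range(500):
    H, X_hat = forward(X, W1, W2)
    d_out = 2 * (X_hat - X) / len(X)           # d(MSE)/d(output)
    d_hid = (d_out @ W2.T) * (1 - H ** 2)      # backprop through tanh
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hid

_, X_hat = forward(X, W1, W2)
err_after = np.mean((X - X_hat) ** 2)
```

Because the target is the input itself, the hidden layer is forced to encode whatever regularities make reconstruction possible, which for data like this means something close to its principal subspace.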
This problem can to some extent be avoided by stopping learning early. But how does one tell when to stop? One method is to partition the training patterns into two sets (assuming that there are enough of them). The larger part, say 80% of the patterns chosen at random, forms the training set, and the remaining 20% are referred to as the test set. Every now and again during training, one measures the performance of the current set of weights on the test set, and training is stopped when that performance begins to deteriorate, since further improvement on the training set then reflects fitting its idiosyncrasies rather than the underlying regularity.
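The procedure can be sketched as follows. The model (plain linear regression trained by gradient descent), the data, the check-every-10-epochs interval, and the 10% tolerance in the stopping rule are all assumptions chosen for illustration; only the 80/20 random split and the idea of halting when test-set error rises come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: noisy linear targets.
X = rng.normal(size=(100, 20))
true_w = rng.normal(size=20)
y = X @ true_w + rng.normal(scale=2.0, size=100)

# Partition the patterns: 80% training set, 20% test set, chosen at random.
idx = rng.permutation(100)
train, test = idx[:80], idx[80:]

w = np.zeros(20)
lr = 0.001
best_w, best_test_err = w.copy(), np.inf

for epoch in range(2000):
    # One gradient step on the training set (mean squared error).
    grad = 2 * X[train].T @ (X[train] @ w - y[train]) / len(train)
    w -= lr * grad
    # Every now and again, measure performance on the held-out test set.
    if epoch % 10 == 0:
        test_err = np.mean((X[test] @ w - y[test]) ** 2)
        if test_err < best_test_err:
            best_w, best_test_err = w.copy(), test_err
        elif test_err > 1.1 * best_test_err:
            # Test error has started to rise: stop and keep the best weights.
            break
```

Keeping a copy of the weights at the best test-set error, rather than the final weights, guards against the checks landing just after performance has turned around.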