Supervised Learning of Probability Distributions by Neural Networks
Neural Information Processing Systems
Abstract: We propose that the back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the 'error'.

In the past thirty years many researchers have studied the question of supervised learning in 'neural'-like networks. Recently a learning algorithm called 'back propagation' has been applied to a variety of such problems. In these applications, it would often be natural to consider imperfect, or probabilistic, information. The problem of supervised learning is to model some mapping between input vectors and output vectors presented to us by some real world phenomenon. To be specific, consider the question of medical diagnosis.
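The abstract's central idea, reading an output neuron's value as a probability and moving the weights up the gradient of the log likelihood rather than down an 'error' surface, can be sketched for a single sigmoid unit. This is a minimal illustrative sketch, not the paper's implementation: the toy two-feature task, the learning rate, and all names are assumptions.

```python
import math
import random

def sigmoid(z):
    # The neuron's output, read as a probability p(y=1 | x).
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000, seed=0):
    # Gradient ASCENT on the Bernoulli log likelihood
    # log L = y*log(p) + (1-y)*log(1-p), whose gradient w.r.t. the
    # weights is (y - p) * x -- contrast with squared-error descent,
    # whose gradient carries an extra p*(1-p) factor.
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = y - p  # ascent direction of the log likelihood
            w[0] += lr * g * x[0]
            w[1] += lr * g * x[1]
            b += lr * g
    return w, b

# Hypothetical toy data: output 1 iff either binary feature is present.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(preds)  # predicted probabilities thresholded at 0.5 -> [0, 1, 1, 1]
```

Because the p*(1-p) factor cancels in the likelihood gradient, learning does not stall when the unit saturates at a confidently wrong answer, which is one intuition behind the claimed efficiency gain.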
Dec-31-1988