Reasoning with Statistics: Bayes' Theorem with Prior Probabilities
"Probabilistic models based on directed acyclic graphs have a long and rich tradition, beginning with work by the geneticist Sewall Wright in the 1920s. Variants have appeared in many fields. Within statistics, such models are known as directed graphical models; within cognitive science and artificial intelligence, such models are known as Bayesian networks. The name honors the Rev. Thomas Bayes (1702-1761), whose rule for updating probabilities in the light of new evidence is the foundation of the approach. The initial development of Bayesian networks in the late 1970s was motivated by the need to model the top-down (semantic) and bottom-up (perceptual) combination of evidence in reading. The capability for bidirectional inferences, combined with a rigorous probabilistic foundation, led to the rapid emergence of Bayesian networks as the method of choice for uncertain reasoning in AI and expert systems, replacing earlier, ad hoc rule-based schemes ..."
- from Judea Pearl and Stuart Russell, "Bayesian Networks." In Michael A. Arbib, Ed., The Handbook of Brain Theory and Neural Networks, 2nd edition, MIT Press, 2003.
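The rule Pearl and Russell credit to Bayes, P(H|E) = P(E|H) P(H) / P(E), can be shown in a minimal sketch. The diagnostic-test scenario and all numbers below (1% prior, 95% sensitivity, 5% false-positive rate) are hypothetical, chosen only to illustrate how a prior probability is updated by evidence:

```python
def posterior(prior, likelihood, likelihood_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).

    P(E) is expanded by the law of total probability over H and not-H.
    """
    evidence = likelihood * prior + likelihood_given_not_h * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical diagnostic test: 1% base rate of the condition,
# 95% chance of a positive test if present, 5% if absent.
p = posterior(prior=0.01, likelihood=0.95, likelihood_given_not_h=0.05)
print(round(p, 3))  # → 0.161
```

Even with a fairly accurate test, the low prior keeps the posterior around 16%, which is the bidirectional updating of belief that the quotation describes.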