Exploiting Tractable Substructures in Intractable Networks
Neural Information Processing Systems
We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher order interactions into a first-order hidden Markov model, treating the corrections (but not the first order structure) within mean field theory.
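For context, the baseline the paper refines is the standard fully factorized mean field approximation, in which each binary unit is summarized by an independent mean and the means are iterated to a fixed point. The sketch below illustrates that baseline (not the paper's structured refinement); the coupling matrix `W`, bias vector `b`, and the toy network are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_field(W, b, iters=50):
    """Fully factorized mean field for a binary network with pairwise
    couplings W and biases b: iterate mu_i = sigmoid(sum_j W_ij * mu_j + b_i).
    This treats every unit as an independent degree of freedom -- exactly
    the assumption the paper relaxes by keeping tractable substructures
    (e.g. a first-order HMM backbone) intact."""
    mu = np.full(len(b), 0.5)          # uninformative initialization
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(W @ mu + b)))  # fixed-point update
    return mu

# Hypothetical toy network: 3 units with symmetric couplings.
W = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0,  0.5],
              [-1.0, 0.5, 0.0]])
b = np.array([0.2, -0.1, 0.0])
print(mean_field(W, b))
```

In the paper's setting, the analogous computation would run exact inference (e.g. forward-backward) on the tractable substructure and apply mean field theory only to the weak higher-order corrections.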