Quantum Maximum Entropy Inference and Hamiltonian Learning
Gao, Minbo, Ji, Zhengfeng, Wei, Fuchao
arXiv.org Artificial Intelligence
Maximum entropy inference is a widely used method in machine learning, particularly in the context of graphical models (McCallum et al., 2000; Kindermann & Snell, 1980; Ackley et al., 1985; Bresler, 2015; Hamilton et al., 2017) and natural language processing (Berger et al., 1996). In graphical models, it is known as the backward mapping: the problem of computing the model parameters from the marginal information (Wainwright & Jordan, 2007). The inverse problem, estimating the marginal parameters from the model parameters, is called the forward mapping. Maximum entropy inference is also a core concept in statistical physics (Jaynes, 1957), known as Jaynes' principle, which links statistical mechanics and information theory. The Hammersley-Clifford theorem establishes that, in the classical case, any positive probability distribution satisfying the local Markov property can be represented as a Gibbs distribution (Lafferty et al., 2001).
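The backward mapping described above can be sketched in the classical setting: given target marginal expectations, find Gibbs-model parameters θ whose distribution p(x) ∝ exp(θ·φ(x)) reproduces them. The sketch below, a minimal illustration with assumed feature functions and target marginals (not the authors' quantum algorithm), fits θ by gradient ascent on the maximum-entropy dual; each gradient evaluation is one forward mapping.

```python
import numpy as np

# Hedged sketch: classical maximum entropy inference ("backward mapping")
# for two binary variables with features phi(x) = (x1, x2, x1*x2).
# The feature choice and target marginals below are illustrative assumptions.

states = np.array([(x1, x2) for x1 in (0, 1) for x2 in (0, 1)])
phi = np.column_stack([states[:, 0], states[:, 1],
                       states[:, 0] * states[:, 1]])  # 4 states x 3 features

mu_target = np.array([0.6, 0.5, 0.4])  # desired marginals E[x1], E[x2], E[x1*x2]

theta = np.zeros(3)
for _ in range(5000):
    logits = phi @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()                     # Gibbs distribution p(x) ∝ exp(theta·phi(x))
    mu = p @ phi                     # forward mapping: current model expectations
    theta += 1.0 * (mu_target - mu)  # gradient ascent on the max-entropy dual

# The fitted Gibbs distribution now matches the target marginals.
assert np.allclose(p @ phi, mu_target, atol=1e-4)
```

The gradient of the dual (log-partition minus θ·μ) is exactly the gap between target and model expectations, which is why the simple fixed-step update converges here.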
Jul-16-2024