Maximum Entropy

Connectionist Temporal Classification with Maximum Entropy Regularization

Hu Liu, Sheng Jin, Changshui Zhang

Neural Information Processing Systems

However, CTC tends to produce highly peaky and overconfident distributions, which is a symptom of overfitting. To remedy this, we propose a regularization method based on maximum conditional entropy which penalizes peaky distributions and encourages exploration.
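
The paper's maximum conditional entropy regularizer is computed over the feasible CTC alignment paths; the sketch below uses a simplified per-frame entropy penalty instead, to illustrate the idea of rewarding less peaky output distributions. The weight `beta` is an illustrative hyperparameter, not a value from the paper:

```python
import torch
import torch.nn.functional as F

def ctc_with_entropy_reg(log_probs, targets, input_lens, target_lens, beta=0.2):
    """CTC loss minus a per-frame entropy bonus that discourages peaky outputs.

    log_probs: (T, N, C) log-softmax outputs, as expected by F.ctc_loss.
    beta: illustrative regularization weight (an assumption, not from the paper).
    """
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens, blank=0)
    probs = log_probs.exp()
    # Mean per-frame entropy of the output distribution; subtracting it from the
    # loss rewards flatter (more exploratory) distributions.
    frame_entropy = -(probs * log_probs).sum(dim=-1).mean()
    return ctc - beta * frame_entropy
```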


Mind Your Entropy: From Maximum Entropy to Trajectory Entropy-Constrained RL

Guojian Zhan, Likun Wang, Pengcheng Wang, Feihong Zhang, Jingliang Duan, Masayoshi Tomizuka, Shengbo Eben Li

arXiv.org Machine Learning

Maximum entropy has become a mainstream off-policy reinforcement learning (RL) framework for balancing exploitation and exploration. However, two bottlenecks still limit further performance improvement: (1) non-stationary Q-value estimation caused by jointly injecting entropy and updating its weighting parameter, i.e., temperature; and (2) short-sighted local entropy tuning that adjusts temperature only according to the current single-step entropy, without considering the effect of cumulative entropy over time. In this paper, we extend the maximum entropy framework by proposing a trajectory entropy-constrained reinforcement learning (TECRL) framework to address these two challenges. Within this framework, we first learn two separate Q-functions, one associated with reward and the other with entropy, ensuring clean and stable value targets unaffected by temperature updates. Then, the dedicated entropy Q-function, explicitly quantifying the expected cumulative entropy, enables us to enforce a trajectory entropy constraint and consequently control the policy's long-term stochasticity. Building on this TECRL framework, we develop a practical off-policy algorithm, DSAC-E, by extending the state-of-the-art distributional soft actor-critic with three refinements (DSAC-T). Empirical results on the OpenAI Gym benchmark demonstrate that our DSAC-E achieves higher returns and better stability.
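
A minimal sketch of the two-critic idea described above (all names and the constant `entropy_budget` are illustrative assumptions, not from the paper): the reward critic's target never sees the temperature, and the temperature is tuned against the cumulative entropy predicted by the entropy critic rather than the current step's entropy:

```python
import torch

def critic_targets(reward, next_log_prob, next_q_r, next_q_h, gamma=0.99):
    """Separate Bellman targets for a reward critic Q_r and an entropy critic Q_h.

    Keeping entropy out of Q_r's target means temperature updates cannot
    perturb the reward value estimates (the non-stationarity described above).
    """
    target_r = reward + gamma * next_q_r          # pure reward return
    target_h = -next_log_prob + gamma * next_q_h  # expected cumulative entropy
    return target_r.detach(), target_h.detach()

def temperature_loss(log_alpha, q_h, entropy_budget):
    """Trajectory entropy constraint: gradient descent raises alpha when the
    predicted cumulative entropy Q_h falls below the budget, lowers it otherwise."""
    return (log_alpha.exp() * (q_h.detach() - entropy_budget)).mean()

def actor_loss(q_r, q_h, alpha):
    """Policy ascends reward value plus alpha-weighted cumulative entropy."""
    return (-(q_r + alpha * q_h)).mean()
```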



Distributional Policy Evaluation: a Maximum Entropy approach to Representation Learning

Neural Information Processing Systems

In Distributional Reinforcement Learning (D-RL) [Bellemare et al., 2023], an agent aims to estimate the entire distribution of returns, in contrast to classic Reinforcement Learning [Sutton and Barto, 2018], where the objective is to predict the expected return only. In Section 3, we answer this methodological question, showing that it is possible to reformulate Policy Evaluation in a distributional setting so that its performance index is explicitly intertwined with the representation of the (state or action) spaces.
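
For context, the distributional Bellman target that underlies D-RL can be sketched in a standard categorical (C51-style) parameterization; this is background, not the paper's specific reformulation, and all names are illustrative:

```python
import torch

def categorical_td_target(rewards, next_dist, support, gamma=0.99):
    """Project the distributional Bellman target r + gamma * Z onto a fixed support.

    rewards:   (N,)   batch of rewards
    next_dist: (N, K) probabilities over the K support atoms at the next state
    support:   (K,)   atom locations z_1 < ... < z_K
    """
    n, k = next_dist.shape
    v_min, v_max = support[0].item(), support[-1].item()
    dz = (v_max - v_min) / (k - 1)
    # Shifted and scaled atoms, clipped back onto the support range.
    tz = (rewards.unsqueeze(1) + gamma * support.unsqueeze(0)).clamp(v_min, v_max)
    b = (tz - v_min) / dz                 # fractional atom index of each target
    lo = b.floor().long()
    hi = b.ceil().long()
    # When b lands exactly on an atom (lo == hi), shift one index so no mass is lost.
    lo[(hi > 0) & (lo == hi)] -= 1
    hi[(lo == hi) & (lo < k - 1)] += 1
    target = torch.zeros_like(next_dist)
    # Split each atom's probability between its two neighboring support points.
    target.scatter_add_(1, lo, next_dist * (hi.float() - b))
    target.scatter_add_(1, hi, next_dist * (b - lo.float()))
    return target
```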




Deriving the Scaled-Dot-Product Function via Maximum Likelihood Estimation and Maximum Entropy Approach

Jiyong Ma

arXiv.org Artificial Intelligence

In this paper, we present a maximum likelihood estimation approach to determine the value vector in transformer models. We model the sequence of value vectors, key vectors, and the query vector as a sequence of Gaussian distributions. The variance in each Gaussian distribution depends on the time step, the corresponding key vector, and the query vector. The mean value in each Gaussian distribution depends on the time step and the corresponding value vector. This analysis may offer a new explanation of the scaled-dot-product function or softmax function used in transformer architectures [1]. Another explanation, inspired by [4], is based on the maximum entropy approach in natural language processing [5]. In this approach, a query vector and key vectors are used to derive the feature functions for the maximum entropy model.
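
For reference, the scaled-dot-product function being reinterpreted is the standard attention computation (a minimal sketch; the Gaussian maximum-likelihood derivation itself is the paper's contribution and is not reproduced here):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Standard scaled-dot-product attention: softmax(K q / sqrt(d)) . V.

    q: (d,) query vector; k, v: (T, d) key and value sequences.
    Under the paper's reading, the softmax weights play the role of the
    per-time-step weights arising when the value vector is estimated by
    maximum likelihood over time-step-dependent Gaussians.
    """
    d = q.shape[-1]
    scores = k @ q / math.sqrt(d)          # (T,) similarity of each key to the query
    weights = torch.softmax(scores, dim=-1)
    return weights @ v                     # convex combination of value vectors
```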


Hierarchical Maximum Entropy via the Renormalization Group

Amir R. Asadi

arXiv.org Artificial Intelligence

Hierarchical structures, which include multiple levels, are prevalent in statistical and machine-learning models as well as physical systems. Extending the foundational result that the maximum entropy distribution under mean constraints is given by the exponential Gibbs-Boltzmann form, we introduce the framework of "hierarchical maximum entropy" to address these multilevel models. We demonstrate that Pareto optimal distributions, which maximize entropies across all levels of hierarchical transformations, can be obtained via renormalization-group procedures from theoretical physics. This is achieved by formulating multilevel extensions of the Gibbs variational principle and the Donsker-Varadhan variational representation of entropy. Moreover, we explore settings with hierarchical invariances that significantly simplify the renormalization-group procedures, enhancing computational efficiency: quadratic modular loss functions, logarithmic loss functions, and nearest-neighbor loss functions. This is accomplished through the introduction of the concept of parameter flows, which serves as an analog to renormalization flows in renormalization group theory. This work connects ideas from probability theory, information theory, and statistical mechanics.
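
The foundational single-level result that the abstract extends can be stated compactly (this is the standard maximum entropy principle under mean constraints; the hierarchical, multilevel generalization is the paper's contribution):

```latex
% Maximizing entropy subject to mean constraints E[f_i(X)] = c_i yields the
% exponential (Gibbs-Boltzmann) family, with the multipliers lambda_i chosen
% so that the constraints hold:
\[
p^{*}(x)
  \;=\; \operatorname*{arg\,max}_{p \,:\, \mathbb{E}_p[f_i(X)] = c_i}
        \Big( -\sum_x p(x)\log p(x) \Big)
  \;=\; \frac{1}{Z(\lambda)} \exp\Big(\sum_i \lambda_i f_i(x)\Big),
\qquad
Z(\lambda) \;=\; \sum_x \exp\Big(\sum_i \lambda_i f_i(x)\Big).
\]
```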