bijection
- North America > Canada > Quebec > Montreal (0.04)
- North America > Canada > British Columbia (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > California > Los Angeles County (0.04)
- Europe > Poland (0.04)
Autoregressive Language Models are Secretly Energy-Based Models: Insights into the Lookahead Capabilities of Next-Token Prediction
Blondel, Mathieu, Sander, Michael E., Vivier-Ardisson, Germain, Liu, Tianlin, Roulet, Vincent
Autoregressive models (ARMs) currently constitute the dominant paradigm for large language models (LLMs). Energy-based models (EBMs) represent another class of models, which have historically been less prevalent in LLM development, yet naturally characterize the optimal policy in post-training alignment. In this paper, we provide a unified view of these two model classes. Taking the chain rule of probability as a starting point, we establish an explicit bijection between ARMs and EBMs in function space, which we show to correspond to a special case of the soft Bellman equation in maximum entropy reinforcement learning. Building upon this bijection, we derive the equivalence between supervised learning of ARMs and EBMs. Furthermore, we analyze the distillation of EBMs into ARMs by providing theoretical error bounds. Our results provide insights into the ability of ARMs to plan ahead, despite being based on the next-token prediction paradigm.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.88)
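The ARM/EBM correspondence the abstract describes can be sketched with standard notation; the symbols below (energy E, partition function Z, soft value V) are a conventional construction assumed for illustration, not taken from the paper itself:

```latex
% Chain-rule (ARM) factorization of a sequence distribution:
\[
  p(x) \;=\; \prod_{t=1}^{T} p(x_t \mid x_{<t}).
\]
% EBM over whole sequences, with energy E and partition function Z:
\[
  p(x) \;=\; \frac{\exp E(x)}{Z}, \qquad Z \;=\; \sum_{x'} \exp E(x').
\]
% Soft value of a prefix: the log-partition over all completions,
\[
  V(x_{\le t}) \;=\; \log \sum_{x_{>t}} \exp E(x_{\le t}, x_{>t}),
\]
% which satisfies a soft Bellman recursion,
\[
  V(x_{<t}) \;=\; \log \sum_{x_t} \exp V(x_{\le t}),
\]
% and induces ARM conditionals from the sequence-level energy:
\[
  p(x_t \mid x_{<t}) \;=\; \exp\!\big( V(x_{\le t}) - V(x_{<t}) \big).
\]
```

Under this construction, each next-token conditional depends on the log-partition over all futures, which is one way to read the abstract's claim that next-token prediction carries lookahead information.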
Even with AI, Bijection Discovery is Still Hard: The Opportunities and Challenges of OpenEvolve for Novel Bijection Construction
Brown, Davis, He, Jesse, Jenne, Helen, Kvinge, Henry, Vargas, Max
Evolutionary program synthesis systems such as AlphaEvolve, OpenEvolve, and ShinkaEvolve offer a new approach to AI-assisted mathematical discovery. These systems use teams of large language models (LLMs) to generate candidate solutions to a problem as human-readable code. These candidate solutions are then 'evolved' with the goal of improving them beyond what an LLM can produce in a single shot. While existing mathematical applications have mostly focused on problems of establishing bounds (e.g., sphere packing), the program synthesis approach is well suited to any problem where the solution takes the form of an explicit construction. With this in mind, in this paper we explore the use of OpenEvolve for combinatorial bijection discovery. We describe the results of applying OpenEvolve to three bijection construction problems involving Dyck paths, two of which are known and one of which is open. We find that while systems like OpenEvolve show promise as a valuable tool for combinatorialists, the problem of finding novel, research-level bijections remains a challenging task for current frontier systems, reinforcing the need for human mathematicians in the loop. We describe some lessons learned for others in the field interested in exploring the use of these systems.
- North America > United States > Pennsylvania (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > Canada > Alberta (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
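The evaluator side of such a system can be sketched in a few lines: enumerate all Dyck paths of a given semilength, apply a candidate map, and check whether it is a bijection onto the target set. The `reverse_flip` candidate below (the classical mirror involution on Dyck paths) is a toy stand-in for an LLM-generated construction; all names here are illustrative, not from the paper.

```python
from itertools import product

def dyck_paths(n):
    """All Dyck paths of semilength n, as strings of 'U'/'D' steps
    that never dip below the x-axis and end at height 0."""
    paths = []
    for steps in product("UD", repeat=2 * n):
        h = 0
        for s in steps:
            h += 1 if s == "U" else -1
            if h < 0:
                break
        else:
            if h == 0:
                paths.append("".join(steps))
    return paths

def is_bijection(f, domain, codomain):
    """Check that f maps domain injectively onto codomain."""
    image = [f(x) for x in domain]
    return len(set(image)) == len(domain) and set(image) == set(codomain)

# Toy candidate: reverse the path and swap U/D (mirror symmetry),
# a classical involution that sends Dyck paths to Dyck paths.
def reverse_flip(path):
    return "".join("U" if s == "D" else "D" for s in reversed(path))

paths = dyck_paths(4)
print(len(paths))                                # 14 (the Catalan number C_4)
print(is_bijection(reverse_flip, paths, paths))  # True
```

Brute-force verification over small semilengths like this gives a cheap fitness signal for evolution, even though it can never certify a bijection for all n.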
We thank all reviewers for spending their valuable time reviewing our paper. We now answer some specific questions in detail. The definition of "equivalent subMDPs" (Definition 2) requires that (1) there is a bijection between state spaces and (2) through which the subMDPs have the same transition/reward models. As discussed in the paper, (2) can be relaxed to similar transition/reward models. For the statistical efficiency results, this assumption could also be relaxed (e.g., if a ...); however, that is beyond the scope of this paper and we aim to address it in future work. We will add a more explicit discussion of the comparison to Mann et al. (2015). Theorem 1 in this paper is partially motivated by Osband et al. (2013); however, we consider a very different setting. Specifically, (1) Theorem 1 considers hierarchical structure while Osband et al. do not.
We thank the reviewers for their feedback and are glad that they found the paper to be clear, novel, and well motivated.
We will incorporate the answers and other feedback into the revised manuscript. The general quality of samples seems to be negatively impacted. We agree that other wavelets could potentially be interesting. SR is not claimed as our primary goal or contribution; rather, it is a fortuitous byproduct of the conditional structure that WF enables. A more thorough exploration of WF for SR is a promising direction for future work.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
Therefore, take an arbitrary k ∈ {0, 1, ..., n} ...

Proof of Lemma 2. By Lemma 1, we have λ(π ...

Proof of Lemma 3. We prove this lemma by backward induction on k. We want to show that our statement holds for k = K.

Proof of Theorem 2. First fix the underlying parameters of the RMJ-based ranking model. We break the rest of the proof into two parts: the "if" part and the "only if" part. We provide the proof details in Appendix B.2.

Lemma 4 (q̂ → q). We claim that it does not hold that π̂ → π^e almost surely as T → ∞. The rest of the proof consists of two steps.
- North America > Canada > Quebec > Montreal (0.04)
- North America > Canada > British Columbia (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)