

Improving Compositional Generalization using Iterated Learning and Simplicial Embeddings

Neural Information Processing Systems

Compositional generalization, the ability of an agent to generalize to unseen combinations of latent factors, is easy for humans but hard for deep neural networks. A line of research in cognitive science has hypothesized a process, "iterated learning,"



Autoregressive Language Models are Secretly Energy-Based Models: Insights into the Lookahead Capabilities of Next-Token Prediction

Blondel, Mathieu, Sander, Michael E., Vivier-Ardisson, Germain, Liu, Tianlin, Roulet, Vincent

arXiv.org Machine Learning

Autoregressive models (ARMs) currently constitute the dominant paradigm for large language models (LLMs). Energy-based models (EBMs) represent another class of models, which have historically been less prevalent in LLM development, yet naturally characterize the optimal policy in post-training alignment. In this paper, we provide a unified view of these two model classes. Taking the chain rule of probability as a starting point, we establish an explicit bijection between ARMs and EBMs in function space, which we show to correspond to a special case of the soft Bellman equation in maximum entropy reinforcement learning. Building upon this bijection, we derive the equivalence between supervised learning of ARMs and EBMs. Furthermore, we analyze the distillation of EBMs into ARMs by providing theoretical error bounds. Our results provide insights into the ability of ARMs to plan ahead, despite being based on the next-token prediction paradigm.
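The bijection described in this abstract can be sketched as follows (the notation here is ours, not the paper's): write an energy-based model over complete sequences, define a soft value (log-partition) over prefixes, and observe that the induced next-token conditionals form a valid autoregressive model while the value function satisfies a soft Bellman recursion.

```latex
% Our notation, not the paper's. EBM over complete sequences:
%   p(x_{1:T}) \propto \exp(-E(x_{1:T}))
\begin{align}
V(x_{1:t}) &= \log \sum_{x_{t+1:T}} \exp\big(-E(x_{1:T})\big)
  && \text{soft value of a prefix} \\
p(x_{t+1} \mid x_{1:t}) &= \exp\big(V(x_{1:t+1}) - V(x_{1:t})\big)
  && \text{induced ARM conditional} \\
V(x_{1:t}) &= \log \sum_{x_{t+1}} \exp\big(V(x_{1:t+1})\big)
  && \text{soft Bellman recursion}
\end{align}
```

The conditionals are normalized precisely because of the recursion in the last line, and telescoping them through the chain rule recovers $\exp(-E)$ up to the partition constant, which is one way to see the function-space bijection the abstract refers to.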


Even with AI, Bijection Discovery is Still Hard: The Opportunities and Challenges of OpenEvolve for Novel Bijection Construction

Brown, Davis, He, Jesse, Jenne, Helen, Kvinge, Henry, Vargas, Max

arXiv.org Artificial Intelligence

Evolutionary program synthesis systems such as AlphaEvolve, OpenEvolve, and ShinkaEvolve offer a new approach to AI-assisted mathematical discovery. These systems utilize teams of large language models (LLMs) to generate candidate solutions to a problem as human-readable code. These candidate solutions are then 'evolved' with the goal of improving them beyond what an LLM can produce in a single shot. While existing mathematical applications have mostly focused on problems of establishing bounds (e.g., sphere packing), the program synthesis approach is well suited to any problem whose solution takes the form of an explicit construction. With this in mind, in this paper we explore the use of OpenEvolve for combinatorial bijection discovery. We describe the results of applying OpenEvolve to three bijection construction problems involving Dyck paths, two of which are known and one of which is open. We find that while systems like OpenEvolve show promise as a valuable tool for combinatorialists, finding novel, research-level bijections remains a challenging task for current frontier systems, reinforcing the need for human mathematicians in the loop. We describe some lessons learned for others in the field interested in exploring the use of these systems.
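As a concrete illustration of the kind of candidate-plus-fitness pair such an evolutionary loop evaluates, the sketch below enumerates Dyck paths, applies a toy candidate map (reverse-and-swap, a classical involution used here purely for illustration, not one of the paper's three problems), and scores how close the map comes to being a bijection:

```python
def dyck_paths(n):
    """All Dyck paths of semilength n, as strings over 'U' (up) and 'D' (down)."""
    paths = []

    def rec(path, up, down):
        if up == n and down == n:
            paths.append(''.join(path))
            return
        if up < n:                      # can still add an up-step
            rec(path + ['U'], up + 1, down)
        if down < up:                   # down-steps never dip below zero
            rec(path + ['D'], up, down + 1)

    rec([], 0, 0)
    return paths


def reverse_complement(p):
    """Toy candidate map: reverse the path and swap U/D steps (an involution)."""
    return ''.join('U' if c == 'D' else 'D' for c in reversed(p))


def bijection_score(candidate, n):
    """Fitness in [0, 1]: rewards images that are valid Dyck paths and distinct.

    Equals 1.0 exactly when the candidate is a bijection on Dyck paths of
    semilength n -- the kind of objective an evolutionary loop would maximize.
    """
    domain = dyck_paths(n)
    valid = set(domain)
    images = [candidate(p) for p in domain]
    ok = sum(1 for q in images if q in valid)
    distinct = len(set(images))
    return (ok + distinct) / (2 * len(domain))
```

In an OpenEvolve-style loop, the LLM would mutate the body of `reverse_complement` (the candidate construction) while `bijection_score` stays fixed as the evaluator; the names and scoring formula here are our own assumptions, not taken from the paper.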



that (1) there is a bijection between state spaces and (2) through which the subMDPs have the same transition/reward

Neural Information Processing Systems

We thank all reviewers for spending their valuable time reviewing our paper. We now answer some specific questions in detail. The definition of "equivalent subMDPs" (Definition 2) requires […]. As discussed in the paper, (2) can be relaxed to similar transition/reward models. For the statistical efficiency results, this assumption could be relaxed, e.g. if a […]. However, it is beyond the scope of this paper and we aim to address it in future work. We will add a more explicit discussion of the comparison to Mann et al. (2015). Theorem 1 in this paper is partially motivated by Osband et al. (2013); however, we consider a very different setting […]. Specifically, (1) Theorem 1 considers hierarchical structure while Osband et al. […]


We thank the reviewers for their feedback and are glad that they found the paper to be clear, novel, and well motivated.

Neural Information Processing Systems

We will incorporate these answers and other feedback into the revised manuscript. The general quality of samples seems to be negatively impacted. We agree that other wavelets could be interesting to explore. SR is not claimed as our primary goal/contribution; rather, it is a fortuitous byproduct of the conditional structure that WF enables. A more thorough exploration of WF for SR is a promising direction for future work.



145c28cd4b1df9b426990fd68045f4f7-Supplemental-Conference.pdf

Neural Information Processing Systems

Therefore, take an arbitrary k ∈ {0, 1, …, n} and k […]. Proof of Lemma 2. By Lemma 1, we have λ(π) […]. Proof of Lemma 3. We prove this lemma by backward induction on k. We want to show that our statement holds for k = K. Proof of Theorem 2. First fix the underlying parameters of the RMJ-based ranking model. We break the rest of the proof into two parts: the "if" part and the "only if" part. We provide the proof details in Appendix B.2. Lemma 4: q̂ → q […]. We claim that it does not hold that π̂ → π_e almost surely as T → ∞. The rest of the proof consists of two steps.