
Algorithmic gap



We thank the referees for their interest in our paper and for their valuable comments, which help us make the paper clearer.

Neural Information Processing Systems

We analyzed the multi-layer case beyond what is reported in the submitted paper. The equations for the optimal error in the multi-layer case are on pages 10-11 of the SM. The vertical lines show the PCA threshold and the optimal threshold, respectively. Our claims of optimality of AMP are indeed limited to the cases investigated numerically. We will add a statement collecting all the assumptions in the final version.


Reviews: The spiked matrix model with generative priors

Neural Information Processing Systems

This paper investigates the spiked matrix model under the assumption that the spiked vector comes from a generative model. In particular, a single-layer generative model with a linear or non-linear activation is considered. The authors study the phase transition governing when the underlying spiked vector can be recovered, and show that there is no algorithmic gap with generative-model priors, in contrast to the sparse-prior case. In addition, a new spectral method based on approximate message passing is proposed. The authors show that this algorithm can reach the statistically optimal threshold. In general, this manuscript is well-written.
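The setting described in the review can be sketched numerically. Below is a minimal, hypothetical illustration (not the authors' code): the spike is drawn from a single-layer generative prior with a ReLU activation, planted in a spiked Wigner matrix, and estimated with plain PCA (the leading eigenvector). All variable names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 500, 50   # spike dimension and latent dimension (illustrative)
snr = 3.0        # signal-to-noise ratio, above the PCA threshold snr = 1

# Single-layer generative prior: v* = relu(W z), rescaled so ||v*||^2 = n
W = rng.standard_normal((n, k)) / np.sqrt(k)
z = rng.standard_normal(k)
v_star = np.maximum(W @ z, 0.0)
v_star *= np.sqrt(n) / np.linalg.norm(v_star)

# Spiked Wigner observation: Y = sqrt(snr/n) v* v*^T + symmetric Gaussian noise
G = rng.standard_normal((n, n))
Y = np.sqrt(snr / n) * np.outer(v_star, v_star) + (G + G.T) / np.sqrt(2 * n)

# PCA estimate: leading eigenvector of Y (eigh returns eigenvalues ascending)
_, eigvecs = np.linalg.eigh(Y)
v_hat = eigvecs[:, -1] * np.sqrt(n)

# Normalized overlap with the planted spike, in [0, 1]
overlap = abs(v_hat @ v_star) / n
print(f"PCA overlap with the planted spike: {overlap:.2f}")
```

Above the spectral threshold the overlap is strictly positive; the spectral method based on AMP discussed in the paper is designed to improve on this vanilla PCA baseline and reach the statistically optimal threshold.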


Lost in Translation: The Algorithmic Gap Between LMs and the Brain

Tosato, Tommaso, Notsawo, Pascal Jr Tikeng, Helbling, Saskia, Rish, Irina, Dumas, Guillaume

arXiv.org Artificial Intelligence

Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear. This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis, emphasizing the importance of looking beyond input-output behavior to examine and compare the internal processes of these systems. We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models. Furthermore, we explore the role of scaling laws in bridging the gap between LMs and human cognition, highlighting the need for efficiency constraints analogous to those in biological systems. By developing LMs that more closely mimic brain function, we aim to advance both artificial intelligence and our understanding of human cognition.