Singh, Anand
Learning compositionally through attentive guidance
Hupkes, Dieuwke, Singh, Anand, Korrel, Kris, Kruszewski, German, Bruni, Elia
In this paper, we introduce Attentive Guidance (AG), a new mechanism to direct a sequence-to-sequence model equipped with attention towards more compositional solutions that generalise even when the training and testing distributions strongly diverge. We test AG on two tasks devised precisely to assess the compositional capabilities of neural models, and show that vanilla sequence-to-sequence models with attention overfit the training distribution, while the guided versions find compositional solutions that, in some cases, fit the training and testing distributions equally well. AG is a simple and intuitive method to provide a learning bias to a sequence-to-sequence model without adding extra components, and we believe it injects into the training process a component that is also present in human learning: guidance.
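A minimal sketch of how such a guidance signal could be combined with the standard sequence-to-sequence objective, assuming AG supervises the decoder's attention weights towards a target alignment pattern; the tensor shapes, function names, and the per-step index encoding of the alignment are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an attention-guidance term added to the usual seq2seq loss.
# Names, shapes, and the hard-alignment encoding are hypothetical.

import torch
import torch.nn.functional as F

def attentive_guidance_loss(attn_weights: torch.Tensor,
                            target_alignment: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between predicted attention and a target alignment.

    attn_weights:     (batch, tgt_len, src_len), rows sum to 1 (softmax output).
    target_alignment: (batch, tgt_len) long tensor giving, for each output step,
                      the index of the source token it should attend to.
    """
    src_len = attn_weights.size(-1)
    log_attn = torch.log(attn_weights + 1e-12).reshape(-1, src_len)
    return F.nll_loss(log_attn, target_alignment.reshape(-1))

def total_loss(logits, targets, attn_weights, target_alignment, guidance_weight=1.0):
    # Standard token-level cross-entropy plus the (assumed) guidance term.
    vocab = logits.size(-1)
    seq_loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    return seq_loss + guidance_weight * attentive_guidance_loss(attn_weights, target_alignment)
```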
Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models
Singh, Anand, Jolivet, Renaud, Magistretti, Pierre, Weber, Bruno
Sodium entry during an action potential determines the energy efficiency of a neuron. The classic Hodgkin-Huxley model of action potential generation is notoriously inefficient in that regard, with about 4 times more charge flowing through the membrane than the theoretical minimum required to achieve the observed depolarization. Yet recent experimental results show that mammalian neurons are close to optimal metabolic efficiency, and that the dynamics of their voltage-gated channels during the action potential differs significantly from that of the classic Hodgkin-Huxley model. Nevertheless, the original Hodgkin-Huxley model is still widely used, and rarely to model the squid giant axon from which it was derived. Here, we introduce a novel family of Hodgkin-Huxley models that correctly account for sodium entry and action potential width, and whose voltage-gated channels display dynamics very similar to the most recent experimental observations in mammalian neurons. We speak of a family of models because the model is parameterized by a single parameter whose variation reproduces the entire range of experimental observations, from cortical pyramidal neurons to Purkinje cells, yielding a very economical framework for modelling a wide range of different central neurons. The present paper demonstrates the performance and discusses the properties of this new family of models.
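For context, the standard Hodgkin-Huxley membrane and gating equations that the proposed family builds on are sketched below; the abstract does not specify the single free parameter of the new family, so only the classic textbook formulation is shown.

```latex
% Classic Hodgkin-Huxley membrane equation and first-order gating kinetics
% (standard form; the single free parameter of the new family is not given
%  in the abstract and is therefore not shown here).
C_m \frac{dV}{dt} =
    -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
    -\bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
    - g_L (V - E_L) + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x,
\quad x \in \{m, h, n\}.
```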