Task adaption by biologically inspired stochastic comodulation

Gauthier Boeshertz, Caroline Haimerl, Cristina Savin

arXiv.org Artificial Intelligence 

Brain representations must strike a balance between generalizability and adaptability. Neural codes capture general statistical regularities in the world, while dynamically adjusting to reflect current goals. One aspect of this adaptation is the stochastic co-modulation of neurons' gains according to their task relevance; these fluctuations then propagate downstream to guide decision making. Here, we test the computational viability of such a scheme in the context of multi-task learning. We show that fine-tuning convolutional networks by stochastic gain modulation improves on deterministic gain modulation, achieving state-of-the-art results on the CelebA dataset. To better understand the mechanisms supporting this improvement, we explore how fine-tuning performance depends on network architecture using CIFAR-100. Overall, our results suggest that stochastic comodulation can enhance learning efficiency and performance in multi-task learning, without additional learnable parameters.

The perception of the same sensory stimulus changes with context. This perceptual adjustment reflects a natural trade-off between constructing reusable representations that capture the core statistical regularities of inputs and fine-tuning those representations for mastery of a specific task.
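To illustrate the core idea described above — stochastically co-modulating neurons' gains according to task relevance, with the fluctuations readable downstream — here is a minimal numpy sketch. It is not the authors' implementation: the population size, coupling strengths, Poisson spiking, and the assumption that the downstream readout has access to the shared modulator are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 8 neurons, of which the first 4 are "task relevant".
n_neurons, n_trials = 8, 5000
relevance = np.zeros(n_neurons)
relevance[:4] = 0.5  # coupling of relevant neurons to the shared modulator

base_rates = rng.uniform(5.0, 15.0, size=n_neurons)  # baseline firing rates

# Shared stochastic modulator m(t): one sample per trial.
m = rng.normal(0.0, 1.0, size=n_trials)

# Multiplicative gain g_i(t) = 1 + relevance_i * m(t), clipped to stay positive.
gains = np.clip(1.0 + np.outer(m, relevance), 0.1, None)

# Poisson spike counts under the modulated rates.
counts = rng.poisson(gains * base_rates)

# A downstream readout can recover task relevance by correlating each
# neuron's spike count with the modulator (assumed known to the readout here).
corr = np.array([np.corrcoef(counts[:, i], m)[0, 1] for i in range(n_neurons)])
```

Because the gain fluctuations are multiplicative and shared, only the relevant neurons co-vary with the modulator, so their correlation with it is large while that of irrelevant neurons stays near zero — a simple demonstration of how stochastic comodulation can tag task-relevant units without any learnable parameters.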