Generalisation of structural knowledge in the hippocampal-entorhinal system

Neural Information Processing Systems

A central problem in understanding intelligence is generalisation: the ability to exploit previously learnt structure to solve tasks in novel situations that differ only in their particulars. We take inspiration from neuroscience, specifically the hippocampal-entorhinal system, which is known to be important for generalisation. We propose that to generalise structural knowledge, representations of the structure of the world, i.e. how entities in the world relate to each other, need to be separated from representations of the entities themselves. We show that, under these principles, artificial neural networks embedded with hierarchy and fast Hebbian memory can learn the statistics of memories and generalise structural knowledge. Spatial neuronal representations mirroring those found in the brain emerge, suggesting spatial cognition is an instance of more general organising principles. We further unify many entorhinal cell types as basis functions for constructing transition graphs, and show these representations effectively utilise memories. We experimentally support the model's assumptions, showing a preserved relationship between entorhinal grid and hippocampal place cells across environments.
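The "fast Hebbian memory" mentioned above can be illustrated with a toy associative memory (this is an illustrative sketch, not the paper's model): separate "structural" codes (keys) are bound to "sensory" codes (values) by one-shot outer-product updates, and querying with a structural code retrieves the bound sensory code. All sizes and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # code dimensionality (illustrative)

# Five orthonormal "structural" keys and five random "sensory" values.
keys = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :5].T
values = rng.standard_normal((5, d))

# Fast Hebbian storage: one outer-product update per memory, no iterative training.
W = np.zeros((d, d))
for k, v in zip(keys, values):
    W += np.outer(v, k)

# Retrieval: querying with a structural key recovers its bound sensory value,
# since the keys are mutually orthogonal (cross-terms cancel).
recalled = W @ keys[2]
print(np.allclose(recalled, values[2]))  # True
```

The separation the abstract argues for shows up here as the key/value split: the same key matrix could be reused in a new environment with fresh values, which is one way to read "generalising structural knowledge".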



Learning via Wasserstein-Based High Probability Generalisation Bounds

Neural Information Processing Systems

The authors contributed equally to this work. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). Developing upper bounds on the generalisation gap, i.e. generalisation bounds, has been a longstanding topic in statistical learning.
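The generalisation gap this line refers to has a standard definition (not specific to this paper): the population risk of the learnt hypothesis minus its empirical risk on the training sample.

```latex
% Generalisation gap of a hypothesis W learnt from sample S = (z_1, ..., z_n),
% for a loss function \ell and data distribution \mathcal{D}:
\mathrm{gen}(W, S) \;=\; \mathbb{E}_{Z \sim \mathcal{D}}\big[\ell(W, Z)\big]
\;-\; \frac{1}{n} \sum_{i=1}^{n} \ell(W, z_i)
```

Wasserstein-based bounds control this quantity via transport distances between distributions over hypotheses; the connection is natural because, by Kantorovich-Rubinstein duality, the 1-Wasserstein distance between two distributions equals the largest gap in expectation of any 1-Lipschitz function under them.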



Self-Supervised Generalisation with Meta Auxiliary Learning

Shikun Liu, Andrew Davison, Edward Johns

Neural Information Processing Systems

We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. Source code can be found at https://github.com/lorenmt/maxl.




The MAGICAL Benchmark for Robust Imitation

Neural Information Processing Systems

The robot could learn from these demonstrations to complete the tasks autonomously. For IL algorithms to be useful, however, they must be able to learn how to perform tasks from a few demonstrations. A domestic robot wouldn't be very helpful if it required thirty demonstrations before it figured out that you are deliberately washing your purple cravat.