Collaborating Authors

 Renner, Alpha


A Grid Cell-Inspired Structured Vector Algebra for Cognitive Maps

arXiv.org Artificial Intelligence

The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells. This system is well-studied in neuroscience, and its efficiency and versatility make it attractive for applications in robotics and machine learning. While continuous attractor networks (CANs) successfully model entorhinal grid cells for encoding physical space, integrating continuous spatial and abstract computations into a unified framework remains challenging. Here, we attempt to bridge this gap by proposing a mechanistic model for versatile information processing in the entorhinal-hippocampal formation inspired by CANs and Vector Symbolic Architectures (VSAs), a neuro-symbolic computing framework. The novel grid-cell VSA (GC-VSA) model employs a spatially structured encoding scheme with 3D neuronal modules mimicking the discrete scales and orientations of grid cell modules, reproducing their characteristic hexagonal receptive fields. In experiments, the model demonstrates versatility in spatial and abstract tasks: (1) accurate path integration for tracking locations, (2) spatio-temporal representation for querying object locations and temporal relations, and (3) symbolic reasoning using family trees as a structured test case for hierarchical relationships.
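The binding-based spatial code behind path integration can be illustrated with fractional power encoding (FPE), a standard VSA technique closely related to grid-like codes. The sketch below is illustrative only: the random phase vectors, dimension, and step values are assumptions, not the GC-VSA's actual 3D grid-cell modules. Because binding is an elementwise product of phasors, phases add, and a sequence of movement steps lands exactly on the encoding of the final position:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                    # hypervector dimension (assumed)
theta_x = rng.uniform(-np.pi, np.pi, N)    # random base phases for the x axis
theta_y = rng.uniform(-np.pi, np.pi, N)    # random base phases for the y axis

def encode(x, y):
    """Fractional power encoding of a 2D position as a complex phasor vector."""
    return np.exp(1j * (x * theta_x + y * theta_y))

# Path integration: binding (elementwise product) with each step's encoding
pos = encode(0.0, 0.0)
for dx, dy in [(1.0, 0.5), (-0.2, 0.3), (0.7, -0.1)]:
    pos = pos * encode(dx, dy)

# The integrated vector equals the direct encoding of the final position.
assert np.allclose(pos, encode(1.5, 0.7))

# Similarity with the true position (normalized complex inner product) is ~1.
sim = np.real(np.vdot(encode(1.5, 0.7), pos)) / N
assert sim > 0.99
```

Querying a location then amounts to comparing the integrated vector against candidate position encodings by this inner-product similarity.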


Distributed Representations Enable Robust Multi-Timescale Computation in Neuromorphic Hardware

arXiv.org Artificial Intelligence

Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we show how the distributed approach offered by vector symbolic architectures (VSAs), which uses high-dimensional random vectors as the smallest units of representation, can be leveraged to embed robust multi-timescale dynamics into attractor-based RSNNs. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms. The transition terms are formed by the VSA binding of an input and heteroassociative outer-products between states. Our approach is validated through simulations with highly non-ideal weights; an experimental closed-loop memristive hardware setup; and on Loihi 2, where it scales seamlessly to large state machines. This work demonstrates the effectiveness of VSA representations for embedding robust computation with recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. This advances VSAs as a high-level representation-invariant abstract language for cognitive algorithms in neuromorphic hardware.
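The superposition of a symmetric autoassociative matrix with asymmetric, input-keyed transition terms can be sketched in a minimal rate-based (non-spiking) form. Everything here is an illustrative assumption rather than the paper's RSNN implementation: the dimension, the two-state machine, and the gain of 2 on the transition term that lets a matching input override the attractor:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                   # vector dimension (assumed)
bipolar = lambda: rng.choice([-1, 1], N)

sA, sB = bipolar(), bipolar()              # two FSM states as random bipolar vectors
x1, x2 = bipolar(), bipolar()              # input symbols: x1 drives A->B, x2 drives B->A

# Symmetric autoassociative term: each state is a fixed-point attractor.
W_auto = (np.outer(sA, sA) + np.outer(sB, sB)) / N
# Asymmetric transition terms: target state keyed by binding of source state and input.
W_tran = (np.outer(sB, sA * x1) + np.outer(sA, sB * x2)) / N

def step(s, x=None):
    h = W_auto @ s
    if x is not None:
        # Binding the current state with the input only matches a stored key
        # (source-state-bound-with-input) for the intended transition.
        h = h + 2.0 * W_tran @ (s * x)
    return np.where(h >= 0, 1, -1)

assert np.array_equal(step(sA), sA)        # A is a fixed point with no input
assert np.array_equal(step(sA, x1), sB)    # x1 moves A -> B
assert np.array_equal(step(sB, x2), sA)    # x2 moves B -> A
assert np.array_equal(step(sA, x2), sA)    # non-matching input leaves the state alone
```

Because the stored keys are near-orthogonal random vectors, a non-matching input contributes only O(1/sqrt(N)) crosstalk and the attractor dynamics absorb it, which is the robustness property the paper exploits on non-ideal hardware.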


Neuromorphic Visual Scene Understanding with Resonator Networks

arXiv.org Artificial Intelligence

Understanding a visual scene by inferring identities and poses of its individual objects is still an open problem. Here we propose a neuromorphic solution that utilizes an efficient factorization network based on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vectors; (2) the design of Hierarchical Resonator Networks (HRN) to deal with the non-commutative nature of translation and rotation in visual scenes, when both are used in combination; (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued resonator networks on neuromorphic hardware. The VSA framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. The HRN enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. Our approach is demonstrated on synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes. A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics.
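The factorization step can be sketched as a basic complex-valued resonator network that recovers the two factors of a bound vector by alternating unbinding and codebook clean-up. Codebook sizes, the dimension, and the iteration count are illustrative assumptions, and this sketch omits the hierarchical (HRN) partitioning and the spiking phasor neurons:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1024, 4                              # vector dimension, codebook size (assumed)
phasor = lambda shape: np.exp(1j * rng.uniform(-np.pi, np.pi, shape))

A, B = phasor((N, M)), phasor((N, M))       # two codebooks (columns are phasor vectors)
i_true, j_true = 2, 1
s = A[:, i_true] * B[:, j_true]             # bound "scene" vector to factorize

def cleanup(C, v):
    """Project v onto the codebook's span, then renormalize to unit phasors."""
    u = C @ (C.conj().T @ v)
    return u / np.abs(u)

a_hat = cleanup(A, np.ones(N))              # start from a superposition-like guess
b_hat = cleanup(B, np.ones(N))
for _ in range(30):
    a_hat = cleanup(A, s * b_hat.conj())    # unbind current b estimate, clean up in A
    b_hat = cleanup(B, s * a_hat.conj())    # unbind current a estimate, clean up in B

i_dec = np.argmax(np.abs(A.conj().T @ a_hat))
j_dec = np.argmax(np.abs(B.conj().T @ b_hat))
assert (i_dec, j_dec) == (i_true, j_true)   # both factors recovered
```

In the paper's setting the codebooks would hold object templates and transformation (e.g. translation, rotation) vectors, so the recovered indices correspond to an object identity and its pose.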


NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking

arXiv.org Artificial Intelligence

The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and the industry, to define benchmarks for neuromorphic computing: NeuroBench. The goals of NeuroBench are to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions, and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.


Event-driven Vision and Control for UAVs on a Neuromorphic Chip

arXiv.org Artificial Intelligence

Event-based vision sensors achieve up to three orders of magnitude better speed vs. power consumption trade off in high-speed control of UAVs compared to conventional image sensors. Event-based cameras produce a sparse stream of events that can be processed more efficiently and with a lower latency than images, enabling ultra-fast vision-driven control. Here, we explore how an event-based vision algorithm can be implemented as a spiking neuronal network on a neuromorphic chip and used in a drone controller. We show how seamless integration of event-based perception on chip leads to even faster control rates and lower latency. In addition, we demonstrate how online adaptation of the SNN controller can be realised using on-chip learning. Our spiking neuronal network on chip is the first example of a neuromorphic vision-based controller solving a high-speed UAV control task. The excellent scalability of processing in neuromorphic hardware opens the possibility to solve more challenging visual tasks in the future and integrate visual perception in fast control loops.


The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware

arXiv.org Artificial Intelligence

Spike-based learning in plastic neuronal networks is playing increasingly key roles in both theoretical neuroscience and neuromorphic computing. The brain learns in part by modifying the synaptic strengths between neurons and neuronal populations. While specific synaptic plasticity or neuromodulatory mechanisms may vary in different brain regions, it is becoming clear that a significant level of dynamical coordination between disparate neuronal populations must exist, even within an individual neural circuit [1]. Classically, backpropagation (BP, and other learning algorithms) has been essential for supervised learning in artificial neural networks (ANNs). Although the question of whether or not BP operates in the brain is still an outstanding issue [2], BP does solve the problem of how a global objective function can be related to local synaptic modification in a network.

There is particular interest in deep learning, which is a central tool in modern machine learning. Deep learning relies on a layered, feedforward network similar to the early layers of the visual cortex, with threshold nonlinearities at each layer that resemble mean-field approximations of neuronal integrate-and-fire models. While feedforward networks are readily translated to neuromorphic hardware [6-8], the far more computationally intensive training of these networks 'on chip' has proven elusive, as the structure of backpropagation makes the algorithm notoriously difficult to implement in a neural circuit [9, 10]. A feasible neural implementation of the backpropagation algorithm has gained renewed scrutiny with the rise of new neuromorphic computational architectures that feature local synaptic plasticity [5, 11-13]. Because of the well-known difficulties, neuromorphic
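The point that BP relates a global objective to local synaptic modification can be seen in a minimal two-layer network: each weight update is the product of local presynaptic activity and an error signal propagated backward through the layer above. This numpy sketch is generic backpropagation with assumed toy dimensions, not the paper's spiking on-chip implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))               # inputs (toy regression task, assumed)
T = rng.normal(size=(32, 2))               # targets
W1 = 0.1 * rng.normal(size=(8, 16))
W2 = 0.1 * rng.normal(size=(16, 2))

lr, losses = 0.1, []
for _ in range(200):
    H = np.tanh(X @ W1)                    # hidden layer (threshold nonlinearity)
    Y = H @ W2                             # linear readout
    E = Y - T                              # error from the global objective
    losses.append(np.mean(E ** 2))
    dW2 = H.T @ E / len(X)                 # local: presynaptic activity x error
    dH = (E @ W2.T) * (1 - H ** 2)         # error propagated back through W2
    dW1 = X.T @ dH / len(X)                # again presynaptic activity x (local) error
    W2 -= lr * dW2
    W1 -= lr * dW1

assert losses[-1] < losses[0]              # global objective falls via local updates
```

The difficulty for neural circuits lies in the `E @ W2.T` step: it reuses the forward weights to route error backward, which is exactly the non-local structure the paper's neuromorphic implementation has to work around.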