Quantum Feature Space of a Qubit Coupled to an Arbitrary Bath

Wise, Chris, Youssry, Akram, Peruzzo, Alberto, Plested, Jo, Woolley, Matt

arXiv.org Artificial Intelligence

Qubit control protocols have traditionally leveraged a characterisation of the qubit-bath coupling via its power spectral density. Previous work proposed the inference of noise operators that characterise the influence of a classical bath using a grey-box approach that combines deep neural networks with physics-encoded layers. This overall structure is complex and poses challenges in scaling and real-time operations. Here, we show that no expensive neural networks are needed and that this noise operator description admits an efficient parameterisation. We refer to the resulting parameter space as the quantum feature space of the qubit dynamics resulting from the coupled bath. We show that the Euclidean distance defined over the quantum feature space provides an effective method for classifying noise processes in the presence of a given set of controls. Using the quantum feature space as the input space for a simple machine learning algorithm (random forest, in this case), we demonstrate that it can effectively classify the stationarity and the broad class of noise processes perturbing a qubit. Finally, we explore how control pulse parameters map to the quantum feature space.
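As an illustration of the classification idea (not the authors' implementation), the sketch below labels synthetic "feature vectors" by Euclidean distance to class centroids; the feature dimension, cluster means, and spread are invented for the example, and a random forest would play the analogous role on the real quantum feature space.

```python
import numpy as np

# Synthetic stand-ins for quantum-feature-space vectors of two noise classes;
# the dimension, means, and spread below are invented for the example.
rng = np.random.default_rng(0)
dim, n_per_class = 8, 50
class_a = rng.normal(loc=0.0, scale=0.3, size=(n_per_class, dim))
class_b = rng.normal(loc=1.0, scale=0.3, size=(n_per_class, dim))

X = np.vstack([class_a, class_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Nearest-centroid rule under the Euclidean metric on the feature space
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(v):
    """Assign v to the class whose centroid is nearest in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centroids - v, axis=1)))

accuracy = float(np.mean([classify(v) == label for v, label in zip(X, y)]))
```

With well-separated clusters the centroid rule recovers the labels, which is the sense in which a Euclidean metric over the feature space suffices for classification.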




Supplementary Material: Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization

Neural Information Processing Systems

The dynamics of neural activity are described by a standard rate model. Energy landscapes were uniformly shifted throughout the manuscript by a constant. For each network with a different number of total embedded maps, 15 realizations were performed in which the permutations between the spatial maps were chosen independently and at random. Code availability: code is available at the public repository https://doi.org/10.5281/zenodo.10016179.


We are pleased that the different conceptual aspects of BoxE are clear, and that our experiments are

Neural Information Processing Systems

We thank the reviewers for their valuable and insightful feedback, and respond to their comments and questions below. Model expressivity and compression: the bound in Theorem 5.1 is a worst-case bound; higher-arity experiments are reported in Section 6.2, and model robustness is evaluated in Appendix H.1. The Adam optimizer and hyper-parameters (including negative samples) are listed in Table 6; we will mention this in the paper. Novelty of the model: BoxE is substantially different from any existing box model. We will make these differences more explicit in the paper.


Efficient Neural Networks with Discrete Cosine Transform Activations

Martinez-Gost, Marc, Pepe, Sara, Pérez-Neira, Ana, Lagunas, Miguel Ángel

arXiv.org Artificial Intelligence

In this paper, we extend our previous work on the Expressive Neural Network (ENN), a multilayer perceptron with adaptive activation functions parametrized using the Discrete Cosine Transform (DCT). Building upon previous work that demonstrated the strong expressiveness of ENNs with compact architectures, we now emphasize their efficiency, interpretability and pruning capabilities. The DCT-based parameterization provides a structured and decorrelated representation that reveals the functional role of each neuron and allows direct identification of redundant components. Leveraging this property, we propose an efficient pruning strategy that removes unnecessary DCT coefficients with negligible or no loss in performance. Experimental results across classification and implicit neural representation tasks confirm that ENNs achieve state-of-the-art accuracy while maintaining a low number of parameters. Furthermore, up to 40% of the activation coefficients can be safely pruned, thanks to the orthogonality and bounded nature of the DCT basis. Overall, these findings demonstrate that the ENN framework offers a principled integration of signal processing concepts into neural network design, achieving a balanced trade-off between expressiveness, compactness, and interpretability.
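A minimal numpy sketch of the pruning argument (the cosine basis layout and coefficient values are assumptions for illustration, not the paper's trained model): because the basis is orthogonal and bounded, zeroing small DCT coefficients perturbs the activation output by at most the sum of the dropped magnitudes.

```python
import numpy as np

# Hypothetical DCT-parametrized activation phi(x) = sum_k c_k cos(pi*k*(x+1)/2)
# on x in [-1, 1]; the coefficient values below are invented for illustration.
K = 16
k = np.arange(K)
c = np.zeros(K)
c[1], c[3] = 1.0, 0.25   # dominant coefficients carrying the function shape
c[8:] = 1e-4             # near-zero, redundant components

def activation(x, coeffs):
    # cosine basis evaluated on x mapped from [-1, 1] to [0, 1]
    basis = np.cos(np.pi * np.outer((x + 1) / 2, k))
    return basis @ coeffs

x = np.linspace(-1, 1, 200)
full = activation(x, c)

# Prune coefficients below a magnitude threshold and re-evaluate
pruned_c = np.where(np.abs(c) > 1e-3, c, 0.0)
pruned = activation(x, pruned_c)

# Bounded basis => worst-case deviation <= sum of dropped |c_k|
max_error = float(np.max(np.abs(full - pruned)))
frac_pruned = float(np.mean(pruned_c == 0))
```

Here the dropped magnitudes sum to under 1e-3, so the pruned activation is pointwise within that bound of the original, mirroring the "negligible or no loss" claim.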


Spike Frequency Adaptation Implements Anticipative Tracking in Continuous Attractor Neural Networks

Yuanyuan Mi, C. C. Alan Fung, K. Y. Michael Wong, Si Wu

Neural Information Processing Systems

To extract motion information, the brain needs to compensate for time delays that are ubiquitous in neural signal transmission and processing. Here we propose a simple yet effective mechanism to implement anticipative tracking in neural systems. The proposed mechanism utilizes the property of spike-frequency adaptation (SFA), a feature widely observed in neuronal responses. We employ continuous attractor neural networks (CANNs) as the model to describe the tracking behaviors in neural systems. Incorporating SFA, a CANN exhibits intrinsic mobility, manifested by the ability of the CANN to support self-sustained travelling waves. In tracking a moving stimulus, the interplay between the external drive and the intrinsic mobility of the network determines the tracking performance. Interestingly, we find that the regime of anticipation effectively coincides with the regime where the intrinsic speed of the travelling wave exceeds that of the external drive. Depending on the SFA amplitudes, the network can achieve either perfect tracking, with zero-lag to the input, or perfect anticipative tracking, with a constant leading time to the input. Our model successfully reproduces experimentally observed anticipative tracking behaviors, and sheds light on our understanding of how the brain processes motion information in a timely manner.
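A minimal 1D ring-CANN sketch with an SFA-like adaptation current (all parameter values are illustrative, not fitted to the paper): without adaptation the bump stays at its initial position, while with sufficiently strong adaptation the bump drifts, the analogue of the intrinsic mobility described above.

```python
import numpy as np

# Ring CANN with divisive inhibition and an SFA-like adaptation current v.
# All parameter values are illustrative, not fitted to the paper.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
a = 0.5                                                    # connectivity width
J = np.exp((np.cos(x[:, None] - x[None, :]) - 1) / a**2)   # translation-invariant

def bump_displacement(m, steps=2000, dt=0.05, tau=1.0, tau_v=5.0, k=0.05):
    """Total (unwrapped) drift of the bump centre for adaptation strength m."""
    u = 10.0 * np.exp((np.cos(x) - 1) / a**2)              # bump centred at 0
    v = m * 10.0 * np.exp((np.cos(x - 0.3) - 1) / a**2)    # slightly offset SFA
    centres = []
    for step in range(steps):
        u2 = np.maximum(u, 0.0) ** 2
        r = u2 / (1.0 + k * dx * u2.sum())                 # divisive inhibition
        u += dt * (-u + dx * (J @ r) - v) / tau
        v += dt * (-v + m * u) / tau_v
        if step % 10 == 0:
            centres.append(np.angle(np.sum(np.maximum(u, 0.0) * np.exp(1j * x))))
    traj = np.unwrap(np.array(centres))
    return float(traj[-1] - traj[0])
```

Calling `bump_displacement(0.0)` leaves the bump essentially in place, whereas a nonzero adaptation strength such as `bump_displacement(0.5)` makes the activity profile move, a toy version of the self-sustained travelling waves discussed in the abstract.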


Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments

Dane S. Corneil, Wulfram Gerstner

Neural Information Processing Systems

Rodents navigating in a well-known environment can rapidly learn and revisit observed reward locations, often after a single trial. While the mechanism for rapid path planning is unknown, the CA3 region in the hippocampus plays an important role, and emerging evidence suggests that place cell activity during hippocampal "preplay" periods may trace out future goal-directed trajectories. Here, we show how a particular mapping of space allows for the immediate generation of trajectories between arbitrary start and goal locations in an environment, based only on the mapped representation of the goal. We show that this representation can be implemented in a neural attractor network model, resulting in bump-like activity profiles resembling those of the CA3 region of hippocampus. Neurons tend to locally excite neurons with similar place field centers, while inhibiting other neurons with distant place field centers, such that stable bumps of activity can form at arbitrary locations in the environment. The network is initialized to represent a point in the environment, then weakly stimulated with an input corresponding to an arbitrary goal location. We show that the resulting activity can be interpreted as a gradient ascent on the value function induced by a reward at the goal location. Indeed, in networks with large place fields, we show that the network properties cause the bump to move smoothly from its initial location to the goal, around obstacles or walls. Our results illustrate that an attractor network with hippocampal-like attributes may be important for rapid path planning.
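The gradient-ascent reading has a simple discrete analogue (this sketch is illustrative, not the paper's network model): compute a value function that decays with path distance from the goal, then greedily climb it from any start state; obstacles are respected because the distance is geodesic. The maze layout and decay factor are invented for the example.

```python
from collections import deque

# 0 = free cell, 1 = wall; a small invented maze for illustration
maze = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
rows, cols = len(maze), len(maze[0])

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == 0:
            yield (nr, nc)

def value_function(goal, gamma=0.9):
    """Value decays with geodesic (BFS) path distance from the goal."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        cell = queue.popleft()
        for nb in neighbours(cell):
            if nb not in dist:
                dist[nb] = dist[cell] + 1
                queue.append(nb)
    return {cell: gamma ** d for cell, d in dist.items()}

def plan(start, goal):
    """Greedy ascent on the value function, a stand-in for bump motion."""
    value = value_function(goal)
    path = [start]
    while path[-1] != goal:
        path.append(max(neighbours(path[-1]), key=lambda nb: value.get(nb, 0.0)))
    return path
```

Because some neighbour of every free cell lies one step closer to the goal, greedy ascent always makes progress and traces a shortest path around the walls, loosely mirroring how the activity bump climbs the goal-induced value function.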