neuron 1
Decoding specialised feature neurons in LLMs with the final projection layer
Large Language Models (LLMs) typically have billions of parameters and are thus often difficult to interpret in their operation. Such black-box models can pose a significant risk to safety when trusted to make important decisions. The lack of interpretability of LLMs is more related to their sheer size than to the complexity of their individual components. The TARS method for knowledge removal (Davies et al., 2024) provides strong evidence for the hypothesis that linear-layer weights which act directly on the residual stream may be highly correlated with different concepts encoded in the residual stream. Building upon this, we attempt to decode neuron weights directly into token probabilities through the final projection layer of the model (the LM-head). Firstly, we show that with Llama 3.1 8B we can utilise the LM-head to decode specialised feature neurons that respond strongly to certain concepts, with examples such as "dog" and "California". We then confirm this by demonstrating that these neurons can be clamped to affect the probability of the concept appearing in the output. This extends to the fine-tuned assistant model, Llama 3.1 8B Instruct, where we find that over 75% of neurons in the up-projection layers have the same top associated token as in the pretrained model. Finally, we demonstrate that clamping the "dog" neuron leads the instruct model to always discuss dogs when asked about its favourite animal. Through our method, it is possible to map the entirety of Llama 3.1 8B's up-projection neurons in less than 15 minutes with no parallelisation.
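To make the decoding step concrete, the sketch below pulls a single up-projection neuron's weight vector out of Llama 3.1 8B and pushes it through the LM-head to list its most associated tokens. This is a minimal sketch, not the authors' code: the checkpoint name and the layer and neuron indices are placeholders, and whether to pass the vector through the final RMSNorm before the LM-head is a modelling choice left out here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3.1-8B"   # assumed checkpoint name
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

layer, neuron = 20, 1337                      # hypothetical layer / neuron indices
# Row `neuron` of up_proj is the residual-stream direction this MLP neuron reads from.
w_in = model.model.layers[layer].mlp.up_proj.weight[neuron]    # (hidden_size,)

# Decode that direction into token space via the final projection layer (LM-head).
with torch.no_grad():
    logits = model.lm_head(w_in)                               # (vocab_size,)
top = torch.topk(logits, k=10)
print(tok.convert_ids_to_tokens(top.indices.tolist()))
```

Clamping a neuron, as described for the "dog" example, would amount to overwriting that neuron's activation with a fixed value during the forward pass (e.g., via a forward hook on the relevant MLP); that step is not shown here.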
- North America > United States > California (0.29)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay > Golden Gate (0.05)
Bio-Inspired Adaptive Neurons for Dynamic Weighting in Artificial Neural Networks
Islam, Ashhadul, Bouzerdoum, Abdesselam, Belhaouari, Samir Brahim
Traditional neural networks employ fixed weights during inference, limiting their ability to adapt to changing input conditions, unlike biological neurons that adjust signal strength dynamically based on stimuli. This discrepancy between artificial and biological neurons constrains neural network flexibility and adaptability. To bridge this gap, we propose a novel framework for adaptive neural networks, where neuron weights are modeled as functions of the input signal, allowing the network to adjust dynamically in real-time. Importantly, we achieve this within the same traditional architecture of an Artificial Neural Network, maintaining structural familiarity while introducing dynamic adaptability. In our research, we apply Chebyshev polynomials as one of the many possible decomposition methods to achieve this adaptive weighting mechanism, with polynomial coefficients learned during training. Out of the 145 datasets tested, our adaptive Chebyshev neural network demonstrated a marked improvement over an equivalent MLP in approximately 8% of cases, and performed strictly better on 121 datasets. In the remaining 24 datasets, the performance of our algorithm matched that of the MLP, highlighting its ability to generalize standard neural network behavior while offering enhanced adaptability. As a generalized form of the MLP, this model seamlessly retains MLP performance where needed while extending its capabilities to achieve superior accuracy across a wide range of complex tasks. These results underscore the potential of adaptive neurons to enhance generalization, flexibility, and robustness in neural networks, particularly in applications with dynamic or non-linear data dependencies.
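A minimal sketch of the adaptive-weighting idea, assuming weights that are degree-K Chebyshev functions of the corresponding input with learned coefficients; the class name, the tanh squashing used to keep inputs in the Chebyshev domain, and the initialisation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChebyshevAdaptiveLinear(nn.Module):
    """Linear layer whose weights are Chebyshev-polynomial functions of the input."""
    def __init__(self, in_features, out_features, degree=4):
        super().__init__()
        self.degree = degree
        # One learnable coefficient per (output, input, polynomial order).
        self.coeff = nn.Parameter(0.1 * torch.randn(out_features, in_features, degree + 1))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):                       # x: (batch, in_features)
        xt = torch.tanh(x)                      # squash into the Chebyshev domain [-1, 1]
        T = [torch.ones_like(xt)]               # T_0(x)
        if self.degree >= 1:
            T.append(xt)                        # T_1(x)
        for _ in range(2, self.degree + 1):
            T.append(2 * xt * T[-1] - T[-2])    # recurrence T_k = 2x T_{k-1} - T_{k-2}
        T = torch.stack(T, dim=-1)              # (batch, in_features, degree + 1)
        # Input-dependent weights w(x), then the usual weighted sum plus bias.
        w = torch.einsum("oik,bik->boi", self.coeff, T)
        return torch.einsum("boi,bi->bo", w, x) + self.bias
```

If only the T_0 coefficients are nonzero, the weights are constant and the layer reduces to a standard linear layer, which is one way to see how this construction generalizes the MLP.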
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.04)
- North America > United States > Wisconsin (0.04)
Adaptive stimulus selection for optimizing neural population responses
Benjamin Cowley, Ryan Williamson, Katerina Clemens, Matthew Smith, Byron M. Yu
Adaptive stimulus selection methods in neuroscience have primarily focused on maximizing the firing rate of a single recorded neuron. When recording from a population of neurons, it is usually not possible to find a single stimulus that maximizes the firing rates of all neurons. This motivates optimizing an objective function that takes into account the responses of all recorded neurons together. We propose "Adept," an adaptive stimulus selection method that can optimize population objective functions. In simulations, we first confirmed that population objective functions elicited more diverse stimulus responses than single-neuron objective functions. Then, we tested Adept in a closed-loop electrophysiological experiment in which population activity was recorded from macaque V4, a cortical area known for mid-level visual processing. To predict neural responses, we used the outputs of a deep convolutional neural network model as feature embeddings. Natural images chosen by Adept elicited mean neural responses that were 20% larger than those for randomly-chosen natural images, and also evoked a larger diversity of neural responses. Such adaptive stimulus selection methods can facilitate experiments that involve neurons far from the sensory periphery, for which it is often unclear which stimuli to present.
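The selection step could look roughly like the sketch below: predict each candidate image's population response from CNN feature embeddings of previously shown images, then choose the candidate whose predicted response is both large and far from responses already observed. The kernel-regression predictor and the magnitude-plus-novelty objective here are plausible stand-ins in the spirit of Adept, not reproductions of the paper's exact objective functions.

```python
import numpy as np

def choose_next_stimulus(cand_embed, shown_embed, shown_resp, lam=1.0, sigma=1.0):
    """Return the index of the candidate image to show next.

    cand_embed  : (C, D) CNN feature embeddings of candidate images
    shown_embed : (S, D) embeddings of images already shown
    shown_resp  : (S, N) recorded responses of N neurons to those images
    """
    # Kernel-weighted prediction of each candidate's population response.
    d2 = ((cand_embed[:, None, :] - shown_embed[None, :, :]) ** 2).sum(-1)   # (C, S)
    k = np.exp(-d2 / (2 * sigma ** 2))
    k /= k.sum(axis=1, keepdims=True) + 1e-12
    pred = k @ shown_resp                                                     # (C, N)

    magnitude = np.linalg.norm(pred, axis=1)          # drive the population strongly...
    novelty = np.min(                                  # ...and differently from before
        np.linalg.norm(pred[:, None, :] - shown_resp[None, :, :], axis=-1), axis=1
    )
    return int(np.argmax(magnitude + lam * novelty))
```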
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
An eight-neuron network for quadruped locomotion with hip-knee joint control
Liu, Yide, Liu, Xiyan, Wang, Dongqi, Yang, Wei, Qu, Shaoxing
The gait generator, which is capable of producing rhythmic signals for coordinating multiple joints, is an essential component in the quadruped robot locomotion control framework. The biological counterpart of the gait generator is the Central Pattern Generator (abbreviated as CPG), a small neural network consisting of interacting neurons. Inspired by this architecture, researchers have designed artificial neural networks composed of simulated neurons or oscillator equations. Despite the widespread application of these designed CPGs in various robot locomotion controls, some issues remain unaddressed, including: (1) Simplistic network designs often overlook the symmetry between signal and network structure, resulting in fewer gait patterns than those found in nature. (2) Due to minimal architectural consideration, quadruped control CPGs typically consist of only four neurons, which restricts the network's direct control to leg phases rather than joint coordination. (3) Gait changes are achieved by varying the neuron couplings or the assignment between neurons and legs, rather than through external stimulation. We apply symmetry theory to design an eight-neuron network, composed of Stein neuronal models, capable of achieving five gaits and coordinated control of the hip-knee joints. We validate the signal stability of this network as a gait generator through numerical simulations, which reveal various results and patterns encountered during gait transitions using neuronal stimulation. Based on these findings, we have developed several successful gait transition strategies through neuronal stimulations. Using a commercial quadruped robot model, we demonstrate the usability and feasibility of this network by implementing motion control and gait transitions.
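As a loose illustration of a gait generator for eight joints (four hips and four knees), the sketch below uses coupled phase oscillators that lock onto a prescribed set of phase offsets. The paper's network is instead built from Stein neuronal models with symmetry-derived couplings, so the trot offsets, frequency, and coupling gain here are illustrative assumptions only.

```python
import numpy as np

def cpg_step(phase, offsets, dt=0.01, freq=1.5, k=2.0):
    """One Euler step of an 8-oscillator phase CPG locking onto `offsets`."""
    rel_phase = phase[None, :] - phase[:, None]          # phase_j - phase_i
    rel_target = offsets[None, :] - offsets[:, None]     # desired differences
    dphi = 2 * np.pi * freq + k * np.sin(rel_phase - rel_target).sum(axis=1)
    return phase + dt * dphi

# Trot: diagonal legs in phase, each knee a quarter cycle behind its hip.
hips = np.array([0.0, np.pi, np.pi, 0.0])                # LF, RF, LH, RH
offsets = np.concatenate([hips, hips + np.pi / 2])       # 4 hips + 4 knees

phase = np.random.uniform(0, 2 * np.pi, 8)
for _ in range(2000):
    phase = cpg_step(phase, offsets)
joint_commands = np.sin(phase)   # rhythmic drive for the 8 joints
```

A gait transition in this toy setting would correspond to switching `offsets` or perturbing selected oscillators, loosely analogous to the external neuronal stimulations used in the paper.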
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- North America > United States > New York (0.04)
- North America > United States > Arizona > Maricopa County > Scottsdale (0.04)
Geometry and Dynamics of LayerNorm
A technical note aiming to offer deeper intuition for the LayerNorm function common in deep neural networks. LayerNorm is defined relative to a distinguished 'neural' basis, but it does more than just normalize the corresponding vector elements. Rather, it implements a composition -- of linear projection, nonlinear scaling, and then affine transformation -- on input activation vectors. We develop both a new mathematical expression and geometric intuition, to make the net effect more transparent. We emphasize that, when LayerNorm acts on an N-dimensional vector space, all outcomes of LayerNorm lie within the intersection of an (N-1)-dimensional hyperplane and the interior of an N-dimensional hyperellipsoid. This intersection is the interior of an (N-1)-dimensional hyperellipsoid, and typical inputs are mapped near its surface. We find the direction and length of the principal axes of this (N-1)-dimensional hyperellipsoid via the eigen-decomposition of a simply constructed matrix.
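The composition described in the note is easy to check numerically: project onto the zero-mean hyperplane, rescale onto a sphere of radius sqrt(N), then apply the elementwise affine map, and compare against LayerNorm computed directly. The variable names below are my own; eps is set to zero so the two expressions agree exactly.

```python
import numpy as np

def layernorm(x, gamma, beta, eps=0.0):
    return gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

N = 8
x = np.random.randn(N)
gamma, beta = np.random.randn(N), np.random.randn(N)

P = np.eye(N) - np.ones((N, N)) / N        # (1) linear projection: remove the mean
y = P @ x
y = np.sqrt(N) * y / np.linalg.norm(y)     # (2) nonlinear scaling onto a sphere of radius sqrt(N)
out = gamma * y + beta                     # (3) elementwise affine transformation

assert np.allclose(out, layernorm(x, gamma, beta))
```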
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > California > Alameda County > Berkeley (0.04)
A Beginner's Guide to Neural Nets
To give the NN some context, we will consider one of the original problems it was applied to - handwritten digit recognition [4]. This problem is often one of the first encountered by students of deep learning, as it is hard to solve with traditional machine learning methods. Imagine that we are tasked to write a computer program that can identify handwritten digits. Each image we receive will be 28 x 28 pixels, and we will also have access to the correct label for that image. The first thing we do is set up our neural network as in the figure below.
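Since the guide's figure is not reproduced here, a minimal stand-in for the kind of network it describes might look like the sketch below; the hidden-layer size and activation are illustrative choices, not taken from the guide.

```python
import torch.nn as nn

# 28 x 28 = 784 input pixels, one hidden layer, 10 outputs (one per digit).
net = nn.Sequential(
    nn.Flatten(),          # (batch, 28, 28) image -> (batch, 784) vector
    nn.Linear(784, 30),    # hidden layer (size chosen for illustration)
    nn.Sigmoid(),
    nn.Linear(30, 10),     # one output score per digit 0-9
)
```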
MachineX: The alphabets of Artificial Neural Network
In this blog, we will talk about neural networks, which are the foundation of deep learning and gave machine learning its edge in the current AI revolution. Let's go ahead and learn more about neural networks. A neuron is a computational unit which takes one or more inputs, does some calculations, and produces an output. The neuron shown in the figure above is the one we use in a neural network. It produces a result, which can be any continuous value from minus infinity to infinity.
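As a small, made-up numerical example of such a neuron, a weighted sum of the inputs plus a bias gives the output:

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias; with no activation function the
    # output can be any real value.
    return np.dot(inputs, weights) + bias

print(neuron([1.0, 2.0, 3.0], [0.4, -0.2, 0.1], bias=0.5))   # 0.8
```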
Single-trial estimation of stimulus and spike-history effects on time-varying ensemble spiking activity of multiple neurons: a simulation study
Neurons in cortical circuits exhibit coordinated spiking activity, and can produce correlated synchronous spikes during behavior and cognition. We recently developed a method for estimating the dynamics of correlated ensemble activity by combining a model of simultaneous neuronal interactions (e.g., a spin-glass model) with a state-space method (Shimazaki et al. 2012 PLoS Comput Biol 8 e1002385). This method allows us to estimate stimulus-evoked dynamics of neuronal interactions that are reproducible in repeated trials under identical experimental conditions. However, the method may not be suitable for detecting stimulus responses if the neuronal dynamics exhibit significant variability across trials. In addition, the previous model does not include effects of the neurons' past spiking activity on the current state of ensemble activity. In this study, we develop a parametric method for simultaneously estimating the stimulus and spike-history effects on the ensemble activity from single-trial data, even if the neurons exhibit dynamics that are largely unrelated to these effects. To this end, we model ensemble neuronal activity as a latent process and include the stimulus and spike-history effects as exogenous inputs to the latent process. We develop an expectation-maximization algorithm that simultaneously achieves estimation of the latent process, stimulus responses, and spike-history effects. The proposed method is useful for analyzing the interaction of internal cortical states and sensory-evoked activity.
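A toy generative version of the model described above might look like the following: a latent random walk drives ensemble spiking, with stimulus and spike-history terms entering as exogenous inputs. The dimensions, coefficients, and Bernoulli observation model are assumptions for illustration; fitting such a model to data would use the expectation-maximization procedure the abstract describes, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 200                                   # neurons, time bins

stim = (np.arange(T) % 50 < 10).astype(float)   # repeated stimulus pulses
b_stim = rng.normal(0.8, 0.2, N)                # per-neuron stimulus effect
b_hist = rng.normal(-1.0, 0.3, N)               # self-history (e.g., refractory) effect

theta = np.zeros((T, N))                        # latent process (log-odds of spiking)
spikes = np.zeros((T, N))
for t in range(1, T):
    theta[t] = theta[t - 1] + rng.normal(0, 0.1, N)            # latent random walk
    drive = theta[t] + b_stim * stim[t] + b_hist * spikes[t - 1] - 2.0
    spikes[t] = rng.binomial(1, 1.0 / (1.0 + np.exp(-drive)))  # Bernoulli spiking
```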