

Energy Consumption Analysis Details

Neural Information Processing Systems

The spike firing rate is defined as the proportion of non-zero elements in the spike tensor. In Table S1, we present the spike firing rates for all spiking tensors in Spike-driven Transformer-8-512. SNNs are theoretically more energy-efficient than their ANN counterparts. We employ two types of datasets: static image classification and neuromorphic classification. ImageNet-1K is the most typical static image dataset and is widely used in image classification.
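The firing-rate definition above is straightforward to compute; a minimal NumPy sketch (the tensor shape and values are illustrative, not taken from the paper's Table S1):

```python
import numpy as np

def spike_firing_rate(spike_tensor: np.ndarray) -> float:
    """Proportion of non-zero elements in a spike tensor."""
    return float(np.count_nonzero(spike_tensor) / spike_tensor.size)

# Toy binary spike tensor: 4 timesteps x 8 neurons, 7 of 32 entries firing
spikes = np.zeros((4, 8))
spikes[0, :2] = 1
spikes[2, 3:] = 1
rate = spike_firing_rate(spikes)  # 7/32 = 0.21875
```

A lower rate means fewer accumulate operations are triggered in a spike-driven layer, which is where the claimed energy advantage over ANNs comes from.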


A Technical Proofs. Proof of Proposition 4.1. Using the chain rule, (1), and the definitions of null

Neural Information Processing Systems

This appendix presents the technical details of efficiently implementing Algorithm 2. B.1 Computing Intermediate Quantities: We argue that in the setting of neural networks, Algorithm 2 can obtain the intermediate quantities ζ. Algorithm 3 gives a subroutine for computing the necessary scalars used in the efficient squared-norm function of the embedding layer. Algorithm 3: Computing the Nonzero Values of n. In the former case, it is straightforward to see that we incur a compute (resp. F.1 Effect of Batch Size on Fully-Connected Layers: Figure 4 presents numerical results for the same set of experiments as in Subsection 5.1, but for different batch sizes |B| instead of the output dimension q. Similar to Subsection 5.1, the results in Figure 4 are more favorable towards Adjoint compared to GhostClip.




Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study

Neural Information Processing Systems

Neural tangent kernel (NTK) is a powerful tool to analyze training dynamics of neural networks and their generalization bounds. The study on NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp), e.g., StyleGAN and polynomial neural networks (PNNs). In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks. We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK. Based on our results, we elucidate the separation of PNNs over standard neural networks with respect to extrapolation and spectral bias. Our two key insights are that when compared to standard neural networks, PNNs can fit more complicated functions in the extrapolation regime and admit a slower eigenvalue decay of the respective NTK, leading to a faster learning towards high-frequency functions. Besides, our theoretical results can be extended to other types of NNs-Hp, which expand the scope of our work. Our empirical results validate the separations in broader classes of NNs-Hp, which provide a good justification for a deeper understanding of neural architectures.
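To make the architecture class concrete, a single Hadamard-product block of the kind studied in polynomial neural networks can be sketched as below; the (Ux) ⊙ (Vx) form and all shapes are a simplified degree-2 illustration, not the paper's exact parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)

def hadamard_block(x, U, V):
    """Degree-raising block: (U @ x) * (V @ x).
    No elementwise activation appears, yet every output coordinate is a
    degree-2 polynomial in x; stacking such blocks multiplies the degree."""
    return (U @ x) * (V @ x)

d, h = 4, 6
U = rng.normal(size=(h, d))
V = rng.normal(size=(h, d))
x = rng.normal(size=d)
z = hadamard_block(x, U, V)

# Degree-2 homogeneity: scaling the input by t scales the output by t**2
assert np.allclose(hadamard_block(2 * x, U, V), 4 * z)
```

This multiplicative structure, rather than a pointwise nonlinearity, is what distinguishes the NTK analysis of NNs-Hp from that of standard networks.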


Memory-DD: A Low-Complexity Dendrite-Inspired Neuron for Temporal Prediction Tasks

Yang, Dongjian, Li, Xiaoyuan, Xi, Chuanmei, Sun, Ye, Liu, Gang

arXiv.org Artificial Intelligence

Abstract--Dendrite-inspired neurons have been widely used in tasks such as image classification due to their low computational complexity and fast inference speed. Temporal data prediction, a core machine learning task, plays a key role in real-time scenarios such as sensor data analysis, financial forecasting, and urban traffic management. However, existing dendrite-inspired neurons are mainly designed for static data: studies on capturing dynamic features and modeling long-term dependencies in temporal sequences remain limited, and efficient architectures specifically designed for temporal sequence prediction are still lacking. In this paper, we propose Memory-DD, a low-complexity dendrite-inspired neuron model. Memory-DD consists of two dendrite-inspired neuron groups that contain no nonlinear activation functions but can still realize nonlinear mappings. Compared with traditional neurons without dendritic functions, Memory-DD requires only two neuron groups to extract logical relationships between features in input sequences. This design effectively captures temporal dependencies and is suitable for both classification and regression tasks on sequence data. Experimental results show that Memory-DD achieves an average accuracy of 89.41% on 18 temporal classification benchmark datasets, outperforming LSTM by 4.25%. On 9 temporal regression datasets, it reaches performance comparable to LSTM while using only 50% of the parameters and reducing computational complexity (FLOPs) by 27.7%. These results demonstrate that Memory-DD successfully extends the low-complexity advantages of dendrite-inspired neurons to temporal prediction, providing an efficient solution for time-series data processing. With the rapid development of information technology, massive temporal sequence data have become the foundation of modern society, ranging from industrial IoT to financial markets.


MedFuse: Multiplicative Embedding Fusion For Irregular Clinical Time Series

Hsieh, Yi-Hsien, Chien, Ta-Jung, Huang, Chun-Kai, Sun, Shao-Hua, Lin, Che

arXiv.org Artificial Intelligence

Clinical time series derived from electronic health records (EHRs) are inherently irregular, with asynchronous sampling, missing values, and heterogeneous feature dynamics. While numerical laboratory measurements are highly informative, existing embedding strategies usually combine feature identity and value embeddings through additive operations, which constrains their ability to capture value-dependent feature interactions. We propose MedFuse, a framework for irregular clinical time series centered on the MuFuse (Multiplicative Embedding Fusion) module. MuFuse fuses value and feature embeddings through multiplicative modulation, preserving feature-specific information while modeling higher-order dependencies across features. Experiments on three real-world datasets covering both intensive and chronic care show that MedFuse consistently outperforms state-of-the-art baselines on key predictive tasks. Analysis of the learned representations further demonstrates that multiplicative fusion enhances expressiveness and supports cross-dataset pretraining. These results establish MedFuse as a generalizable approach for modeling irregular clinical time series.
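The abstract does not spell out MuFuse's exact parametrization, so the following is a hypothetical sketch of multiplicative fusion: a scalar lab value is lifted to an embedding and modulates its feature-identity embedding elementwise rather than being added to it. All names and shapes here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mufuse_sketch(feature_ids, values, E_feat, W_val, b_val):
    """Hypothetical multiplicative fusion: each measurement value modulates
    its feature-identity embedding elementwise (Hadamard product), so the
    fused representation depends jointly on which feature it is and what
    value it took."""
    feat = E_feat[feature_ids]             # (n, d) identity embeddings
    val = values[:, None] * W_val + b_val  # (n, d) value embeddings
    return feat * val                      # multiplicative modulation

n_features, d = 10, 8
E_feat = rng.normal(size=(n_features, d))  # one embedding per lab feature
W_val = rng.normal(size=d)
b_val = np.ones(d)
ids = np.array([2, 5])          # e.g. creatinine, glucose (illustrative)
vals = np.array([0.7, 1.3])     # normalized measurement values
fused = mufuse_sketch(ids, vals, E_feat, W_val, b_val)  # (2, 8)
```

In an additive scheme (`feat + val`) the value shifts every feature embedding the same way; the elementwise product lets the value rescale each embedding dimension per feature, which is the value-dependent interaction the abstract highlights.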




ABBA-Adapters: Efficient and Expressive Fine-Tuning of Foundation Models

Singhal, Raghav, Ponkshe, Kaustubh, Vartak, Rohit, Vepakomma, Praneeth

arXiv.org Artificial Intelligence

Large Language Models have demonstrated strong performance across a wide range of tasks, but adapting them efficiently to new domains remains a key challenge. Parameter-Efficient Fine-Tuning (PEFT) methods address this by introducing lightweight, trainable modules while keeping most pre-trained weights fixed. The prevailing approach, LoRA, models updates using a low-rank decomposition, but its expressivity is inherently constrained by the rank. Recent methods like HiRA aim to increase expressivity by incorporating a Hadamard product with the frozen weights, but still rely on the structure of the pre-trained model. We introduce ABBA, a new PEFT architecture that reparameterizes the update as a Hadamard product of two independently learnable low-rank matrices. In contrast to prior work, ABBA fully decouples the update from the pre-trained weights, enabling both components to be optimized freely. This leads to significantly higher expressivity under the same parameter budget, a property we validate through matrix reconstruction experiments. Empirically, ABBA achieves state-of-the-art results on arithmetic and commonsense reasoning benchmarks, consistently outperforming existing PEFT methods by a significant margin across multiple models. Our code is publicly available at: https://github.com/CERT-Lab/abba.
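The ABBA update can be sketched directly from the description above: the weight update is the Hadamard product of two independently learnable low-rank factor pairs (the scaling convention here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def abba_update(B1, A1, B2, A2, scale=1.0):
    """ABBA-style update: elementwise product of two learnable low-rank
    factors. Unlike LoRA's B @ A (rank <= r), the Hadamard product of two
    rank-r matrices can have rank up to r * r, so expressivity grows
    quadratically in r for the same parameter budget."""
    return scale * ((B1 @ A1) * (B2 @ A2))

m, n, r = 16, 16, 2
B1, A1 = rng.normal(size=(m, r)), rng.normal(size=(r, n))
B2, A2 = rng.normal(size=(m, r)), rng.normal(size=(r, n))
dW = abba_update(B1, A1, B2, A2)
# Generically, rank(dW) exceeds r even though each factor pair has rank r
```

Because both factor pairs are trained from scratch, the update is fully decoupled from the frozen pre-trained weights, in contrast to HiRA's modulation of them.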


Blockwise Hadamard high-Rank Adaptation for Parameter-Efficient LLM Fine-Tuning

Yu, Feng, Hu, Jia, Min, Geyong

arXiv.org Artificial Intelligence

Parameter-efficient fine-tuning (PEFT) methods must be resource-efficient yet handle heterogeneous reasoning transformations, and classical low-rank adaptation (LoRA) is constrained by the nominal rank $r$. Hadamard-style extensions like HiRA raise the nominal rank but couple every update to the global energy pattern of the frozen weight matrix, while ABBA trades this inductive bias for fully learned dense intermediates. To address the limitation of global modulation, we propose Block Hadamard high-Rank Adaptation (BHRA), which partitions each weight matrix and applies HiRA-style multiplicative modulation independently within every block, preserving the PEFT parameter footprint while unlocking localized rank amplification. Our empirical analyses reveal that this blockwise design maintains rich spectra across rank budgets, mitigating the collapse induced by global modulation. Across eight commonsense reasoning tasks and two arithmetic benchmarks with Llama-3.2 1B/3B, Mistral-7B, and Gemma-2 9B, BHRA consistently surpasses strong PEFT baselines under matched parameter budgets.
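A minimal sketch of the blockwise idea, assuming square tiles and one low-rank factor pair per tile (the tile layout, shapes, and scaling are illustrative, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def bhra_update(W0, Bs, As, block):
    """Sketch of blockwise Hadamard adaptation: the frozen weight W0 is
    partitioned into (block x block) tiles, and each tile is modulated by
    its own low-rank pair, W0_tile * (B_tile @ A_tile). Modulation is
    therefore local to each tile instead of tied to W0's global pattern."""
    m, n = W0.shape
    dW = np.zeros_like(W0)
    for bi, i in enumerate(range(0, m, block)):
        for bj, j in enumerate(range(0, n, block)):
            tile = W0[i:i + block, j:j + block]
            dW[i:i + block, j:j + block] = tile * (Bs[bi][bj] @ As[bi][bj])
    return dW

m = n = 8
block, r = 4, 1
W0 = rng.normal(size=(m, n))  # stands in for a frozen pre-trained weight
Bs = [[rng.normal(size=(block, r)) for _ in range(n // block)]
      for _ in range(m // block)]
As = [[rng.normal(size=(r, block)) for _ in range(n // block)]
      for _ in range(m // block)]
dW = bhra_update(W0, Bs, As, block)
```

Each tile can amplify or suppress its own region of the weight matrix independently, which is the "localized rank amplification" the abstract contrasts with HiRA's single global modulation.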


PRNU-Bench: A Novel Benchmark and Model for PRNU-Based Camera Identification

Croitoru, Florinel Alin, Hondru, Vlad, Ionescu, Radu Tudor

arXiv.org Artificial Intelligence

We propose a novel benchmark for camera identification via Photo Response Non-Uniformity (PRNU) estimation. The benchmark comprises 13K photos taken with 120+ cameras, where the training and test photos are taken in different scenarios, enabling ``in-the-wild'' evaluation. In addition, we propose a novel PRNU-based camera identification model that employs a hybrid architecture, comprising a denoising autoencoder to estimate the PRNU signal and a convolutional network that can perform 1:N verification of camera devices. Instead of using a conventional approach based on contrastive learning, our method takes the Hadamard product between reference and query PRNU signals as input. This novel design leads to significantly better results compared with state-of-the-art models based on denoising autoencoders and contrastive learning. We release our dataset and code at: https://github.com/CroitoruAlin/PRNU-Bench.
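A small sketch of the input construction described above: instead of contrasting learned embeddings, the reference fingerprint and the query noise residual are combined elementwise before being passed to the verifier. The synthetic signals below are illustrative, not PRNU estimates from real photos:

```python
import numpy as np

def prnu_match_input(reference_prnu, query_residual):
    """Elementwise product of a reference camera fingerprint and a query
    noise residual. Where the two signals agree in sign, the product is
    positive, so a same-camera pair yields a map with a clearly higher
    mean than a different-camera pair."""
    return reference_prnu * query_residual

rng = np.random.default_rng(0)
fingerprint = rng.normal(size=(64, 64))                    # reference PRNU
matching = fingerprint + 0.5 * rng.normal(size=(64, 64))   # same camera
other = rng.normal(size=(64, 64))                          # different camera
pos = prnu_match_input(fingerprint, matching)
neg = prnu_match_input(fingerprint, other)
# pos.mean() concentrates near the fingerprint's energy; neg.mean() near 0
```

The convolutional verifier then only has to decide whether such a product map carries a consistent positive component, rather than learning a similarity metric from scratch.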