
" Lossless" CompressionofDeepNeuralNetworks: AHigh-dimensionalNeuralTangentKernelApproach

Neural Information Processing Systems

In this paper, building upon recent advances in neural tangent kernel (NTK) and random matrix theory (RMT), we provide a novel compression approach to wide and fully-connected deep neural nets.


Fairness via Representation Neutralization

Neural Information Processing Systems

Existing bias mitigation methods for DNN models primarily work on learning debiased encoders. This process not only requires many instance-level annotations for sensitive attributes, but also does not guarantee that all fairness-sensitive information has been removed from the encoder. To address these limitations, we explore the following research question: Can we reduce the discrimination of DNN models by debiasing only the classification head, even with biased representations as inputs? To this end, we propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF), that achieves fairness by debiasing only the task-specific classification head of DNN models. Specifically, we leverage samples with the same ground-truth label but different sensitive attributes, and use their neutralized representations to train the classification head of the DNN model. The key idea of RNF is to discourage the classification head from capturing spurious correlations between fairness-sensitive information in encoder representations and specific class labels. To address low-resource settings with no access to sensitive attribute annotations, we leverage a bias-amplified model to generate proxy annotations for sensitive attributes. Experimental results on several benchmark datasets demonstrate that our RNF framework effectively reduces discrimination of DNN models with minimal degradation in task-specific performance.
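The core neutralization step described above can be sketched in a few lines: pair samples that share a ground-truth label but differ in the sensitive attribute, then average their encoder representations before training the classification head. This is a minimal illustrative sketch of that idea, not the paper's implementation; the function name and the simple pairwise averaging are assumptions.

```python
import numpy as np

def neutralize_representations(reps, labels, sensitive, rng=None):
    """Illustrative sketch of representation neutralization:
    for each class label, randomly pair samples from two different
    sensitive-attribute groups and average their representations."""
    rng = np.random.default_rng(rng)
    neutral_reps, neutral_labels = [], []
    for y in np.unique(labels):
        idx = np.where(labels == y)[0]
        # Bucket this class's samples by sensitive attribute value.
        groups = {}
        for i in idx:
            groups.setdefault(sensitive[i], []).append(i)
        if len(groups) < 2:
            continue  # cannot neutralize with only one group
        g0, g1 = list(groups.values())[:2]
        n = min(len(g0), len(g1))
        a = rng.choice(g0, n, replace=False)
        b = rng.choice(g1, n, replace=False)
        # Averaging blends the group-specific (fairness-sensitive)
        # information while keeping the label-relevant signal.
        neutral_reps.append((reps[a] + reps[b]) / 2.0)
        neutral_labels.extend([y] * n)
    return np.vstack(neutral_reps), np.array(neutral_labels)
```

A classification head trained on `(neutral_reps, neutral_labels)` then sees label-consistent features in which attribute-specific signal has been averaged out.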



Brains on Beats

Umut Güçlü, Jordy Thielen, Michael Hanke, Marcel A. J. van Gerven

Neural Information Processing Systems

We developed task-optimized deep neural networks (DNNs) that achieved state-of-the-art performance in different evaluation scenarios for automatic music tagging. These DNNs were subsequently used to probe the neural representations of music. Representational similarity analysis revealed the existence of a representational gradient across the superior temporal gyrus (STG).
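Representational similarity analysis, mentioned above, compares how two systems represent the same stimuli: build a representational dissimilarity matrix (RDM) of pairwise distances between stimulus representations for each system, then correlate the two RDMs. Below is a minimal NumPy sketch of that general procedure (correlation distance for the RDM, Pearson correlation between RDMs); the distance and correlation choices are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def rdm(features):
    """Condensed RDM: 1 - Pearson correlation between each pair of
    stimulus representations (rows of `features`)."""
    c = np.corrcoef(features)          # stimulus-by-stimulus correlations
    iu = np.triu_indices_from(c, k=1)  # upper triangle, excluding diagonal
    return 1.0 - c[iu]

def rsa_score(features_a, features_b):
    """Correlation between two condensed RDMs; higher means the two
    systems impose a more similar geometry on the same stimuli."""
    return np.corrcoef(rdm(features_a), rdm(features_b))[0, 1]
```

Comparing a DNN layer's features against voxel responses for the same stimuli with `rsa_score` yields one similarity value per layer-region pair, which is what lets a gradient of representations be mapped across cortex.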


Re-optimization of a deep neural network model for electron-carbon scattering using new experimental data

Beata E. Kowal, Krzysztof M. Graczyk, Artur M. Ankowski, Rwik Dharmapal Banerjee, Jose L. Bonilla, Hemant Prasad, Jan T. Sobczyk

arXiv.org Artificial Intelligence

We present an updated deep neural network model for inclusive electron-carbon scattering. Using the bootstrap model [Phys.Rev.C 110 (2024) 2, 025501] as a prior, we incorporate recent experimental data, as well as older measurements in the deep inelastic scattering region, to derive a re-optimized posterior model. We examine the impact of these new inputs on model predictions and associated uncertainties. Finally, we evaluate the resulting cross-section predictions in the kinematic range relevant to the Hyper-Kamiokande and DUNE experiments.