
8cbe9ce23f42628c98f80fa0fac8b19a-Supplemental.pdf

Neural Information Processing Systems

After training for 200 epochs, we achieve an attack success rate (ASR) of 99.97% and a natural accuracy on clean data (ACC) of 93.73%. Blend attack [6]: We first generate a trigger pattern where each pixel value is sampled from a uniform distribution in [0, 255], as shown in Figure 6(c). Input-aware Attack (IAB) [30]: The dynamic trigger varies across samples, as shown in Figure 6(d). We apply two types of target label selection. Clean-label Attack (CLB) [42]: The trigger is a 3×3 checkerboard at the four corners of images, as shown in Figure 7(b).
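The Blend attack above mixes a uniformly random noise trigger into clean images. A minimal sketch of that step, assuming an illustrative 32×32×3 image size and a hypothetical blend ratio `alpha` (neither value is taken from the paper):

```python
import numpy as np

def make_blend_trigger(shape=(32, 32, 3), seed=0):
    # Each trigger pixel is drawn i.i.d. from a uniform distribution
    # over [0, 255], as described for the Blend attack.
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=shape).astype(np.float64)

def apply_blend(image, trigger, alpha=0.1):
    # Convex combination of clean image and trigger; alpha is an
    # illustrative blend ratio, not a value from the paper.
    return (1 - alpha) * image + alpha * trigger

clean = np.zeros((32, 32, 3))
poisoned = apply_blend(clean, make_blend_trigger())
```

Poisoned samples are then relabeled with the attacker's target class during training.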



Theoretical Analysis of Measure Consistency Regularization for Partially Observed Data

Wang, Yinsong, Shahrampour, Shahin

arXiv.org Machine Learning

The problem of corrupted data, missing features, or missing modalities continues to plague the modern machine learning landscape. To address this issue, a class of regularization methods that enforce consistency between imputed and fully observed data has emerged as a promising approach for improving model generalization, particularly in partially observed settings. We refer to this class of methods as Measure Consistency Regularization (MCR). Despite its empirical success in various applications, such as image inpainting, data imputation and semi-supervised learning, a fundamental understanding of the theoretical underpinnings of MCR remains limited. This paper bridges this gap by offering theoretical insights into why, when, and how MCR enhances imputation quality under partial observability, viewed through the lens of neural network distance. Our theoretical analysis identifies the term responsible for MCR's generalization advantage and extends to the imperfect training regime, demonstrating that this advantage is not always guaranteed. Guided by these insights, we propose a novel training protocol that monitors the duality gap to determine an early stopping point that preserves the generalization benefit. We then provide detailed empirical evidence to support our theoretical claims and to show the effectiveness and accuracy of our proposed stopping condition. We further provide a set of real-world data simulations to show the versatility of MCR under different model architectures designed for different data sources.
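As a rough illustration of the idea, a consistency regularizer of this kind can be sketched as a penalty on the discrepancy between model outputs on fully observed and imputed inputs. The squared-L2 distance and the weight `lam` below are illustrative stand-ins, not the paper's neural-network-distance formulation:

```python
import numpy as np

def mcr_consistency_penalty(f_full, f_imputed):
    # Penalize disagreement between the model's outputs on fully
    # observed inputs and on their imputed counterparts. Squared L2
    # is used purely for illustration; the paper analyzes MCR through
    # the lens of neural network distance.
    return float(np.mean((f_full - f_imputed) ** 2))

def total_loss(task_loss, f_full, f_imputed, lam=0.5):
    # lam is an illustrative regularization weight (an assumption).
    return task_loss + lam * mcr_consistency_penalty(f_full, f_imputed)
```

When the imputer matches the fully observed data perfectly, the penalty vanishes and training reduces to the plain task loss.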



Supplementary Material: Model Class Reliance for Random Forests

Neural Information Processing Systems

Replication is facilitated through the provision of four hosted Python notebooks which replicate the paper's results. When tested, the hosted runtimes were running Python 3.6.9. The packages developed as part of this work are discussed below and made available via the above notebooks. The code is written as an extension to the sklearn RandomForestRegressor and RandomForestClassifier classes. The wrapper calls the R code from the lead author's GitHub. If running the notebooks on a hosted instance, the packages will be installed automatically.


Efficient Hyperdimensional Computing with Modular Composite Representations

Angioli, Marco, Kymn, Christopher J., Rosato, Antonello, Loutfi, Amy, Olivieri, Mauro, Kleyko, Denis

arXiv.org Artificial Intelligence

Abstract--The modular composite representation (MCR) is a computing model that represents information with high-dimensional integer vectors using modular arithmetic. Originally proposed as a generalization of the binary spatter code model, it aims to provide higher representational power while remaining a lighter alternative to models requiring high-precision components. However, despite this potential, MCR has received limited attention in the literature. Systematic analyses of its trade-offs and comparisons with other models, such as binary spatter codes, multiply-add-permute, and Fourier holographic reduced representation, are lacking, sustaining the perception that its added complexity outweighs the improved expressivity over simpler models. In this work, we revisit MCR by presenting its first extensive evaluation, demonstrating that it achieves a unique balance of information capacity, classification accuracy, and hardware efficiency. Experiments measuring information capacity demonstrate that MCR outperforms binary and integer vectors while approaching complex-valued representations at a fraction of their memory footprint. Evaluation on a collection of 123 classification datasets confirms consistent accuracy gains and shows that MCR can match the performance of binary spatter codes using up to 4.0× less memory. We investigate the hardware realization of MCR by showing that it maps naturally to digital logic and by designing the first dedicated accelerator for it. Evaluations on basic operations and seven selected datasets demonstrate a speedup of up to three orders of magnitude and significant energy reductions compared to a software implementation. Furthermore, when matched for accuracy against binary spatter codes, MCR achieves on average 3.08× faster execution and 2.68× lower energy consumption.
The work of CJK was supported by the Center for the Co-Design of Cognitive Systems (CoCoSys), one of seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, in addition to the NDSEG Fellowship, Fernström Fellowship, Swartz Foundation, and NSF Grants 2147640 and 2313149. The work of AL and DK was supported by the Knut and Alice Wallenberg Foundation under the Wallenberg Scholars program (Grant No. KAW 2023.0327).
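For intuition, the core MCR operations can be sketched as follows. The dimensionality `D` and modulus `M` are illustrative choices, not values from the paper; with `M = 2`, modular addition reduces to XOR, recovering the binary spatter code model that MCR generalizes:

```python
import numpy as np

D, M = 1024, 16  # dimensionality and modulus (illustrative values)
rng = np.random.default_rng(0)

def random_mcr():
    # A random MCR hypervector: D integer components in [0, M).
    return rng.integers(0, M, size=D)

def bind(a, b):
    # Binding in MCR is elementwise addition modulo M.
    return (a + b) % M

def unbind(c, a):
    # Unbinding is elementwise subtraction modulo M, the exact
    # inverse of bind.
    return (c - a) % M

x, y = random_mcr(), random_mcr()
assert np.array_equal(unbind(bind(x, y), x), y)
```

Because bind/unbind are exact inverses under modular arithmetic, composite structures can be queried losslessly, while the integer components keep storage and logic cheap compared to complex-valued models.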


To Reviewer 1

Neural Information Processing Systems

We thank the reviewers for the helpful comments and feedback. Our responses are detailed below. We will make the suggested edits for clarity. The improved interpretability with little loss of accuracy makes the sparse TBM appealing in applications. We agree with the reviewer that MSE is not the best metric for clustering.




Aleatoric and Epistemic Uncertainty Measures for Ordinal Classification through Binary Reduction

Haas, Stefan, Hüllermeier, Eyke

arXiv.org Artificial Intelligence

Ordinal classification problems, where labels exhibit a natural order, are prevalent in high-stakes fields such as medicine and finance. Accurate uncertainty quantification, including the decomposition into aleatoric (inherent variability) and epistemic (lack of knowledge) components, is crucial for reliable decision-making. However, existing research has primarily focused on nominal classification and regression. In this paper, we introduce a novel class of measures of aleatoric and epistemic uncertainty in ordinal classification, which is based on a suitable reduction to (entropy- and variance-based) measures for the binary case. These measures effectively capture the trade-off in ordinal classification between exact hit-rate and minimal error distances. We demonstrate the effectiveness of our approach on various tabular ordinal benchmark datasets using ensembles of gradient-boosted trees and multi-layer perceptrons for approximate Bayesian inference. Our method significantly outperforms standard and label-wise entropy and variance-based measures in error detection, as indicated by misclassification rates and mean absolute error. Additionally, the ordinal measures show competitive performance in out-of-distribution (OOD) detection. Our findings highlight the importance of considering the ordinal nature of classification problems when assessing uncertainty.
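For intuition, the binary reduction can be sketched as follows: an ordinal problem with K classes is decomposed into the K−1 threshold events P(Y > k), and a standard entropy-based total/aleatoric/epistemic decomposition is applied to each. Aggregating the per-threshold terms by summation is an assumption for illustration, not necessarily the paper's exact measure:

```python
import numpy as np

def binary_entropy(p):
    # Shannon entropy (in bits) of a Bernoulli probability, clipped
    # for numerical stability.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def ordinal_uncertainty(ensemble_probs):
    # ensemble_probs: (n_members, n_classes) class distributions for
    # one instance, e.g. from an ensemble of gradient-boosted trees
    # or MLPs used for approximate Bayesian inference.
    # Reduce to the K-1 binary threshold events P(Y > k).
    p_gt = np.cumsum(ensemble_probs[:, ::-1], axis=1)[:, ::-1][:, 1:]
    # Total: entropy of the ensemble-averaged threshold probabilities.
    total = binary_entropy(p_gt.mean(axis=0)).sum()
    # Aleatoric: average of per-member entropies.
    aleatoric = binary_entropy(p_gt).mean(axis=0).sum()
    # Epistemic: the gap (mutual information between prediction
    # and ensemble member).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

When all ensemble members agree, the epistemic term vanishes and only aleatoric uncertainty remains, matching the intended decomposition.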