Supplementary Material for Adversarial Robustness with Non-uniform Perturbations

Neural Information Processing Systems

Consider the 2D toy example of binary classification in Figure A.1, which is obtained by modifying [1]. Both relationships are intuitive, and both would be broken by applying uniform perturbations. Activation Bounds: The dual objective function provides a bound on any linear function c^T ẑ_k. Therefore, we can compute the dual objective for c = I and c = −I to obtain lower and upper bounds. Byte addition is stopped when the prediction score gets lower than a threshold value or the file size exceeds 5 MB. This attack is applied to 2000 binaries from the EMBER malicious test set for constants "169" and "0", and we call these adversarial example sets C1 Pad. and C2 Pad., respectively.
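The byte-padding attack described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `score_fn` is a hypothetical stand-in for the malware classifier's prediction score, and the chunk size is an assumed choice.

```python
def byte_padding_attack(binary, score_fn, pad_byte, threshold,
                        max_size=5 * 2**20, chunk=1024):
    """Append a constant byte to the binary until the prediction score
    drops below `threshold` (evasion) or the file reaches the 5 MB cap.

    `score_fn` is a hypothetical callable mapping bytes -> float score;
    pad_byte would be 169 for C1 Pad. or 0 for C2 Pad. in the text above.
    """
    padded = bytearray(binary)
    while score_fn(bytes(padded)) >= threshold and len(padded) < max_size:
        # Pad in chunks, never overshooting the size cap.
        padded.extend([pad_byte] * min(chunk, max_size - len(padded)))
    return bytes(padded)
```

Because the score is re-evaluated after every chunk, the loop terminates at the first point where either stopping condition from the text holds.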



Neural Information Processing Systems

As long as the sample solutions are of high quality, they are sufficient to guide the model to discriminate between high- and low-quality solutions, which is evidenced by our experiments where the sample solutions are approximations.


Graph Learning Assisted Multi-objective Integer Programming (Appendix) A.1 Search region update

Neural Information Processing Systems

On the other hand, the reference set could also be a (good) approximate Pareto front for assessment. In this paper, we use the exact Pareto front for MOKP(3-100) to compute IGD, since these instances are easy to solve optimally.
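IGD (Inverted Generational Distance) as used above can be computed with a short sketch like the following; the function names are illustrative, but the metric itself is standard: the mean distance from each reference-front point to its nearest point in the obtained solution set.

```python
import math

def igd(reference_front, obtained_set):
    """Inverted Generational Distance.

    For every point of the reference (here, exact) Pareto front, take the
    Euclidean distance to its nearest obtained point, then average.
    Lower is better; 0 means the reference front is fully covered.
    """
    return sum(
        min(math.dist(r, s) for s in obtained_set)
        for r in reference_front
    ) / len(reference_front)
```

Using the exact Pareto front as `reference_front`, as the text does for MOKP(3-100), makes the resulting IGD values directly comparable across methods.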




SloMo-Fast: Slow-Momentum and Fast-Adaptive Teachers for Source-Free Continual Test-Time Adaptation

Iftee, Md Akil Raihan, Hossain, Mir Sazzat, Rajib, Rakibul Hasan, Iqbal, Tariq, Islam, Md Mofijul, Amin, M Ashraful, Ali, Amin Ahsan, Rahman, AKM Mahbubur

arXiv.org Artificial Intelligence

Continual Test-Time Adaptation (CTTA) is crucial for deploying models in real-world applications with unseen, evolving target domains. Existing CTTA methods, however, often rely on source data or prototypes, limiting their applicability in privacy-sensitive and resource-constrained settings. Additionally, these methods suffer from long-term forgetting, which degrades performance on previously encountered domains as target domains shift. To address these challenges, we propose SloMo-Fast, a source-free, dual-teacher CTTA framework designed for enhanced adaptability and generalization. It includes two complementary teachers: the Slow-Teacher, which exhibits slow forgetting and retains long-term knowledge of previously encountered domains to ensure robust generalization, and the Fast-Teacher, which rapidly adapts to new domains while accumulating and integrating knowledge across them. This framework preserves knowledge of past domains and adapts efficiently to new ones. We also introduce Cyclic Test-Time Adaptation (Cyclic-TTA), a novel CTTA benchmark that simulates recurring domain shifts. Our extensive experiments demonstrate that SloMo-Fast consistently outperforms state-of-the-art methods across Cyclic-TTA, as well as ten other CTTA settings, highlighting its ability to both adapt and generalize across evolving and revisited domains.
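The abstract does not specify how the two teachers are updated; a common mechanism for slow/fast teachers in test-time adaptation is an exponential moving average (EMA) of the student's weights, so the sketch below is an assumption, not the paper's method. The momentum values and flat-list parameters are purely illustrative.

```python
def ema_update(teacher, student, momentum):
    """In-place EMA update of teacher parameters toward the student's.

    Parameters are modeled as flat lists of floats for illustration;
    a real implementation would iterate over network weight tensors.
    """
    for i, (t, s) in enumerate(zip(teacher, student)):
        teacher[i] = momentum * t + (1.0 - momentum) * s

# Dual-teacher sketch: a high momentum gives slow forgetting
# (long-term knowledge); a lower momentum tracks the adapting
# student closely (fast adaptation to the current domain).
slow_teacher = [0.0, 0.0]
fast_teacher = [0.0, 0.0]
student = [1.0, 1.0]

ema_update(slow_teacher, student, momentum=0.999)  # barely moves
ema_update(fast_teacher, student, momentum=0.9)    # moves quickly
```

After one update the fast teacher has moved much further toward the student than the slow teacher, mirroring the rapid-adaptation vs. long-term-retention split described in the abstract.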