

Supplementary Material for Adversarial Robustness with Non-uniform Perturbations

Neural Information Processing Systems

Consider the 2D toy example of binary classification in Figure A.1, which is obtained by modifying [1]. Both relationships are intuitive, and both would be broken by applying uniform perturbations. Activation Bounds: The dual objective function provides a bound on any linear function c^T ẑ_k. Therefore, we can compute the dual objective for c = I and c = −I to obtain lower and upper bounds. Byte addition is stopped when the prediction score gets lower than a threshold value or the file size exceeds 5 MB. This attack is applied to 2000 binaries from the EMBER malicious test set with constant byte values 169 and 0, and we call the resulting adversarial example sets C1 Pad. and C2 Pad., respectively.
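The byte-padding attack described above can be sketched as a simple loop. This is a minimal illustration, not the authors' implementation: `score_fn` is a hypothetical stand-in for the malware classifier's prediction score, and the chunk size is an arbitrary choice.

```python
# Sketch of the padding attack: append a constant byte (e.g. 169 for C1 Pad.
# or 0 for C2 Pad.) until the prediction score drops below the threshold or
# the file reaches 5 MB. `score_fn` is a hypothetical classifier interface.
MAX_SIZE = 5 * 1024 * 1024  # 5 MB file-size budget

def pad_attack(binary: bytes, score_fn, pad_byte: int, threshold: float,
               chunk: int = 4096) -> bytes:
    adv = bytearray(binary)
    # Stop when the score falls below the threshold or the size cap is hit.
    while score_fn(bytes(adv)) >= threshold and len(adv) < MAX_SIZE:
        adv.extend(bytes([pad_byte]) * chunk)
    return bytes(adv)
```

Padding only appends bytes, so the original binary (and its functionality) is preserved as a prefix of the adversarial example.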


[Table fragment: DSPNHDPro Rand — 0.446 (2E-3), 0.656 (8E-3), 0.170 (1E-2), 0.011 (8E-3)]


As long as the sample solutions are of high quality, they are sufficient to guide the model to discriminate between high- and low-quality solutions, which is evidenced by our experiments where the sample solutions are approximations.


Graph Learning Assisted Multi-objective Integer Programming (Appendix)
A.1 Search region update


On the other hand, the reference set could also be a (good) approximated Pareto front for assessment. In this paper, we use the exact Pareto front for MOKP(3-100) to compute IGD, since these instances are easy to solve optimally.