Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness: Supplementary Material Long Zhao 1 Ting Liu 2 Xi Peng 3
Neural Information Processing Systems
To bound the deviation of the entropy estimates, we use McDiarmid's inequality [13], in a manner similar to [1]. For this, we must bound the change in value of each of the entropy estimations when a single instance in $S$ is arbitrarily changed. A useful and easily proven inequality in that regard is the following: for any natural $m$, any $a \in [0, 1 - 1/m]$, and any $\Delta \le 1/m$,
$$|(a + \Delta)\log(a + \Delta) - a\log(a)| \le \frac{\log(m)}{m}. \tag{1}$$
With this inequality, a careful application of McDiarmid's inequality leads to the following lemma.

Lemma. For any $\delta \in (0, 1)$, with probability of at least $1 - \delta$ over the sample set, we have that
$$\left|\hat{H}(T) - \mathbb{E}[\hat{H}(T)]\right| \le |T|\log(m)\sqrt{\frac{\log(2/\delta)}{2m}}.$$

Proof. First, we bound the change caused by a single replacement in $\hat{H}(T)$.
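As a sanity check on the derivation above, the following sketch numerically verifies inequality (1) on a grid and evaluates the lemma's deviation bound. The function names, grid resolution, and tolerance are illustrative assumptions, not part of the original proof.

```python
import numpy as np

def plogp(x):
    # x * log(x) with the convention 0 * log(0) = 0.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    mask = x > 0
    out[mask] = x[mask] * np.log(x[mask])
    return out

def check_inequality(m, grid=400):
    # Grid check of |(a + D) log(a + D) - a log(a)| <= log(m) / m
    # over a in [0, 1 - 1/m] and D in [0, 1/m] (illustrative resolution).
    a = np.linspace(0.0, 1.0 - 1.0 / m, grid)
    d = np.linspace(0.0, 1.0 / m, grid)
    A, D = np.meshgrid(a, d)
    lhs = np.abs(plogp(A + D) - plogp(A))
    return float(lhs.max()) <= np.log(m) / m + 1e-9

def deviation_bound(t_size, m, delta):
    # |H_hat(T) - E[H_hat(T)]| <= |T| log(m) sqrt(log(2/delta) / (2m)),
    # i.e. McDiarmid's bound with per-instance change at most |T| log(m)/m.
    return t_size * np.log(m) * np.sqrt(np.log(2.0 / delta) / (2.0 * m))
```

For instance, with $|T| = 4$, $m = 10^4$, and $\delta = 0.05$, the bound evaluates to roughly $0.5$, and it shrinks as $\log(m)/\sqrt{m}$ with growing sample size.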