A.1 Full experimental results

In this section we provide the full experimental results that extend those demonstrated in Section 4.2. Table 8 presents the evaluation on 16 robustly trained CIFAR10 models from RobustBench [28] that was summarized in Table 2. We consider four configurations of the attack for each model. SA and AA correspond to the update size schedules proposed by Andriushchenko et al. [1] and Croce and Hein [2], respectively. "Uni" denotes sampling the color for the update uniformly.

A.2 Meta-training the Controllers

The meta-training of the controllers was described in Section 3 and Section 4.1.
Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time, and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework.
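A concrete instance of such an LP relaxation is the classic "triangle" relaxation of a ReLU: given pre-activation bounds l < 0 < u, the nonlinear constraint y = max(x, 0) is replaced by the linear constraints y >= 0, y >= x, and y <= u(x - l)/(u - l). The sketch below checks numerically that this upper bound is sound on a sample interval; the function name and the chosen bounds are illustrative, not taken from the paper.

```python
def relu_triangle_upper(x, l, u):
    """Upper envelope of the triangle LP relaxation of ReLU on [l, u], l < 0 < u."""
    return u * (x - l) / (u - l)

# Sanity check: the linear upper bound dominates the true ReLU on [l, u].
l, u = -1.0, 2.0
for x in (-1.0, 0.0, 1.0, 2.0):
    assert relu_triangle_upper(x, l, u) >= max(x, 0.0)
```

The relaxation is tight at the endpoints x = l and x = u, which is why bound propagation through such verifiers degrades gracefully as the pre-activation interval shrinks.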
Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation
Existing deep learning based solutions typically restore the target layers individually, or interact only at the end of the output, barely taking into account the interaction across the two streams/branches. In order to utilize information more efficiently, this work presents a general yet simple interactive strategy, namely your trash is my treasure (YTMT), for constructing dual-stream decomposition networks.
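The "your trash is my treasure" idea can be sketched as a feature exchange between the two streams: each stream keeps its activated (positive) features and receives the other stream's deactivated (negative) part, so that information discarded by one branch is recycled by the other. This is a minimal NumPy sketch under that assumption; the function name and the exact exchange rule are illustrative, not the authors' implementation.

```python
import numpy as np

def ytmt_exchange(f_a, f_b):
    """Sketch of a dual-stream exchange: each stream keeps its ReLU-activated
    features and absorbs the other stream's deactivated (negative) part."""
    keep_a, trash_a = np.maximum(f_a, 0.0), np.minimum(f_a, 0.0)
    keep_b, trash_b = np.maximum(f_b, 0.0), np.minimum(f_b, 0.0)
    # Negating the trash turns one stream's suppressed response into a
    # positive signal for the other stream ("your trash is my treasure").
    return keep_a - trash_b, keep_b - trash_a
```

Applying this exchange at several stages of a dual-branch decoder would let the transmission and reflection branches trade complementary evidence instead of competing for it.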
DISCO: Adversarial Defense with Local Implicit Functions
In this section, we ablate the kernel size used to train DISCO on ImageNet. Table I shows that s = 3 achieves the best performance, which degrades for s = 5 by a significant margin (3.26%). This is consistent with the well known complexity of synthesizing images with global models, such as GANs. For a single ImageNet image of size 224, STL requires 23.71 seconds while DISCO (K=1) only requires 0.027 seconds.

In this section, we list the URL links that are used for training and evaluating DISCO.