Exploration and Coverage with Swarms of Settling Agents

Rappel, Ori, Ben-Asher, Joseph, Bruckstein, Alfred

arXiv.org Artificial Intelligence

We consider several algorithms for exploring and filling an unknown, connected region by simple, airborne agents. The agents are assumed to be identical, autonomous, anonymous, and to have a finite amount of memory. The region is modeled as a connected subset of a regular grid composed of square cells. The algorithms described herein are suited to micro air vehicles (MAVs), since these vehicles have unobstructed views of the ground below and can move freely in space at various heights. The agents explore the region by applying various action-rules based on locally acquired information. Some of them may settle in unoccupied cells as the exploration progresses. Settled agents become virtual pheromones for the exploration and coverage process: beacons that subsequently aid the remaining, still-exploring mobile agents. We introduce a backward-propagating information diffusion process as a way to implement a deterministic indicator of process termination and to guide the mobile agents. For the proposed algorithms, complete coverage of the region in finite time is guaranteed when the size of the region is fixed. Bounds on the coverage times are also derived. Extensive simulation results exhibit good agreement with the theoretical predictions.
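To make the settling mechanism concrete, here is a minimal, self-contained simulation sketch in Python. It assumes only what the abstract states: mobile agents roam a connected grid region, and an agent that finds itself on an uncovered cell settles there and becomes a static beacon marking that cell as covered. The movement rule (prefer a random uncovered neighbor), the common start cell, and all parameters below are placeholder assumptions for illustration; the paper's actual action-rules and its backward-propagating termination indicator are not reproduced.

```python
import random

def simulate(region, start, n_agents, max_steps=10_000):
    """Settling-agents coverage sketch.

    region: set of (x, y) cells forming a connected area; start must be in region.
    Agents that stand on an uncovered cell settle there (becoming beacons);
    the rest move to a random uncovered neighbor if one exists, else to a
    random covered neighbor. All agents enter at `start` (a demo assumption).
    """
    settled = set()                      # cells occupied by settled agents
    mobile = [start] * n_agents          # positions of still-mobile agents
    for step in range(max_steps):
        next_mobile = []
        for pos in mobile:
            if pos not in settled:       # settle: become a beacon here
                settled.add(pos)
                continue
            x, y = pos
            nbrs = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (x + dx, y + dy) in region]
            free = [c for c in nbrs if c not in settled]
            next_mobile.append(random.choice(free or nbrs))
        mobile = next_mobile
        if settled == region:            # full coverage reached
            return step + 1
    return None                          # coverage not reached within max_steps

if __name__ == "__main__":
    region = {(x, y) for x in range(8) for y in range(8)}   # 8x8 square region
    steps = simulate(region, start=(0, 0), n_agents=len(region))
    print(f"covered {len(region)} cells in {steps} steps")
```

With one agent per cell, every cell eventually receives a settler, which loosely mirrors the finite-time coverage guarantee the abstract claims for a fixed-size region; the paper's deterministic rules and termination signal replace the randomness used here.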


Scaling provable adversarial defenses

Wong, Eric, Schmidt, Frank, Metzen, Jan Hendrik, Kolter, J. Zico

Neural Information Processing Systems

Recent work has developed methods for learning deep network classifiers that are \emph{provably} robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks. In this paper, in an effort to scale these approaches to substantially larger models, we extend previous work in three main directions. First, we present a technique for extending these training procedures to much more general networks, with skip connections (such as ResNets) and general nonlinearities; the approach is fully modular, and can be implemented automatically (analogously to automatic differentiation). Second, in the specific case of $\ell_\infty$ adversarial perturbations and networks with ReLU nonlinearities, we adopt a nonlinear random projection for training, which scales \emph{linearly} in the number of hidden units (previous approaches scaled quadratically). Third, we show how to further improve robust error through cascade models. On both MNIST and CIFAR data sets, we train classifiers that improve substantially on the state of the art in provable robust adversarial error bounds: from 5.8% to 3.1% on MNIST (with $\ell_\infty$ perturbations of $\epsilon=0.1$), and from 80% to 36.4% on CIFAR (with $\ell_\infty$ perturbations of $\epsilon=2/255$).


Scaling provable adversarial defenses

Wong, Eric, Schmidt, Frank, Metzen, Jan Hendrik, Kolter, J. Zico

arXiv.org Machine Learning

Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks. In this paper, in an effort to scale these approaches to substantially larger models, we extend previous work in three main directions. First, we present a technique for extending these training procedures to much more general networks, with skip connections (such as ResNets) and general nonlinearities; the approach is fully modular, and can be implemented automatically (analogous to automatic differentiation). Second, in the specific case of $\ell_\infty$ adversarial perturbations and networks with ReLU nonlinearities, we adopt a nonlinear random projection for training, which scales linearly in the number of hidden units (previous approaches scaled quadratically). Third, we show how to further improve robust error through cascade models. On both MNIST and CIFAR data sets, we train classifiers that improve substantially on the state of the art in provable robust adversarial error bounds: from 5.8% to 3.1% on MNIST (with $\ell_\infty$ perturbations of $\epsilon=0.1$), and from 80% to 36.4% on CIFAR (with $\ell_\infty$ perturbations of $\epsilon=2/255$). Code for all experiments in the paper is available at https://github.com/locuslab/convex_adversarial/.
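The "nonlinear random projection" in the second contribution can be illustrated by a classical estimator of the kind the paper builds on: if $R$ is a $k \times n$ matrix with i.i.d. standard Cauchy entries, each component of $Rx$ is Cauchy-distributed with scale $\|x\|_1$, so the median of $|Rx|$ is a consistent estimate of $\|x\|_1$ from only $k$ projections, avoiding any quadratic-size computation. The NumPy sketch below demonstrates this estimator in isolation, not the paper's full training procedure; the projection count k=100 is an arbitrary illustrative choice.

```python
import numpy as np

def l1_estimate(x, k=100, seed=0):
    """Estimate ||x||_1 via k random Cauchy projections.

    For R with i.i.d. standard Cauchy entries, each entry of R @ x is
    Cauchy(0, ||x||_1), whose absolute value has median ||x||_1, so the
    sample median of |R @ x| estimates the l1 norm in O(k * n) work.
    """
    rng = np.random.default_rng(seed)
    R = rng.standard_cauchy(size=(k, x.shape[0]))   # k random Cauchy projections
    return np.median(np.abs(R @ x))                 # median-of-|projections| estimator

if __name__ == "__main__":
    x = np.random.default_rng(1).normal(size=1000)
    print(f"exact ||x||_1 = {np.abs(x).sum():.1f}, "
          f"estimate = {l1_estimate(x):.1f}")
```

The estimate is noisy at small k but consistent; increasing k tightens it at linear cost, which is what makes the estimator viable as a training-time surrogate for exact norm computations.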