
On the Hardness of Robust Classification

Neural Information Processing Systems

It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. In this paper we study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity. In particular, our definition of robust learnability requires polynomial sample complexity. We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We show moreover that the class of monotone conjunctions cannot be robustly learned under the uniform distribution against an adversary who can perturb $\omega(\log n)$ input bits. However if the adversary is restricted to perturbing $O(\log n)$ bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution). Finally, we provide a simple proof of the computational hardness of robust learning on the boolean hypercube. Unlike previous results of this nature, our result does not rely on another computational model (e.g. the statistical query model) nor on any hardness assumption other than the existence of a hard learning problem in the PAC framework.
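As a small illustration of the adversarial model in this abstract (helper names are hypothetical, not from the paper): a hypothesis h is robustly correct at an input x if it agrees with the target concept c on every point within Hamming distance rho of x, i.e. on every input the adversary can reach by flipping at most rho bits. A brute-force check of this condition for monotone conjunctions might look like:

```python
from itertools import combinations

def hamming_ball(x, rho):
    # Yield every bitstring within Hamming distance rho of x.
    n = len(x)
    for r in range(rho + 1):
        for idx in combinations(range(n), r):
            z = list(x)
            for i in idx:
                z[i] ^= 1
            yield tuple(z)

def monotone_conjunction(relevant):
    # c(x) = AND of the bits at the positions in `relevant`.
    return lambda x: int(all(x[i] for i in relevant))

def robustly_correct(h, c, x, rho):
    # h is robustly correct at x iff it agrees with c on the whole ball.
    return all(h(z) == c(z) for z in hamming_ball(x, rho))

c = monotone_conjunction([0, 1])   # target: x0 AND x1
h = monotone_conjunction([0])      # hypothesis: x0 only
x = (1, 1, 0)
# h(x) = c(x) = 1, yet flipping bit 1 gives c = 0 while h stays 1,
# so a single-bit adversary already defeats h at x.
```

The exponential size of the Hamming ball for large rho is exactly why such exhaustive checks are infeasible against an adversary who may perturb ω(log n) bits.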




Meta to announce $15bn investment in bid to achieve computerised 'superintelligence'

The Guardian

Meta is to announce a $15bn (£11bn) bid to achieve computerised "superintelligence", according to multiple reports. The Silicon Valley race to dominate artificial intelligence is speeding up despite the patchy performance of many existing AI systems. Mark Zuckerberg, Meta's chief executive, is expected to announce the company will buy a 49% stake in Scale AI, a startup led by Alexandr Wang and co-founded by Lucy Guo, in a move described by one Silicon Valley analyst as the action of "a wartime CEO". Superintelligence is described as a type of AI that can perform better than humans at all tasks. Currently AI cannot reach the same level as humans in all tasks, a state known as artificial general intelligence (AGI).


Logarithmic Width Suffices for Robust Memorization

Egosi, Amitsour, Yehudai, Gilad, Shamir, Ohad

arXiv.org Machine Learning

The ability of neural networks to memorize labeled datasets is a central question in the study of their expressive power. Given some input domain X, output domain Y, and dataset size N, we say that a network memorizes datasets of size N if for every labeled dataset D ⊆ X × Y, where |D| = N, we can find parameters such that the resulting network f: X → Y perfectly fits the dataset (that is, f(x) = y for every labeled pair (x, y) ∈ D). The main question here - which has been studied in many recent works (see Section 2 for details) - is to characterize the size/architecture of the networks that have enough expressive power to memorize any dataset of a given size N. However, merely fitting a given dataset is not enough for most tasks, and a desirable property for trained networks is that they remain robust to noise and minor modifications in the dataset. This robustness property allows neural networks to generalize from observed data points to unseen data points. Furthermore, neural networks have been shown to be vulnerable to adversarial attacks [Szegedy et al., 2013, Carlini and Wagner, 2017, Papernot et al., 2017, Athalye et al., 2018] in the form of slightly perturbed examples, where (in the context of visual data) the perturbation is often imperceptible to the human eye. Moreover, existing constructions of memorizing networks are often quite delicate, and not at all robust to such perturbations. This motivates the question of characterizing the networks that have enough capacity to robustly memorize a dataset.
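The memorization condition above, and the robust variant this abstract motivates, can be sketched directly as predicates (a toy illustration with hypothetical names, not the paper's construction; the robust check below only samples the corners of an L∞ ball, so it is a sketch rather than a certificate):

```python
from itertools import product

def memorizes(f, dataset):
    # f memorizes D iff f(x) = y for every labeled pair (x, y) in D.
    return all(f(x) == y for x, y in dataset)

def robustly_memorizes(f, dataset, mu):
    # Robust variant: f must still output y on points perturbed by up
    # to mu in each coordinate -- checked here on ball corners only.
    def corners(x):
        return product(*((xi - mu, xi + mu) for xi in x))
    return all(all(f(z) == y for z in corners(x)) for x, y in dataset)

# A 1-D threshold predictor memorizing two points (hypothetical example):
f = lambda x: 0 if x[0] < 0.5 else 1
D = [((0.0,), 0), ((1.0,), 1)]
```

With this f and D, memorization holds for any mu strictly below half the gap between the two points (0.5), and fails once the perturbation budget lets the balls cross the threshold, which is the delicacy the abstract attributes to existing memorizing constructions.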


On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems

Neural Information Processing Systems

Reinforcement learning (RL) algorithms can fail to generalize due to the gap between the simulation and the real world. One standard remedy is to use robust adversarial RL (RARL) that accounts for this gap during the policy training, by modeling the gap as an adversary against the training agent. We first observe that the popular RARL scheme that greedily alternates agents' updates can easily destabilize the system. Motivated by this, we propose several other policy-based RARL algorithms whose convergence behaviors are then studied both empirically and theoretically. We find: i) the conventional RARL framework (Pinto et al., 2017) can learn a destabilizing policy if the initial policy does not enjoy the robust stability property against the adversary; and ii) with robustly stabilizing initializations, our proposed double-loop RARL algorithm provably converges to the global optimal cost while maintaining robust stability on-the-fly.
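The failure mode of greedy alternation can be seen on a toy problem (a hypothetical illustration, not the paper's linear quadratic setup): on the bilinear zero-sum game min over x, max over y of x*y, greedy alternating gradient play merely orbits the saddle point (0, 0) instead of converging to it.

```python
def greedy_alternating_play(x, y, lr=0.1, steps=500):
    # Protagonist minimizes f(x, y) = x * y; adversary maximizes it.
    # Each agent greedily steps along its own gradient in turn.
    for _ in range(steps):
        x = x - lr * y    # gradient of x*y w.r.t. x is y
        y = y + lr * x    # adversary reacts to the already-updated x
    return x, y

x, y = greedy_alternating_play(1.0, 1.0)
# (x, y) stays bounded away from the equilibrium (0, 0): the iterates
# cycle rather than converge, a toy instance of naive alternation failing.
```

This is only meant to convey the flavor of the observation; the paper's analysis concerns policy updates in linear quadratic systems, where greedy alternation can go further and learn an outright destabilizing policy.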

