Adversarially Robust Multi-task Representation Learning
Neural Information Processing Systems
We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments. Additionally, we provide novel rates for the single-task setting.
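To make the setting concrete, the following is a minimal numpy sketch, not the paper's algorithm: source tasks share a linear representation `B` with per-task linear heads `W`, the representation is trained with a one-step sign perturbation (`fgsm`) standing in for the inference-time adversary, and all dimensions (`d`, `k`, `T`, `n`), the radius `eps`, the step size `lr`, and the squared loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: d = input dim, k = representation dim,
# T = number of source tasks, n = samples per task.
d, k, T, n, eps, lr = 20, 3, 5, 100, 0.1, 1e-3

# Ground-truth shared representation and per-task linear heads.
B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]
W_star = rng.normal(size=(T, k))
X = rng.normal(size=(T, n, d))
Y = np.einsum('tnd,dk,tk->tn', X, B_star, W_star)

# Learned parameters: shared representation B, one linear head per task.
B = 0.1 * rng.normal(size=(d, k))
W = 0.1 * rng.normal(size=(T, k))

def clean_mse(B, W):
    # Mean squared error on the unperturbed source data.
    pred = np.einsum('tnd,dk,tk->tn', X, B, W)
    return float(np.mean((pred - Y) ** 2))

def fgsm(x, y, B, w, eps):
    # One-step sign perturbation of the inputs (an l_inf attack used here
    # as a simple stand-in for the worst-case inference-time adversary).
    resid = x @ B @ w - y                       # (n,) prediction errors
    grad_x = resid[:, None] * (B @ w)[None, :]  # d/dx of 0.5 * resid^2
    return x + eps * np.sign(grad_x)

mse_before = clean_mse(B, W)

# Adversarial training of the shared representation across all source tasks.
for _ in range(200):
    for t in range(T):
        x_adv = fgsm(X[t], Y[t], B, W[t], eps)
        resid = x_adv @ B @ W[t] - Y[t]
        # Gradient steps on the robust squared loss for B and the head W[t].
        B -= lr * (x_adv.T @ (resid[:, None] * W[t][None, :])) / n
        W[t] -= lr * (x_adv @ B).T @ resid / n

mse_after = clean_mse(B, W)
```

In the transfer step the paper studies, `B` would then be frozen and only a fresh linear head fit on the scarce target-task data; the sketch above covers only the source-task representation-learning phase.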