Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks

Zhaohan Xi

Neural Information Processing Systems 

In this work, we conduct a pilot study showing that PLMs as few-shot learners are highly vulnerable to backdoor attacks, while existing defenses are inadequate due to the unique challenges of few-shot scenarios.
