Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers
Guang-He Lee, Yang Yuan
Neural Information Processing Systems
Many powerful classifiers lack robustness in the sense that a slight, potentially unnoticeable manipulation of the input features, e.g., by an adversary, can cause the classifier to change its prediction.