SMS: Self-supervised Model Seeding for Verification of Machine Unlearning
Weiqi Wang, Chenhan Zhang, Zhiyi Tian, Shui Yu
arXiv.org Artificial Intelligence
Abstract: Many machine unlearning methods have been proposed recently to uphold users' right to be forgotten. However, offering users verification of their data removal post-unlearning is an important yet under-explored problem. Current verification approaches typically rely on backdooring, i.e., adding backdoored samples to influence model performance. However, backdoor methods can only establish a connection between backdoored samples and models; they fail to connect the backdoor with genuine samples. Thus, backdoor removal can only confirm the unlearning of backdoored samples, not of users' genuine samples, since genuine samples are independent of backdoored ones. In this paper, we propose a Self-supervised Model Seeding (SMS) scheme that provides unlearning verification for genuine samples. Unlike backdooring, SMS links user-specific seeds (such as users' unique indices), original samples, and models, thereby enabling verification that genuine samples have been unlearned. Implementing SMS for unlearning verification presents two significant challenges. First, embedding the seeds into the service model while keeping them secret from the server requires a sophisticated approach. We address this by employing a self-supervised model seeding task, which learns the entire sample, including the seed, into the model's latent space. Second, maintaining the utility of the original service model while ensuring the seeding effect requires a delicate balance. The effectiveness of the proposed SMS scheme is evaluated through extensive experiments on three representative datasets, using various model architectures and both exact and approximate unlearning benchmarks. The results demonstrate that SMS provides effective verification of genuine-sample unlearning, addressing the limitations of existing solutions.
In recent years, numerous privacy regulations and laws, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) [1], have been introduced to safeguard individuals' data privacy. These legislations guarantee individuals the right to be forgotten, prompting an active research topic, machine unlearning [2, 3, 4]. Machine unlearning aims to remove the trace of user-specified samples from already-trained models, ensuring compliance with these privacy mandates.
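To make the seeding-and-verification idea concrete, here is a minimal, hypothetical sketch of the workflow described above. It is not the paper's actual method: the seed construction, the linear least-squares "model", and the error-threshold check are all stand-ins for the paper's learned latent-space embedding. The sketch only illustrates the logic: a per-user seed derived from the user's unique index is attached to genuine samples during training, and the user later verifies unlearning by testing whether the model can still reconstruct their seed.

```python
import numpy as np

rng = np.random.default_rng(0)
SEED_DIM, FEAT_DIM = 4, 16

def seed_vector(user_id):
    # Deterministic per-user seed derived from the user's unique index
    # (hypothetical construction; the abstract does not fix this detail).
    return np.random.default_rng(user_id).standard_normal(SEED_DIM)

def seeded_input(x, user_id):
    # Attach the user seed to the genuine sample, linking seed, sample,
    # and model during training.
    return np.concatenate([x, seed_vector(user_id)])

def train(samples):
    # Self-supervised "seeding" task: a linear map W is fit to reconstruct
    # each embedded seed from its seeded input. Least squares stands in
    # for the paper's self-supervised latent-space seeding objective.
    X = np.stack([seeded_input(x, uid) for x, uid in samples])
    Y = np.stack([seed_vector(uid) for _, uid in samples])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def trace_present(W, x, user_id, tol=1e-6):
    # Verification: a small seed-reconstruction error means the model still
    # retains the link to this user's genuine sample; a large error after
    # unlearning indicates the trace has been removed.
    err = np.linalg.norm(seeded_input(x, user_id) @ W - seed_vector(user_id))
    return err < tol

# Ten users, one genuine sample each.
users = [(rng.standard_normal(FEAT_DIM), uid) for uid in range(10)]
W_full = train(users)           # model trained on all users
W_unlearned = train(users[1:])  # exact unlearning: retrain without user 0
x0, uid0 = users[0]
```

In this toy setting, `trace_present(W_full, x0, uid0)` holds before unlearning, while `trace_present(W_unlearned, x0, uid0)` fails after user 0's sample is removed, which is exactly the before/after signal the user would rely on for verification.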
Oct-1-2025