




Appendix A Code Base

Neural Information Processing Systems

We also define the clean reversed conditional transition as Eq. Thus, a(t) and b(t) can be derived as Eq. The KL-divergence loss of the reversed transition can be simplified as Eq. Thus, we can finally write down the clean loss function Eq. (9) with reparametrization. This section further extends the derivation of the clean diffusion models in Appendix B.1. Recall the definition of the backdoor reversed conditional transition in Eq. (10). We mark the coefficients of r in red.
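The derivation sketched above follows the standard denoising-diffusion pattern: simplify the KL between the forward posterior and the learned reverse transition, then reparametrize in terms of the injected noise. As an illustration only (the paper's exact Eq. (9) and its coefficients a(t), b(t) are not reproduced here), the familiar simplified objective under reparametrization has the form:

\[
\mathcal{L}_{\text{simple}}
= \mathbb{E}_{t,\,x_0,\,\epsilon}\!\left[
\left\|\,\epsilon - \epsilon_\theta\!\left(\sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\; t\right)\right\|^2
\right],
\]

where \(\bar\alpha_t\) is the cumulative noise schedule and \(\epsilon_\theta\) the noise-prediction network; the backdoor variant additionally carries terms in the trigger-related quantity r, whose coefficients the authors highlight in red.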







Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity

Neural Information Processing Systems

Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients, if not all, in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity for evaluating feature unlearning, based on Lipschitz continuity. This metric characterizes the rate of change, or sensitivity, of the model output to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features.
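The Lipschitz-style sensitivity metric described in the abstract can be sketched numerically: perturb only the target feature and average the ratio of output change to perturbation size. This is a minimal illustration, not the paper's implementation; the toy linear `model`, the function names, and the sampling scheme are all assumptions for demonstration.

```python
import random

def model(x, w):
    # Toy linear model f(x) = w . x, standing in for a trained FL model.
    return sum(wi * xi for wi, xi in zip(w, x))

def feature_sensitivity(f, x, feat_idx, eps=1e-3, n_samples=8):
    # Estimate the Lipschitz-style sensitivity of f to feature feat_idx:
    # average of |f(x + delta) - f(x)| / |delta|, perturbing only that feature.
    ratios = []
    for _ in range(n_samples):
        delta = random.uniform(-eps, eps)
        if delta == 0.0:
            continue  # skip degenerate perturbation
        x_pert = list(x)
        x_pert[feat_idx] += delta
        ratios.append(abs(f(x_pert) - f(x)) / abs(delta))
    return sum(ratios) / len(ratios)

# For a linear model, the sensitivity of feature i is |w_i|, so a
# sensitivity-minimizing unlearning objective would drive that weight to zero.
w = [2.0, -3.0]
sens = feature_sensitivity(lambda x: model(x, w), [1.0, 1.0], feat_idx=1)
```

In the actual framework, this scalar would serve as a loss term minimized over the client's local data so the model output becomes insensitive to the unlearned feature.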