Review for NeurIPS paper: Privacy Amplification via Random Check-Ins


Weaknesses: There are three main concerns with the proposed scheme. 1) It is unclear what happens to the privacy and utility guarantees if the participant J_i selected by the server for a particular time slot fails to provide a gradient update within the specified time window. 2) Just by examining the server's communication patterns, an adversary may be able to infer the subset of participants involved in a specific update. This is especially dangerous in Algorithm 1, where only a single participant's gradient is used to update the model, so that participant's gradient update may be leaked. 3) The guarantee in Theorem 3.5 holds only for convex ERM. Does that imply that the proposed algorithm is unlikely to work in typical deep neural network training scenarios?
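To make the second concern concrete, here is a minimal toy simulation of the leakage channel, under the assumption (a hypothetical simplification of the paper's protocol) that each client checks in for one random time slot and the server applies at most one client's gradient per slot, as in Algorithm 1. A network observer who sees which client communicates in each slot can attribute each update to its origin:

```python
import random

def random_checkins(num_clients, num_slots, seed=0):
    """Toy model of a random check-in protocol (hypothetical simplification):
    each client independently picks one time slot to check in, and the server
    uses at most one client's gradient per slot."""
    rng = random.Random(seed)
    checkins = {}  # slot -> list of clients who checked in for that slot
    for client in range(num_clients):
        slot = rng.randrange(num_slots)
        checkins.setdefault(slot, []).append(client)
    # The server selects one checked-in client per slot. A passive adversary
    # watching the channel sees exactly which client communicates in each
    # slot, so the origin of each single-client update is revealed.
    schedule = {slot: rng.choice(clients) for slot, clients in checkins.items()}
    return schedule

# Adversary's view: per-slot communicating client == per-slot updater.
schedule = random_checkins(num_clients=100, num_slots=10)
for slot, client in sorted(schedule.items()):
    print(f"slot {slot}: update attributed to client {client}")
```

Nothing in this sketch depends on the content of the gradients; the attribution follows from traffic metadata alone, which is why secure aggregation or anonymous channels would be needed to close this gap.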