Review for NeurIPS paper: Attack of the Tails: Yes, You Really Can Backdoor Federated Learning


Weaknesses: Major Concerns

1. D_edge - I have some concerns regarding the backdoors injected via D_edge (i.e., data drawn from the tail end of some distribution). Wouldn't essentially any dataset outside of MNIST exhibit similar statistics (e.g., CIFAR, EMNIST), possibly even adversarially crafted data? More generally, I find the construction of the p-edge-case dataset in the paper loosely defined: almost everywhere in the paper, D' is defined as D \cup D_edge, whereas I expected D_edge to be mixed only with a particular data partition D_i.
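To make the distinction I am raising concrete, here is a minimal sketch (the function names and union-style mixing are my own illustration, not the paper's code) contrasting a globally mixed D' = D \cup D_edge with mixing D_edge into a single client partition D_i:

```python
def mix_global(D, D_edge):
    """Global mixing: poisoned set D' = D union D_edge (as the paper's notation reads)."""
    return D + D_edge

def mix_per_client(partitions, D_edge, adversary_id):
    """Per-partition mixing: only the adversary's local shard D_i is
    combined with the edge-case examples (what I expected)."""
    mixed = list(partitions)
    mixed[adversary_id] = partitions[adversary_id] + D_edge
    return mixed

# Toy data: 3 clients with 4 samples each, plus 2 edge-case samples.
partitions = [[f"c{i}_x{j}" for j in range(4)] for i in range(3)]
D = [x for p in partitions for x in p]
D_edge = ["edge_0", "edge_1"]

print(len(mix_global(D, D_edge)))                     # 14: all clients' data plus edge cases
print(len(mix_per_client(partitions, D_edge, 0)[0]))  # 6: only client 0's shard grows
```

Under the per-client reading, honest clients' shards are untouched, which matters for how realistic the threat model is in a federated setting.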