








Neural Information Processing Systems

Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry. Until recently, deep active learning methods were ineffectual in the low-budget regime, where only a small number of examples are annotated. The situation has been alleviated by recent advances in representation and self-supervised learning, which impart the geometry of the data representation with rich information about the points.
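One common way to exploit that representation geometry in the low-budget regime is to cluster the self-supervised embeddings and annotate the example nearest to each cluster center, so the few labels cover the data's modes. The sketch below is a minimal, hypothetical illustration of that idea (a naive k-means over embeddings; `low_budget_select` and its parameters are assumptions, not the method of any specific paper):

```python
import numpy as np

def low_budget_select(embeddings, budget, rng=None):
    """Pick up to `budget` points spread over the embedding space.

    Hypothetical low-budget selection heuristic: run k-means with
    `budget` clusters on self-supervised embeddings, then label the
    point closest to each centroid.
    """
    rng = np.random.default_rng(rng)
    n, _ = embeddings.shape
    # initialize centers with randomly chosen points (naive k-means)
    centers = embeddings[rng.choice(n, budget, replace=False)]
    for _ in range(20):
        # squared distances: (n, budget)
        dists = ((embeddings[:, None] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for k in range(budget):
            pts = embeddings[assign == k]
            if len(pts):
                centers[k] = pts.mean(0)
    # annotate the real point nearest to each centroid
    dists = ((embeddings[:, None] - centers[None]) ** 2).sum(-1)
    return np.unique(dists.argmin(0))
```

The returned indices are the examples to send for annotation; duplicates are merged, so fewer than `budget` points may come back when clusters collapse.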


Appendix A Related Work


For the latter, PT-based methods adaptively extract a matching width-based, slimmed-down sub-model from the global model as a local model according to each client's budget, thus avoiding the need for public data. As with FedAvg, PT-based methods require the server to communicate periodically with the clients. Existing PT-based methods focus on how to extract width-based sub-models from the global model. DFKD methods are promising: they transfer knowledge from a teacher model to a student model without any real data. Existing DFKD methods can be broadly classified into non-adversarial and adversarial training methods; both take the quality and/or diversity of the synthetic data as important objectives.
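The width-based extraction step can be pictured as slicing each layer's weight matrix down to a fraction of its units that matches the client's budget. The sketch below is a minimal, hypothetical illustration in the style of static slicing (keeping the first units of each hidden layer); `extract_submodel` and its conventions are assumptions, not the API of any specific framework:

```python
import numpy as np

def extract_submodel(global_weights, ratio):
    """Slice a width-based sub-model from a stack of dense layers.

    global_weights: list of (out_dim, in_dim) weight matrices.
    ratio: the client's width budget in (0, 1].
    Keeps the first ceil(ratio * out_dim) units of every hidden
    layer; the input dimension and the final output dimension are
    left unchanged so the sub-model still maps inputs to labels.
    """
    sub = []
    n_layers = len(global_weights)
    prev_keep = global_weights[0].shape[1]  # input dim unchanged
    for i, w in enumerate(global_weights):
        out_dim = w.shape[0]
        if i == n_layers - 1:
            keep = out_dim  # final layer: keep all output units
        else:
            keep = max(1, int(np.ceil(ratio * out_dim)))
        sub.append(w[:keep, :prev_keep].copy())
        prev_keep = keep
    return sub
```

During aggregation, the server would average each global parameter over only those clients whose slice contains it, which is what makes budget-heterogeneous training possible without public data.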