Differentially Private Adapters for Parameter Efficient Acoustic Modeling
Chun-Wei Ho, Chao-Han Huck Yang, Sabato Marco Siniscalchi
arXiv.org Artificial Intelligence
In this work, we devise a parameter-efficient solution for bringing differential privacy (DP) guarantees into the adaptation of a cross-lingual speech classifier. We investigate a new frozen pre-trained adaptation framework for DP-preserving speech modeling that avoids full model fine-tuning. First, we introduce a noisy teacher-student ensemble into a conventional adaptation scheme that leverages a frozen pre-trained acoustic model, attaining performance superior to DP-based stochastic gradient descent (DPSGD). Next, we insert residual adapters (RAs) between the layers of the frozen pre-trained acoustic model. The RAs reduce training cost and training time significantly with a negligible performance drop. Evaluated on the open-access Multilingual Spoken Words (MLSW) dataset, our solution reduces the number of trainable parameters by 97.5% using the RAs, with only a 4% performance drop relative to fine-tuning the cross-lingual speech classifier, while preserving DP guarantees.
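The abstract gives no implementation details, but the residual-adapter idea it describes can be sketched concretely. Below is a minimal, hypothetical NumPy illustration of a bottleneck residual adapter: a small down-projection, a nonlinearity, and an up-projection added back to the frozen layer's output via a skip connection. The bottleneck width, ReLU activation, and zero-initialized up-projection (so the adapter starts as an identity mapping) are common conventions and are assumptions here, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_adapter(dim, bottleneck=32):
    """Create parameters for a hypothetical bottleneck residual adapter.

    The up-projection starts at zero so the adapter initially acts as an
    identity mapping; only the small adapter matrices would be trained,
    while the pre-trained acoustic model stays frozen.
    """
    return {
        "W_down": rng.standard_normal((dim, bottleneck)) * 0.02,
        "b_down": np.zeros(bottleneck),
        "W_up": np.zeros((bottleneck, dim)),
        "b_up": np.zeros(dim),
    }

def adapter_forward(params, x):
    # Bottleneck transform with ReLU, then a residual (skip) connection.
    h = np.maximum(x @ params["W_down"] + params["b_down"], 0.0)
    return x + h @ params["W_up"] + params["b_up"]

# With a zero-initialized up-projection the adapter is exactly the identity:
x = rng.standard_normal((4, 128))
params = make_adapter(128)
assert np.allclose(adapter_forward(params, x), x)
```

The parameter-efficiency claim follows from the shapes: an adapter holds roughly `2 * dim * bottleneck` trainable weights per layer, versus `dim * dim` (and up) for the frozen layer it wraps, which is how trainable-parameter reductions on the order of those reported become possible.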
May-18-2023
- Country:
- Asia > China
- Shaanxi Province > Xi'an (0.04)
- Europe
- Austria > Styria
- Graz (0.04)
- France (0.04)
- Germany > Bavaria
- Upper Bavaria > Munich (0.04)
- Italy (0.04)
- Norway (0.04)
- Romania > Sud - Muntenia Development Region
- Giurgiu County > Giurgiu (0.04)
- North America > United States (0.28)
- Genre:
- Research Report (0.50)
- Industry:
- Information Technology > Security & Privacy (0.68)
- Technology: