Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization
Li, Yuhang, Dong, Xin, Chen, Chen, Li, Jingtao, Wen, Yuxin, Spranger, Michael, Lyu, Lingjuan
arXiv.org Artificial Intelligence
Synthetic image data generation represents a promising avenue for training deep learning models, particularly in transfer learning, where obtaining real images within a specific domain can be prohibitively expensive due to privacy and intellectual-property considerations. This work delves into the generation and utilization of synthetic images derived from text-to-image generative models for facilitating transfer learning. Despite the high visual fidelity of the generated images, we observe that naively incorporating them into existing real-image datasets does not consistently enhance model performance, owing to the inherent distribution gap between synthetic and real images. To address this issue, we introduce a novel two-stage framework called bridged transfer, which initially employs synthetic images to fine-tune a pre-trained model and improve its transferability, and subsequently uses real data for rapid adaptation. Alongside, we propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images. Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements, with accuracy increases of up to 30% on classification tasks. Intriguingly, we note that the enhancements are not yet saturated, indicating that the benefits may grow further with an expanded volume of synthetic data.

Pre-training a model on a large-scale dataset and subsequently transferring it to downstream tasks has proven to be both a practical and effective approach to achieving strong performance across a variety of tasks (Sharif Razavian et al., 2014; Donahue et al., 2014). In the transfer learning pipeline, a model is initially trained on a source dataset and later fine-tuned on various downstream datasets (aka target datasets). Source datasets are typically large-scale, general-purpose, and publicly available, such as ImageNet-1K/21K (Deng et al., 2009).
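The two-stage bridged-transfer recipe described in the abstract can be sketched as two consecutive fine-tuning passes. The snippet below is a minimal illustrative sketch, not the authors' implementation: the tiny linear model, random tensors standing in for synthetic and real images, and all hyperparameters are assumptions chosen only to make the two stages concrete.

```python
import torch
import torch.nn as nn

def fine_tune(model, images, labels, epochs=3, lr=1e-3):
    """One fine-tuning stage: plain supervised training on the given batch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model

torch.manual_seed(0)
# Stand-in for a pre-trained backbone (e.g. an ImageNet model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))

# Stage 1: fine-tune on synthetic images (e.g. sampled from a
# text-to-image model) to improve transferability.
syn_x, syn_y = torch.randn(64, 3, 8, 8), torch.randint(0, 10, (64,))
model = fine_tune(model, syn_x, syn_y)

# Stage 2: rapid adaptation on the (typically much smaller) real dataset.
real_x, real_y = torch.randn(16, 3, 8, 8), torch.randint(0, 10, (16,))
model = fine_tune(model, real_x, real_y, epochs=1, lr=1e-4)
```

The key design point is the ordering: synthetic data is consumed first, as an intermediate bridge between generic pre-training and the target domain, rather than being mixed directly into the real training set.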
Apr-2-2024