AI can evolve without labels: self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation

Sangjoon Park, Gwanghyun Kim, Yujin Oh, Joon Beom Seo, Sang Min Lee, Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Chang Min Park, Jong Chul Ye

arXiv.org Artificial Intelligence 

Deep learning-based AI models have demonstrated the potential to dramatically reduce the workload of clinicians in a variety of contexts when used as assistants, leveraging their power to handle a large corpus of data in parallel. The advantage can be maximized in resource-limited settings such as underdeveloped countries, where diseases such as tuberculosis are prevalent while experts who can provide an accurate diagnosis are scarce. Most existing AI tools are convolutional neural network (CNN) models built with supervised learning, but collecting large and well-curated data with ground truth annotation is difficult in these underprivileged areas, even though the amount of unlabeled data is abundant. In particular, although the volume of data in these areas grows every year, the lack of ground truth annotation prevents this growing body of data from being used to improve the performance of AI models. Given this limitation in label availability, an important line of machine learning research is self-supervised and semi-supervised learning, which relies less on a corpus of labeled data. The orthodoxy has been that a model trained with supervised learning sets the upper bound of achievable performance. However, it was recently shown that self-training with knowledge distillation between a teacher and a noisy student, a type of semi-supervised learning, can substantially improve the robustness of the model to adversarial perturbations.
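To make the teacher/noisy-student distillation idea referenced above concrete, the following is a minimal PyTorch-style sketch of one self-training update, not the paper's actual training procedure: the noise model (additive Gaussian on the input), the temperature `tau`, and the function names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, unlabeled_batch, optimizer, tau=2.0):
    """One self-training step on unlabeled data: a frozen teacher produces
    soft pseudo-labels, and a noisy student is trained to match them."""
    teacher.eval()
    with torch.no_grad():
        # Teacher sees the clean input and provides soft targets.
        teacher_logits = teacher(unlabeled_batch)

    # Student sees a noised view of the same input (illustrative input noise;
    # dropout or stochastic depth inside the student adds model noise).
    noisy_batch = unlabeled_batch + 0.1 * torch.randn_like(unlabeled_batch)
    student_logits = student(noisy_batch)

    # KL divergence between temperature-softened distributions,
    # scaled by tau**2 as in standard knowledge distillation.
    loss = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because no ground truth label appears in this step, the loop can consume the abundant unlabeled data described above; the student can later replace the teacher and the cycle can repeat.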
