Lightweight Transformers for Clinical Natural Language Processing

Omid Rohanian, Mohammadmahdi Nouriborji, Hannah Jauncey, Samaneh Kouchaki, ISARIC Clinical Characterisation Group, Lei Clifton, Laura Merson, David A. Clifton

arXiv.org Artificial Intelligence 

Specialised pre-trained language models are becoming more frequent in NLP since they can potentially outperform models trained on generic texts. BioBERT (Lee et al., 2020) and BioClinicalBERT (Alsentzer et al., 2019) are two examples of such models that have shown promise in medical NLP tasks. Many of these models are overparametrised and resource-intensive, but thanks to techniques like Knowledge Distillation (KD), it is possible to create smaller versions that perform almost as well as their larger counterparts. In this work, we focus specifically on the development of compact language models for processing clinical texts. We developed a number of efficient lightweight clinical transformers using knowledge distillation and continual learning, with the number of parameters ranging from 15 million to 65 million. These models performed comparably to larger models such as BioBERT and BioClinicalBERT and significantly outperformed other compact models trained on general or biomedical data. Our extensive evaluation was done across several standard datasets and covered a wide range of clinical text-mining tasks, including Natural Language Inference, Relation Extraction, Named Entity Recognition, and Sequence Classification. To our knowledge, this is the first comprehensive study specifically focused on creating efficient and compact transformers for clinical NLP tasks. The models and code used in this study can be found on our Huggingface profile at https://huggingface.co/nlpie and on our Github page at https://github.com/

Large language models pre-trained on generic texts serve as the foundation upon which most state-of-the-art NLP models are built. There is ample evidence that, for certain domains and downstream tasks, models pre-trained on specialised data outperform baselines that have relied only on generic texts (Lee et al., 2020; Alsentzer et al., 2019; Beltagy et al., 2019; Nguyen et al., 2020; Chalkidis et al., 2020).
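The abstract names Knowledge Distillation (KD) as the mechanism for producing the compact students, but the exact objective is not spelled out here. As a minimal sketch, assuming the standard soft-target formulation (temperature-scaled KL divergence against the teacher combined with cross-entropy against gold labels), where the function name, `temperature`, and `alpha` are illustrative choices rather than the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic soft-target KD loss: mix the KL divergence from the
    teacher's softened distribution with the usual supervised loss."""
    # Soften both distributions with the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 to keep gradient magnitudes comparable.
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the gold labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

The paper also mentions continual learning on clinical text as part of building these models; that stage is not represented in this sketch.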

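Since the released checkpoints are hosted on the Hugging Face Hub, they can presumably be loaded with the standard `transformers` Auto classes. The snippet below is a usage sketch only; the identifier `nlpie/distil-biobert` and the two-label classification head are assumptions for illustration, so check https://huggingface.co/nlpie for the actual list of released models.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative model identifier; see https://huggingface.co/nlpie for
# the checkpoints actually released with the paper.
model_name = "nlpie/distil-biobert"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# A fresh 2-label head is initialised here purely as an example task.
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

inputs = tokenizer("Patient denies chest pain or shortness of breath.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```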