Goto

Collaborating Authors

 Thiessard, Frantz


Pre-Training A Neural Language Model Improves the Sample Efficiency of an Emergency Room Classification Model

AAAI Conferences

To build a French national electronic injury surveillance system based on emergency room visits, we aim to develop a coding system to classify their causes from free-text clinical notes. Supervised learning techniques have shown good results in this area but require a large expert-annotated dataset, which is time-consuming and costly to obtain. We hypothesize that a Transformer-based natural language model incorporating a generative self-supervised pre-training step can significantly reduce the number of annotated samples required for supervised fine-tuning. In this preliminary study, we test our hypothesis on the simplified problem of predicting, from free-text clinical notes, whether a visit is the consequence of a traumatic event or not. Using fully re-trained GPT-2 models (without OpenAI pre-trained weights), we assess the gain of applying a self-supervised pre-training phase with unlabeled notes prior to the supervised learning task. Results show that the amount of labeled data required to achieve a given level of performance (AUC > 0.95) was reduced by a factor of 10 when applying pre-training; with 16 times more data, the fully supervised model improved the AUC by less than 1%. To conclude, it is possible to adapt a multi-purpose neural language model such as GPT-2 to create a powerful tool for classifying free-text notes with only a small number of labeled samples.
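The two-phase approach described above (self-supervised pre-training on unlabeled notes, then supervised fine-tuning with a small labeled subset) can be illustrated with the sketch below. This is not the authors' code: it assumes the Hugging Face `transformers` and `torch` libraries, and the model size, hyper-parameters, and placeholder notes are purely illustrative. As in the paper, the GPT-2 backbone is initialized from scratch (no OpenAI weights).

```python
# Minimal sketch, assuming Hugging Face transformers + PyTorch.
# Phase 1: causal-LM pre-training of a from-scratch GPT-2 on unlabeled notes.
# Phase 2: supervised fine-tuning of the same backbone with a 2-class head.
import torch
from transformers import (GPT2Config, GPT2TokenizerFast,
                          GPT2LMHeadModel, GPT2ForSequenceClassification)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")     # vocabulary only
tokenizer.pad_token = tokenizer.eos_token

config = GPT2Config(vocab_size=tokenizer.vocab_size,
                    n_layer=6, n_head=8, n_embd=512,       # small, illustrative
                    num_labels=2,                           # trauma / non-trauma
                    pad_token_id=tokenizer.pad_token_id)

# ---- Phase 1: generative self-supervised pre-training (no labels) ----------
lm = GPT2LMHeadModel(config)                                # random init, no OpenAI weights
optimizer = torch.optim.AdamW(lm.parameters(), lr=5e-5)
unlabeled_notes = ["fall from bicycle, wrist pain", "..."]  # placeholder notes

for note in unlabeled_notes:
    batch = tokenizer(note, return_tensors="pt", truncation=True)
    loss = lm(**batch, labels=batch["input_ids"]).loss      # next-token loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# ---- Phase 2: supervised fine-tuning on a small labeled subset -------------
clf = GPT2ForSequenceClassification(config)
clf.transformer.load_state_dict(lm.transformer.state_dict())  # reuse pre-trained backbone
optimizer = torch.optim.AdamW(clf.parameters(), lr=2e-5)
labeled_notes = [("superficial wound after a fall", 1),     # trauma
                 ("atypical chest pain", 0)]                # non-trauma

for text, label in labeled_notes:
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    loss = clf(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```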


Automatic detection of surgical site infections from a clinical data warehouse

arXiv.org Machine Learning

Reducing the incidence of surgical site infections (SSIs) is one of the objectives of the French nosocomial infection control program. Manual monitoring of SSIs is carried out each year by the hospital hygiene team and surgeons at the University Hospital of Bordeaux. Our goal was to develop an automatic detection algorithm based on hospital information system data. Three years (2015, 2016 and 2017) of manual spine surgery monitoring were used as a gold standard to extract features and train machine learning algorithms. The dataset contained 22 SSIs out of 2133 spine surgeries. Two different approaches were compared. The first used several data sources and achieved the best performance but is difficult to generalize to other institutions. The second was based on free text only, with semi-automatic extraction of discriminant terms. The algorithms identified all the SSIs, with 20 and 26 false positives respectively, on this dataset. Another evaluation is underway. These results are encouraging for the development of semi-automated surveillance methods.
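As a rough illustration of the second, text-only approach, the sketch below classifies free-text surgical follow-up reports using a restricted vocabulary of discriminant terms. It is not the study's actual pipeline: it assumes scikit-learn, and the term list, placeholder reports, and model choice are hypothetical. The `class_weight="balanced"` option compensates for the extreme rarity of positives (22 SSIs out of 2133 surgeries in the study).

```python
# Illustrative sketch only, assuming scikit-learn; terms, data, and model
# choice are hypothetical stand-ins for the paper's semi-automatic
# extraction of discriminant terms.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical discriminant terms selected with expert review.
discriminant_terms = ["infection", "pus", "abscess", "reoperation", "fever"]

pipeline = Pipeline([
    # Restrict features to the pre-selected discriminant terms.
    ("tfidf", TfidfVectorizer(vocabulary=discriminant_terms)),
    # Balanced class weights compensate for the rarity of SSIs.
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])

# Placeholder data: free-text follow-up reports and SSI labels (1 = SSI).
reports = ["wound with pus, reoperation scheduled",
           "uncomplicated follow-up, scar healing well"]
labels = [1, 0]

pipeline.fit(reports, labels)
print(pipeline.predict(["fever and abscess at the surgical site"]))
```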


Neural Language Model for Automated Classification of Electronic Medical Records at the Emergency Room. The Significant Benefit of Unsupervised Generative Pre-training

arXiv.org Artificial Intelligence

In order to build a national injury surveillance system based on emergency room (ER) visits, we are developing a coding system to classify their causes from the content of clinical notes. Supervised learning techniques have shown good results in this area but require a large, manually annotated training dataset. New levels of performance have recently been achieved with neural language models (NLMs) based on the Transformer architecture that include an unsupervised generative pre-training step. Our hypothesis is that methods involving a generative self-supervised pre-training step significantly reduce the number of annotated samples required for supervised fine-tuning. In this case study, we assessed whether we could predict, from free-text clinical notes, whether a visit was the consequence of a traumatic or a non-traumatic event. We compared two strategies. Strategy A consisted of training the GPT-2 NLM on the full dataset of 161 930 samples with all labels (trauma/non-trauma). In Strategy B, we split the training dataset into two parts: a large one of 151 930 samples without any labels for the self-supervised pre-training phase, and a smaller one (up to 10 000 samples) with labels for supervised fine-tuning. While Strategy A needed to process 40 000 samples to achieve good performance (AUC > 0.95), Strategy B needed only 500 samples, an 80-fold reduction. Moreover, an AUC of 0.93 was measured with only 30 labeled samples processed 3 times (3 epochs). To conclude, it is possible to adapt a multi-purpose NLM such as GPT-2 to create a powerful tool for classifying free-text notes using only a very small number of labeled samples. Only two classes (trauma/non-trauma) were predicted in this case study, but the same method can be applied to multi-class tasks such as coding against diagnosis or disease terminologies.
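The data split behind Strategy B and the AUC-based evaluation used in both strategies can be summarized with the short sketch below. It is a hypothetical outline, not the authors' code: the split sizes mirror the abstract, the labels and scores are placeholders, and the GPT-2 training itself is sketched in the first code block on this page.

```python
# Hypothetical sketch of the Strategy B split and the AUC evaluation.
import random
from sklearn.metrics import roc_auc_score

def split_for_strategy_b(notes_with_labels, n_finetune=10_000, seed=0):
    """Strategy B: keep a small labeled subset (up to 10 000 notes) for
    fine-tuning; use the remaining notes, labels dropped, for pre-training."""
    shuffled = list(notes_with_labels)
    random.Random(seed).shuffle(shuffled)
    finetune = shuffled[:n_finetune]                          # (text, label) pairs
    pretrain = [text for text, _ in shuffled[n_finetune:]]    # unlabeled texts
    return pretrain, finetune

# Evaluation metric from the abstract: area under the ROC curve.
y_true = [1, 0, 1, 0]                    # placeholder trauma labels
y_score = [0.92, 0.10, 0.85, 0.30]       # placeholder model probabilities
print("AUC =", roc_auc_score(y_true, y_score))
```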