
Collaborating Authors

Meriaudeau, Fabrice


Automatic quantification of breast cancer biomarkers from multiple 18F-FDG PET image segmentation

arXiv.org Artificial Intelligence

Neoadjuvant chemotherapy (NAC) has become a standard clinical practice for tumor downsizing in breast cancer, monitored with 18F-FDG Positron Emission Tomography (PET). Our work aims to leverage PET imaging for the segmentation of breast lesions. The focus is on developing an automated system that accurately segments primary tumor regions and extracts key biomarkers from these areas to provide insights into the evolution of breast cancer following the first course of NAC. 243 baseline 18F-FDG PET scans (PET_Bl) and 180 follow-up 18F-FDG PET scans (PET_Fu) were acquired before and after the first course of NAC, respectively. First, a deep learning-based breast tumor segmentation method was developed. The best baseline model (trained on baseline exams) was fine-tuned on 15 follow-up exams and adapted using active learning to segment tumor areas in PET_Fu. The pipeline computes biomarkers such as the maximum standardized uptake value (SUVmax), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) to evaluate tumor evolution between PET_Fu and PET_Bl. Quality control measures were employed to exclude aberrant outliers. The nnUNet deep learning model performed best in tumor segmentation on PET_Bl, achieving a Dice similarity coefficient (DSC) of 0.89 and a Hausdorff distance (HD) of 3.52 mm. After fine-tuning, the model achieved a DSC of 0.78 and an HD of 4.95 mm on PET_Fu exams. Biomarker analysis revealed very strong correlations, for every biomarker, between manually segmented and automatically predicted regions. The significant average decreases in SUVmax, MTV and TLG were 5.22, 11.79 cm3 and 19.23 cm3, respectively. The presented approach demonstrates an automated system for breast tumor segmentation from 18F-FDG PET. Thanks to the extracted biomarkers, our method enables the automatic assessment of cancer progression.
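The three biomarkers named in the abstract have standard definitions: SUVmax is the peak voxel uptake inside the lesion, MTV is the segmented volume, and TLG is the product of mean uptake and MTV. A minimal sketch of that computation, assuming a SUV-calibrated PET volume and a binary tumor mask (the function name and signature are illustrative, not the authors' pipeline):

```python
import numpy as np

def pet_biomarkers(suv_volume, tumor_mask, voxel_volume_cm3):
    """Compute SUVmax, MTV and TLG from a SUV-calibrated PET volume and a
    binary tumor mask (hypothetical helper, not the paper's actual code)."""
    voxels = suv_volume[tumor_mask.astype(bool)]      # SUV values inside the lesion
    suv_max = float(voxels.max())                     # maximum standardized uptake value
    mtv = voxels.size * voxel_volume_cm3              # metabolic tumor volume (cm^3)
    tlg = float(voxels.mean()) * mtv                  # total lesion glycolysis = SUVmean * MTV
    return suv_max, mtv, tlg
```

Tumor evolution between PET_Bl and PET_Fu can then be expressed as the difference of each biomarker computed on the two exams.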


Placental Vessel Segmentation and Registration in Fetoscopy: Literature Review and MICCAI FetReg2021 Challenge Findings

arXiv.org Artificial Intelligence

Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to regulate blood exchange among twins. It is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation. Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, which was organized as part of the MICCAI2021 Endoscopic Vision challenge, we released the first large-scale multicentre TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures and 18 short video clips. Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. The challenge provided an opportunity for creating generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge alongside a detailed literature review for CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-centre fetoscopic data, we provide a benchmark for future research in this field.
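For pixel-annotated multi-class segmentation like this (vessel, tool, fetus, background), the usual evaluation is a per-class Dice similarity coefficient over the label maps. A sketch of that metric, assuming integer label images; the challenge's exact evaluation script may differ:

```python
import numpy as np

def dice_per_class(pred, gt, num_classes):
    """Per-class Dice similarity coefficient for pixel-wise label maps
    (illustrative metric sketch, not the official FetReg2021 scorer)."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        g = (gt == c)
        denom = p.sum() + g.sum()
        # Convention: a class absent from both prediction and ground truth scores 1.0
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom)
    return scores
```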


Road scenes analysis in adverse weather conditions by polarization-encoded images and adapted deep learning

arXiv.org Machine Learning

Rachel Blin, Samia Ainouz, Stéphane Canu and Fabrice Meriaudeau

Abstract -- Object detection in road scenes is necessary to develop both autonomous vehicles and driving assistance systems. Even though deep neural networks have shown great performance on recognition tasks using conventional images, they fail to detect objects in road scenes under complex acquisition conditions. In contrast, polarization images, characterizing the light wave, can robustly describe important physical properties of an object even under poor illumination or strong reflections. This paper shows how a non-conventional polarimetric imaging modality outperforms classical methods for object detection, especially in adverse weather conditions. The efficiency of the proposed method is due both to the power of polarimetry to discriminate objects by their reflective properties and to the use of deep neural networks for object detection. Our goal in this work is to show that polarimetry brings real added value compared with RGB images for object detection. Experimental results on our own dataset, composed of road-scene images taken during adverse weather conditions, show that polarimetry together with deep learning can improve the state of the art by about 20% to 50% on different detection tasks.
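Polarization-encoded images are commonly derived from intensity measurements behind linear polarizers at several orientations, from which the Stokes parameters, degree of linear polarization (DoLP) and angle of linear polarization (AoLP) follow. A sketch using the standard four-angle formulas (the paper's exact encoding may differ):

```python
import numpy as np

def polarization_features(i0, i45, i90, i135):
    """Stokes parameters, DoLP and AoLP from four intensity images captured
    behind polarizers at 0/45/90/135 degrees (standard textbook formulas)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)            # total intensity
    s1 = i0 - i90                                 # horizontal vs vertical preference
    s2 = i45 - i135                               # diagonal preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)               # angle of linear polarization (radians)
    return s0, s1, s2, dolp, aolp
```

Stacking DoLP, AoLP and S0 as channels is one common way to feed polarimetric information to a conventional detection network.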


MFCC-based Recurrent Neural Network for Automatic Clinical Depression Recognition and Assessment from Speech

arXiv.org Artificial Intelligence

Emna Rejaibi a,b,c, Ali Komaty d, Fabrice Meriaudeau e, Said Agrebi c, Alice Othmani a
a Université Paris-Est, LISSI, UPEC, 94400 Vitry sur Seine, France
b INSAT Institut National des Sciences Appliquées et de Technologie, Centre Urbain Nord BP 676-1080, Tunis, Tunisia
c Yobitrust, Technopark El Gazala B11 Route de Raoued Km 3.5, 2088 Ariana, Tunisia
d University of Sciences and Arts in Lebanon, Ghobeiry, Lebanon
e Université de Bourgogne Franche-Comté, ImViA EA7535 / IFTIM

Abstract: Major depression, also known as clinical depression, is a constant sense of despair and hopelessness. It is a major mental disorder that can affect people of any age, including children, and that negatively affects a person's personal life, work life, social life and health. Globally, over 300 million people of all ages are estimated to suffer from clinical depression. A deep recurrent neural network-based framework is presented in this paper to detect depression and to predict its severity level from speech. Low-level and high-level audio features are extracted from audio recordings to predict the 24 scores of the Patient Health Questionnaire (a depression assessment test) and the binary class of depression diagnosis. To overcome the small size of Speech Depression Recognition (SDR) datasets, data augmentation techniques are used to expand the labeled training set, and transfer learning is performed: the proposed model is trained on a related task and reused as a starting point for the SDR task. The proposed framework is evaluated on the DAIC-WOZ corpus of the AVEC2017 challenge and promising results are obtained. An overall accuracy of 76.27% with a root mean square error of 0.4 is achieved in assessing depression, while a root mean square error of 0.168 is achieved in predicting the depression severity levels.
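The MFCC features named in the title are computed by framing the waveform, taking a power spectrum, pooling it through a mel-scaled filterbank, and decorrelating the log energies with a DCT. A from-scratch sketch of that pipeline (illustrative only; the parameter values are common defaults, not the authors' settings, and libraries such as librosa are typically used in practice):

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_mfcc=13):
    """Minimal MFCC extractor from scratch (sketch, not the paper's pipeline)."""
    # Frame the signal and apply a Hamming window before the FFT
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hamming(n_fft), axis=1)) ** 2
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(0.0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the log-mel energies; keep the first n_mfcc coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), n + 0.5) / n_mels)
    return logmel @ dct.T
```

The resulting (frames x n_mfcc) matrix is the kind of sequence a recurrent network consumes frame by frame.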
Introduction

Depression is a mental disorder caused by several factors: psychological, social, or even physical. Psychological factors relate to permanent stress and the inability to cope with difficult situations; social factors concern relationship struggles with family or friends; and physical factors cover head injuries. Depression describes a loss of interest in every exciting and joyful aspect of everyday life. Mood disorders and mood swings are temporary mental states that are part of daily life, whereas depression is more persistent and, at its most severe levels, can lead to suicide.