Collaborating Authors

 Othmani, Alice


Prediction and Detection of Terminal Diseases Using Internet of Medical Things: A Review

arXiv.org Artificial Intelligence

The integration of Artificial Intelligence (AI) and the Internet of Medical Things (IoMT) in healthcare, through Machine Learning (ML) and Deep Learning (DL) techniques, has advanced the prediction and diagnosis of chronic diseases. AI-driven models such as XGBoost, Random Forest, CNNs, and LSTM RNNs have achieved over 98% accuracy in predicting heart disease, chronic kidney disease (CKD), Alzheimer's disease, and lung cancer, using datasets from platforms like Kaggle, UCI, private institutions, and real-time IoMT sources. However, challenges persist due to variations in data quality, patient demographics, and formats across hospitals and research sources. The incorporation of IoMT data, which is vast and heterogeneous, adds complexity to ensuring interoperability and the security needed to protect patient privacy. AI models often struggle with overfitting, performing well in controlled environments but less effectively in real-world clinical settings. Moreover, multi-morbidity scenarios, especially those involving diseases such as dementia, stroke, and cancer, remain insufficiently addressed. Future research should focus on data standardization and advanced preprocessing techniques to improve data quality and interoperability. Transfer learning and ensemble methods are crucial for improving model generalizability across clinical settings. Additionally, disease interactions need to be explored and predictive models developed for intersections of chronic illnesses. Creating standardized frameworks and open-source tools for integrating federated learning, blockchain, and differential privacy into IoMT systems will also ensure robust data privacy and security.
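The federated-learning direction mentioned in this review can be illustrated with the core aggregation step of federated averaging (FedAvg): each client trains locally and shares only model parameters, never raw patient records. This is a minimal generic sketch, not tied to any specific IoMT system; the function name and list-of-weights representation are assumptions for illustration.

```python
def federated_average(client_weights):
    """Average model parameters element-wise across clients (the core
    aggregation step of federated averaging). Each client contributes
    only its trained weights, so raw patient data never leaves the site."""
    n_clients = len(client_weights)
    return [sum(params) / n_clients for params in zip(*client_weights)]

# Two hypothetical hospital clients, each holding a 3-parameter model:
global_model = federated_average([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
# global_model == [2.0, 3.0, 4.0]
```

In practice the averaging is usually weighted by each client's dataset size, and the exchanged updates can additionally be protected with differential-privacy noise, as the abstract suggests.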


A Novel Stochastic Transformer-based Approach for Post-Traumatic Stress Disorder Detection using Audio Recording of Clinical Interviews

arXiv.org Artificial Intelligence

Post-traumatic stress disorder (PTSD) is a mental disorder that can develop after witnessing or experiencing extremely traumatic events. PTSD can affect anyone, regardless of ethnicity or culture. An estimated one in every eleven people will experience PTSD during their lifetime. The Clinician-Administered PTSD Scale (CAPS) and the PTSD Checklist for Civilians (PCL-C) interviews are gold standards in the diagnosis of PTSD, but these questionnaires can be fooled by the subject's responses. This work proposes a deep learning-based approach for PTSD detection using audio recordings of clinical interviews. Our approach is based on MFCC low-level features extracted from the recordings, followed by deep high-level learning using a Stochastic Transformer. It achieves state-of-the-art performance, with an RMSE of 2.92 on the eDAIC dataset, thanks to stochastic depth, stochastic deep learning layers, and a stochastic activation function.
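The stochastic-depth idea behind the Stochastic Transformer can be sketched in a few lines: during training each residual branch is skipped at random, and at inference every branch runs but is scaled by its survival probability. This is a generic illustration of stochastic depth, not the paper's implementation; the function and parameter names are assumptions.

```python
import random

def stochastic_depth_forward(x, layers, survival_prob=0.8, training=True):
    """Residual forward pass with stochastic depth.

    During training, each layer's residual branch is dropped with
    probability (1 - survival_prob); at inference all branches run,
    scaled by survival_prob to match the training-time expectation."""
    for layer in layers:
        if training:
            if random.random() < survival_prob:
                x = x + layer(x)
        else:
            x = x + survival_prob * layer(x)
    return x

# Deterministic inference example with a single identity branch:
y = stochastic_depth_forward(2.0, [lambda v: v], survival_prob=0.5, training=False)
# y == 2.0 + 0.5 * 2.0 == 3.0
```

Randomly dropping layers acts as a regularizer on small clinical datasets like eDAIC, which is presumably why the stochastic components help here.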


Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review

arXiv.org Artificial Intelligence

Existing artificial intelligence (AI) models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability, despite achieving medical-expert-like performance. This opacity makes them challenging to trust in clinical practice. Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction by revealing how the prediction is derived, thus promoting the use of AI systems in healthcare. This paper presents the first survey of XAI techniques used for knee OA diagnosis. The XAI techniques are discussed from two perspectives: data interpretability and model interpretability. The aim of this paper is to provide valuable insights into XAI's potential towards a more reliable knee OA diagnosis approach and encourage its adoption in clinical practice.


Deep Multi-Facial patches Aggregation Network for Expression Classification from Face Images

arXiv.org Artificial Intelligence

Emotional intelligence in Human-Computer Interaction (HCI) has attracted increasing attention from researchers in multidisciplinary fields including psychology, computer vision, neuroscience, and artificial intelligence. Humans naturally prefer to interact with computers face-to-face, and facial expressions are an important key to linking humans and computers. Thus, designing interfaces able to understand human expressions and emotions can improve HCI and enable better communication. In this paper, we investigate HCI via a deep multi-facial patches aggregation network for Facial Expression Recognition (FER). Deep features are extracted from facial parts and aggregated for expression classification. Several problems may affect the performance of the proposed framework, such as the small size of FER datasets and the high number of parameters to learn. To address this, two data augmentation techniques are proposed for facial expression generation to expand the labeled training set. The proposed framework is evaluated on the extended Cohn-Kanade dataset (CK+) and promising results are achieved.
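Data augmentation for small FER datasets can be as simple as label-preserving geometric transforms; below is a minimal horizontal-flip sketch on a raw pixel grid. This illustrates generic augmentation only — the paper's two expression-generation techniques are more involved — and the function name and list-of-rows image representation are assumptions.

```python
def horizontal_flip(image):
    """Mirror an image left-to-right. `image` is a list of rows, each
    row a list of pixel intensities; facial expressions are largely
    symmetric, so the expression label is preserved."""
    return [list(reversed(row)) for row in image]

patch = [[10, 20, 30],
         [40, 50, 60]]
flipped = horizontal_flip(patch)
# flipped == [[30, 20, 10], [60, 50, 40]]
```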


MFCC-based Recurrent Neural Network for Automatic Clinical Depression Recognition and Assessment from Speech

arXiv.org Artificial Intelligence

Emna Rejaibi (a,b,c), Ali Komaty (d), Fabrice Meriaudeau (e), Said Agrebi (c), Alice Othmani (a)

(a) Université Paris-Est, LISSI, UPEC, 94400 Vitry sur Seine, France; (b) INSAT Institut National des Sciences Appliquées et de Technologie, Centre Urbain Nord BP 676-1080, Tunis, Tunisia; (c) Yobitrust, Technopark El Gazala B11 Route de Raoued Km 3.5, 2088 Ariana, Tunisia; (d) University of Sciences and Arts in Lebanon, Ghobeiry, Lebanon; (e) Université de Bourgogne Franche-Comté, ImViA EA7535/IFTIM

Abstract: Major depression, also known as clinical depression, is a constant sense of despair and hopelessness. It is a major mental disorder that can affect people of any age, including children, and that negatively affects a person's personal life, work life, social life, and health. Globally, over 300 million people of all ages are estimated to suffer from clinical depression. A deep recurrent neural network-based framework is presented in this paper to detect depression and to predict its severity level from speech. Low-level and high-level audio features are extracted from audio recordings to predict the 24 scores of the Patient Health Questionnaire (a depression assessment test) and the binary class of depression diagnosis. To overcome the small size of Speech Depression Recognition (SDR) datasets, data augmentation techniques are used to expand the labeled training set, and transfer learning is performed: the proposed model is trained on a related task and reused as a starting point for the SDR task. The proposed framework is evaluated on the DAIC-WOZ corpus of the AVEC2017 challenge and promising results are obtained. An overall accuracy of 76.27% with a root mean square error of 0.4 is achieved in assessing depression, while a root mean square error of 0.168 is achieved in predicting the depression severity levels.
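The RMSE figures reported above are computed in the standard way, as the square root of the mean squared difference between reference scores and predictions; a minimal sketch (function name assumed):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between reference scores and predictions."""
    assert len(y_true) == len(y_pred) and y_true
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy example on four PHQ-style severity scores (perfect predictions):
error = rmse([0, 1, 2, 3], [0, 1, 2, 3])
# error == 0.0
```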
Introduction: Depression is a mental disorder caused by several factors: psychological, social, or even physical. Psychological factors relate to permanent stress and the inability to cope successfully with difficult situations. Social factors concern relationship struggles with family or friends, while physical factors cover head injuries. Depression is characterized by a loss of interest in the exciting and joyful aspects of everyday life. Mood disorders and mood swings are temporary mental states that are part of daily life, whereas depression is more persistent and can lead to suicide at its most severe levels.