
Collaborating Authors

 Rehman, Abdul


Systematic Literature Review on Clinical Trial Eligibility Matching

arXiv.org Artificial Intelligence

Clinical trial eligibility matching is a critical yet often labor-intensive and error-prone step in medical research, as it ensures that participants meet precise criteria for safe and reliable study outcomes. Recent advances in Natural Language Processing (NLP) have shown promise in automating and improving this process by rapidly analyzing large volumes of unstructured clinical text and structured electronic health record (EHR) data. In this paper, we present a systematic overview of current NLP methodologies applied to clinical trial eligibility screening, focusing on data sources, annotation practices, machine learning approaches, and real-world implementation challenges. A comprehensive literature search (spanning Google Scholar, Mendeley, and PubMed from 2015 to 2024) yielded high-quality studies, each demonstrating the potential of techniques such as rule-based systems, named entity recognition, contextual embeddings, and ontology-based normalization to enhance patient matching accuracy. While results indicate substantial improvements in screening efficiency and precision, limitations persist regarding data completeness, annotation consistency, and model scalability across diverse clinical domains. The review highlights how explainable AI and standardized ontologies can bolster clinician trust and broaden adoption. Looking ahead, further research into advanced semantic and temporal representations, expanded data integration, and rigorous prospective evaluations is necessary to fully realize the transformative potential of NLP in clinical trial recruitment.

INTRODUCTION: Clinical trials are vital not only to the progress of medical science but also to ensuring that new treatments and interventions are both effective and safe. Clinical trial eligibility matching begins with identifying potential participants based on the inclusion and exclusion criteria devised for the study.
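To make the rule-based end of the surveyed techniques concrete, the sketch below screens a structured, EHR-style patient record against a handful of hand-written inclusion/exclusion criteria. It is a minimal illustration only: the field names, thresholds, and criteria are hypothetical and not drawn from any study in the review.

```python
# Minimal sketch of rule-based eligibility screening (illustrative only).
# Criteria and patient fields are hypothetical, not taken from a specific trial.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    description: str
    check: Callable[[dict], bool]   # returns True if the patient satisfies it
    inclusion: bool = True          # False marks an exclusion criterion

CRITERIA = [
    Criterion("Age 18-75 years", lambda p: 18 <= p["age"] <= 75),
    Criterion("HbA1c >= 7.0%", lambda p: p["hba1c"] >= 7.0),
    Criterion("Pregnancy", lambda p: p.get("pregnant", False), inclusion=False),
]

def screen(patient: dict) -> tuple[bool, list[str]]:
    """Return (eligible, list of failed criteria) for one patient record."""
    failures = []
    for c in CRITERIA:
        satisfied = c.check(patient)
        # Inclusion criteria must hold; exclusion criteria must not.
        if (c.inclusion and not satisfied) or (not c.inclusion and satisfied):
            failures.append(c.description)
    return (not failures, failures)

if __name__ == "__main__":
    patient = {"age": 54, "hba1c": 7.8, "pregnant": False}
    eligible, failed = screen(patient)
    print("eligible" if eligible else f"ineligible: {failed}")
```

In the NLP systems reviewed, such structured fields would typically be populated by named entity recognition and ontology-based normalization over free-text clinical notes rather than entered by hand.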


SSRepL-ADHD: Adaptive Complex Representation Learning Framework for ADHD Detection from Visual Attention Tasks

arXiv.org Artificial Intelligence

Self-Supervised Representation Learning (SSRepL) can capture meaningful and robust representations of Attention Deficit Hyperactivity Disorder (ADHD) data and has the potential to improve model performance on downstream detection of different types of neurodevelopmental disorders (NDDs). In this paper, a novel SSRepL and Transfer Learning (TL)-based framework that incorporates a Long Short-Term Memory (LSTM) and a Gated Recurrent Unit (GRU) model is proposed to detect children with potential symptoms of ADHD. The model uses Electroencephalogram (EEG) signals recorded during visual attention tasks to accurately detect ADHD, improving signal quality through normalization, filtering, and data balancing. For the experimental analysis, we use three different models: 1) an SSRepL and TL-based LSTM-GRU model, named SSRepL-ADHD, which integrates LSTM and GRU layers to capture temporal dependencies in the data, 2) a lightweight SSRepL-based DNN model (LSSRepL-DNN), and 3) Random Forest (RF). These models are thoroughly evaluated using well-known performance metrics (i.e., accuracy, precision, recall, and F1-score). The results show that the proposed SSRepL-ADHD model achieves the maximum accuracy of 81.11%, while acknowledging the difficulties associated with dataset imbalance and feature selection.
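For readers unfamiliar with stacking LSTM and GRU layers over EEG windows, a minimal PyTorch sketch of such a sequence classifier is given below. The channel count, window length, and layer sizes are assumptions for illustration, and the paper's self-supervised pretraining and transfer-learning stages are omitted; this is not the authors' exact architecture.

```python
# Sketch of an LSTM-GRU classifier for EEG windows (illustrative; layer sizes
# and input shape are assumptions, not the paper's exact configuration).
import torch
import torch.nn as nn

class LstmGruClassifier(nn.Module):
    def __init__(self, n_channels: int = 19, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # LSTM captures longer-range temporal dependencies in the EEG window...
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # ...and a GRU layer further refines the sequence representation.
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_channels)
        out, _ = self.lstm(x)
        out, _ = self.gru(out)
        return self.head(out[:, -1, :])   # classify from the last time step

if __name__ == "__main__":
    model = LstmGruClassifier()
    dummy = torch.randn(8, 256, 19)        # 8 windows, 256 samples, 19 channels
    print(model(dummy).shape)              # torch.Size([8, 2])
```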


DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

arXiv.org Artificial Intelligence

The creation of large, diverse, high-quality robot manipulation datasets is an important stepping stone on the path toward more capable and robust robotic manipulation policies. However, creating such datasets is challenging: collecting robot manipulation data in diverse environments poses logistical and safety challenges and requires substantial investments in hardware and human labour. As a result, even the most general robot manipulation policies today are mostly trained on data collected in a small number of environments with limited scene and task diversity. In this work, we introduce DROID (Distributed Robot Interaction Dataset), a diverse robot manipulation dataset with 76k demonstration trajectories or 350 hours of interaction data, collected across 564 scenes and 84 tasks by 50 data collectors in North America, Asia, and Europe over the course of 12 months. We demonstrate that training with DROID leads to policies with higher performance and improved generalization ability. We open source the full dataset, policy learning code, and a detailed guide for reproducing our robot hardware setup.


Classifying Text-Based Conspiracy Tweets related to COVID-19 using Contextualized Word Embeddings

arXiv.org Artificial Intelligence

The FakeNews task in MediaEval 2022 investigates the challenge of finding accurate and high-performance models for the classification of conspiracy tweets related to COVID-19. In this paper, we used BERT, ELMo, and their combination for feature extraction and a Random Forest as the classifier. The results show that ELMo performs slightly better than BERT; however, combining the two at the feature level reduces performance.
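A minimal sketch of this kind of embeddings-plus-Random-Forest pipeline is shown below, assuming Hugging Face Transformers for the BERT features and scikit-learn for the classifier. The ELMo branch is omitted (its features would simply be concatenated with the BERT vectors), and the toy tweets and labels are placeholders, not MediaEval data.

```python
# Sketch of a contextual-embedding + Random Forest pipeline (illustrative).
# BERT mean-pooled embeddings only; ELMo features would be concatenated analogously.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.ensemble import RandomForestClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

@torch.no_grad()
def embed(texts: list[str]) -> np.ndarray:
    """Mean-pooled BERT token embeddings as fixed-length tweet features."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = bert(**enc).last_hidden_state          # (batch, tokens, 768)
    mask = enc["attention_mask"].unsqueeze(-1)      # ignore padding tokens
    pooled = (hidden * mask).sum(1) / mask.sum(1)
    return pooled.numpy()

# Toy labelled tweets (0 = non-conspiracy, 1 = conspiracy) -- placeholder data.
texts = ["Vaccines went through clinical trials.", "5G towers spread the virus."]
labels = [0, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(embed(texts), labels)
print(clf.predict(embed(["The virus is a hoax invented by elites."])))
```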


Real-time Speech Emotion Recognition Based on Syllable-Level Feature Extraction

arXiv.org Artificial Intelligence

Speech emotion recognition systems have high prediction latency because of the heavy computational requirements of deep learning models, and low generalizability mainly because of the poor reliability of emotional measurements across multiple corpora. To address these problems, we present a speech emotion recognition system based on a reductionist approach that decomposes and analyzes syllable-level features. The mel-spectrogram of an audio stream is decomposed into syllable-level components, which are then analyzed to extract statistical features. The proposed method uses formant attention, noise-gate filtering, and rolling normalization contexts to increase feature-processing speed and tolerance to adverse conditions. A set of syllable-level formant features is extracted and fed into a single-hidden-layer neural network that makes a prediction for each syllable, as opposed to the conventional approach of using a sophisticated deep learner to make sentence-wide predictions. The syllable-level predictions help achieve real-time latency and lower the aggregated error in utterance-level cross-corpus predictions. Experiments on the IEMOCAP (IE), MSP-Improv (MI), and RAVDESS (RA) databases show that the method achieves real-time latency while predicting with state-of-the-art cross-corpus unweighted accuracy of 47.6% for IE to MI and 56.2% for MI to IE.
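The sketch below illustrates the general idea of per-segment prediction followed by utterance-level aggregation. It uses fixed-length chunks as a crude stand-in for syllable boundaries and generic mean/std statistics instead of the paper's formant features; the spectrograms and labels are random placeholders, so this shows only the shape of the pipeline, not the method itself.

```python
# Sketch of syllable-level prediction and utterance-level aggregation
# (illustrative; chunking and features are simplifications, not the paper's method).
import numpy as np
from sklearn.neural_network import MLPClassifier

def segment_features(mel: np.ndarray, chunk: int = 25) -> np.ndarray:
    """Split a mel-spectrogram (mels x frames) into fixed-length chunks and
    summarize each chunk with simple statistics (mean and std per mel band)."""
    feats = []
    for start in range(0, mel.shape[1] - chunk + 1, chunk):
        seg = mel[:, start:start + chunk]
        feats.append(np.concatenate([seg.mean(axis=1), seg.std(axis=1)]))
    return np.array(feats)

rng = np.random.default_rng(0)
# Placeholder "mel-spectrograms" for two training utterances with known labels.
utterances = [(rng.random((40, 200)), 0), (rng.random((40, 200)) + 0.5, 1)]

X_list, y_list = [], []
for mel, label in utterances:
    f = segment_features(mel)
    X_list.append(f)
    y_list.append(np.full(len(f), label))
X, y = np.vstack(X_list), np.concatenate(y_list)

# A single-hidden-layer network predicts a label for each syllable-like chunk.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)

# Utterance-level label = majority vote over per-chunk predictions.
test_mel = rng.random((40, 200)) + 0.5
chunk_preds = clf.predict(segment_features(test_mel))
print("utterance prediction:", np.bincount(chunk_preds).argmax())
```

Aggregating many cheap per-segment predictions in this way is what allows a lightweight model to replace a heavy sentence-level deep learner while keeping latency low.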