Azghadi, Mostafa Rahimi
NeuroMorse: A Temporally Structured Dataset For Neuromorphic Computing
Walters, Ben, Bethi, Yeshwanth, Kergan, Taylor, Nguyen, Binh, Amirsoleimani, Amirali, Eshraghian, Jason K., Afshar, Saeed, Azghadi, Mostafa Rahimi
Neuromorphic engineering aims to advance computing by mimicking the brain's efficient processing, where data is encoded as asynchronous temporal events. This eliminates the need for a synchronisation clock and minimises power consumption when no data is present. However, many benchmarks for neuromorphic algorithms primarily focus on spatial features, neglecting the temporal dynamics that are inherent to most sequence-based tasks. This gap may lead to evaluations that fail to fully capture the unique strengths and characteristics of neuromorphic systems. In this paper, we present NeuroMorse, a temporally structured dataset designed for benchmarking neuromorphic learning systems. NeuroMorse converts the top 50 words in the English language into temporal Morse code spike sequences. Despite using only two input spike channels for Morse dots and dashes, complex information is encoded through temporal patterns in the data. The proposed benchmark contains feature hierarchies at multiple temporal scales that test the capacity of neuromorphic algorithms to decompose input patterns into spatial and temporal hierarchies. We demonstrate that our training set is challenging to categorise using a linear classifier and that identifying keywords in the test set is difficult using conventional methods.
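To illustrate the kind of encoding the abstract describes, the Python sketch below maps a word to a two-channel spike train, with one channel firing for dots and the other for dashes. The Morse table excerpt, timing parameters, and channel assignment are illustrative assumptions, not the dataset's actual specification.

# Illustrative sketch only: encode a word as a two-channel Morse spike train,
# loosely mirroring the NeuroMorse idea. Durations are assumptions.
MORSE = {
    "a": ".-", "e": ".", "h": "....", "t": "-", "n": "-.", "o": "---",
    # ... remaining letters omitted for brevity
}

DOT_MS, DASH_MS, GAP_MS, LETTER_GAP_MS = 1.0, 3.0, 1.0, 3.0

def word_to_spikes(word):
    """Return (times_ms, channels): channel 0 fires for dots, channel 1 for dashes."""
    times, channels, t = [], [], 0.0
    for letter in word.lower():
        for symbol in MORSE[letter]:
            times.append(t)
            channels.append(0 if symbol == "." else 1)
            t += (DOT_MS if symbol == "." else DASH_MS) + GAP_MS
        t += LETTER_GAP_MS
    return times, channels

print(word_to_spikes("the"))  # -> ([0.0, 7.0, 9.0, 11.0, 13.0, 18.0], [1, 0, 0, 0, 0, 0])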
Stabilizing Machine Learning for Reproducible and Explainable Results: A Novel Validation Approach to Subject-Specific Insights
Vos, Gideon, van Eijk, Liza, Sarnyai, Zoltan, Azghadi, Mostafa Rahimi
Machine Learning is transforming medical research by improving diagnostic accuracy and personalizing treatments. General ML models trained on large datasets identify broad patterns across populations, but their effectiveness is often limited by the diversity of human biology. This has led to interest in subject-specific models that use individual data for more precise predictions. However, these models are costly and challenging to develop. To address this, we propose a novel validation approach that uses a general ML model to ensure reproducible performance and robust feature importance analysis at both group and subject-specific levels. We tested a single Random Forest (RF) model on nine datasets varying in domain, sample size, and demographics. Different validation techniques were applied to evaluate accuracy and feature importance consistency. To introduce variability, we performed up to 400 trials per subject, randomly seeding the ML algorithm for each trial. This generated 400 feature sets per subject, from which we identified top subject-specific features. A group-specific feature importance set was then derived from all subject-specific results. We compared our approach to conventional validation methods in terms of performance and feature importance consistency. Our repeated trials approach, with random seed variation, consistently identified key features at the subject level and improved group-level feature importance analysis using a single general model. Subject-specific models address biological variability but are resource-intensive. Our novel validation technique provides consistent feature importance and improved accuracy within a general ML model, offering a practical and explainable alternative for clinical research.
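To make the repeated-trials idea concrete, the sketch below retrains a single Random Forest model under different random seeds and aggregates per-subject feature importances into a group-level summary. The trial count, averaging rule, and top-k selection are illustrative assumptions, not the paper's exact procedure.

# Hedged sketch of repeated trials with random seed variation for feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def subject_feature_importance(X_subj, y_subj, n_trials=50, top_k=5):
    importances = []
    for seed in range(n_trials):                       # vary only the random seed
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        clf.fit(X_subj, y_subj)
        importances.append(clf.feature_importances_)
    mean_imp = np.mean(importances, axis=0)            # average importances over trials
    return np.argsort(mean_imp)[::-1][:top_k]          # indices of top subject-specific features

def group_feature_counts(per_subject_top, n_features):
    """Count how often each feature appears in a subject's top-k set."""
    counts = np.zeros(n_features, dtype=int)
    for top in per_subject_top:
        counts[list(top)] += 1
    return counts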
Machine Learning for Asymptomatic Ratoon Stunting Disease Detection With Freely Available Satellite Based Multispectral Imaging
Waters, Ethan Kane, Chen, Carla Chia-ming, Azghadi, Mostafa Rahimi
Disease detection in sugarcane, particularly the identification of asymptomatic infectious diseases such as Ratoon Stunting Disease (RSD), is critical for effective crop management. This study employed various machine learning techniques to detect the presence of RSD in different sugarcane varieties, using vegetation indices derived from freely available satellite-based spectral data. Our results show that the Support Vector Machine with a Radial Basis Function Kernel (SVM-RBF) was the most effective algorithm, achieving classification accuracy between 85.64% and 96.55%, depending on the variety. Gradient Boosting and Random Forest also demonstrated high performance, achieving accuracy between 83.33% and 96.55%, while Logistic Regression and Quadratic Discriminant Analysis showed variable results across different varieties. The inclusion of sugarcane variety and vegetation indices was important in the detection of RSD, in agreement with findings in the current literature. Our study highlights the potential of satellite-based remote sensing as a cost-effective and efficient alternative to traditional manual laboratory testing for large-scale sugarcane disease detection.
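As a rough illustration of the kind of pipeline the study describes, the sketch below fits an RBF-kernel SVM on vegetation-index features together with a one-hot-encoded sugarcane variety column. The index names, column layout, and hyperparameters are assumptions, not the study's configuration.

# Hedged sketch: RBF-kernel SVM over vegetation indices plus variety as a categorical feature.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVC

numeric_features = ["NDVI", "GNDVI", "EVI"]      # hypothetical vegetation indices
categorical_features = ["variety"]               # sugarcane variety

preprocess = ColumnTransformer([
    ("indices", StandardScaler(), numeric_features),
    ("variety", OneHotEncoder(handle_unknown="ignore"), categorical_features),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])
# Hypothetical usage: model.fit(train_df[numeric_features + categorical_features], train_df["rsd_present"])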
The Effect of Acute Stress on the Interpretability and Generalization of Schizophrenia Predictive Machine Learning Models
Vos, Gideon, Ebrahimpour, Maryam, van Eijk, Liza, Sarnyai, Zoltan, Azghadi, Mostafa Rahimi
Introduction. Schizophrenia is a severe mental disorder, and early diagnosis is key to improving outcomes. Its complexity makes predicting onset and progression challenging. EEG has emerged as a valuable tool for studying schizophrenia, with machine learning increasingly applied for diagnosis. This paper assesses the accuracy of ML models for predicting schizophrenia and examines the impact of stress during EEG recording on model performance. We integrate acute stress prediction into the analysis, showing that overlapping conditions like stress during recording can negatively affect model accuracy. Methods. Four XGBoost models were built: one for stress prediction, two to classify schizophrenia (at rest and during a task), and a model to predict schizophrenia for both conditions. XAI techniques were applied to analyze results. Experiments tested the generalization of schizophrenia models using their datasets' healthy controls and independent health-screened controls. The stress model identified high-stress subjects, who were excluded from further analysis. A novel method was used to adjust EEG frequency band power to remove stress artifacts, improving predictive model performance. Results. Our results show that acute stress levels vary across EEG sessions, affecting model performance and accuracy. Generalization improved once these varying stress levels were considered and compensated for during model training. Our findings highlight the importance of thorough health screening and management of the patient's condition during EEG recording. Stress induced during or by the EEG recording can adversely affect model generalization. This may require further preprocessing of data by treating stress as an additional physiological artifact. Our proposed approach to compensating for stress artifacts in EEG data used for training models showed a significant improvement in predictive performance.
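The sketch below illustrates only the experimental setup described above, not the paper's band-power adjustment method: a stress model scores each recording, high-stress sessions are excluded, and an XGBoost classifier is then fitted on EEG band-power features. The exclusion threshold and model settings are assumptions.

# Hedged sketch of the stress-screening step followed by a schizophrenia classifier.
from xgboost import XGBClassifier

def filter_high_stress(X, y, stress_model, threshold=0.7):
    """Drop recordings the stress model scores above an assumed probability threshold."""
    stress_prob = stress_model.predict_proba(X)[:, 1]   # per-recording stress score
    keep = stress_prob < threshold                       # exclude high-stress sessions
    return X[keep], y[keep]

def fit_schizophrenia_model(X_band_power, labels):
    """Fit an XGBoost classifier on EEG frequency band-power features."""
    clf = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
    clf.fit(X_band_power, labels)
    return clf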
Evolution and challenges of computer vision and deep learning technologies for analysing mixed construction and demolition waste
Langley, Adrian, Lonergan, Matthew, Huang, Tao, Azghadi, Mostafa Rahimi
Improving the automatic and timely recognition of construction and demolition waste (C&DW) composition is crucial for enhancing business returns, economic outcomes, and sustainability. Technologies like computer vision, artificial intelligence (AI), robotics, and the Internet of Things (IoT) are increasingly integrated into waste processing to achieve these goals. While deep learning (DL) models show promise in recognising homogeneous C&DW piles, few studies assess their performance with mixed, highly contaminated material in commercial settings. Drawing on extensive experience at a C&DW materials recovery facility (MRF) in Sydney, Australia, we explore the challenges and opportunities in developing an advanced automated mixed C&DW management system. We begin with an overview of the evolution of waste management in the construction industry, highlighting its environmental, economic, and societal impacts. We review various C&DW analysis techniques, concluding that DL-based visual methods are the optimal solution. Additionally, we examine the progression of sensor and camera technologies for C&DW analysis as well as the evolution of DL algorithms focused on object detection and material segmentation. We also discuss C&DW datasets, their curation, and innovative methods for their creation. Finally, we share insights on C&DW visual analysis, addressing technical and commercial challenges, research trends, and future directions for mixed C&DW analysis. This paper aims to improve the efficiency of C&DW management by providing valuable insights for ongoing and future research and development efforts in this critical sector.
Sugarcane Health Monitoring With Satellite Spectroscopy and Machine Learning: A Review
Waters, Ethan Kane, Chen, Carla Chia-Ming, Azghadi, Mostafa Rahimi
Research into large-scale crop monitoring has flourished due to increased accessibility to satellite imagery. This review delves into previously unexplored and under-explored areas in sugarcane health monitoring and disease/pest detection using satellite-based spectroscopy and Machine Learning (ML). It discusses key considerations in system development, including relevant satellites, vegetation indices, ML methods, factors influencing sugarcane reflectance, optimal growth conditions, common diseases, and traditional detection methods. Many studies highlight how factors like crop age, soil type, viewing angle, water content, recent weather patterns, and sugarcane variety can impact spectral reflectance, affecting the accuracy of health assessments via spectroscopy. However, these variables have not been fully considered in the literature. In addition, the current literature lacks comprehensive comparisons between ML techniques and vegetation indices. We address these gaps in this review. We discuss that, while current findings suggest the potential for an ML-driven satellite spectroscopy system for monitoring sugarcane health, further research is essential. This paper offers a comprehensive analysis of previous research to aid in unlocking this potential and advancing the development of an effective sugarcane health monitoring system using satellite technology.
Precise Robotic Weed Spot-Spraying for Reduced Herbicide Usage and Improved Environmental Outcomes -- A Real-World Case Study
Azghadi, Mostafa Rahimi, Olsen, Alex, Wood, Jake, Saleh, Alzayat, Calvert, Brendan, Granshaw, Terry, Fillols, Emilie, Philippa, Bronson
Precise robotic weed control plays an essential role in precision agriculture. It can help significantly reduce the environmental impact of herbicides while reducing weed management costs for farmers. In this paper, we demonstrate that a custom-designed robotic spot spraying tool based on computer vision and deep learning can significantly reduce herbicide usage on sugarcane farms. We present results from field trials that compare robotic spot spraying against industry-standard broadcast spraying, by measuring the weed control efficacy, the reduction in herbicide usage, and the water quality improvements in irrigation runoff. The average results across 25 hectares of field trials show that spot spraying on sugarcane farms is 97% as effective as broadcast spraying and reduces herbicide usage by 35%, in proportion to the weed density. For specific trial strips with lower weed pressure, spot spraying reduced herbicide usage by up to 65%. Water quality measurements of irrigation-induced runoff, three to six days after spraying, showed reductions in the mean concentration and mean load of herbicides of 39% and 54%, respectively, compared to broadcast spraying. These promising results reveal the capability of spot spraying technology to reduce herbicide usage on sugarcane farms without compromising weed control, while potentially providing sustained water quality benefits.
Ensemble Machine Learning Model Trained on a New Synthesized Dataset Generalizes Well for Stress Prediction Using Wearable Devices
Vos, Gideon, Trinh, Kelly, Sarnyai, Zoltan, Azghadi, Mostafa Rahimi
Introduction. We investigate the generalization ability of models built on datasets containing a small number of subjects, recorded in single study protocols. Next, we propose and evaluate methods combining these datasets into a single, large dataset. Finally, we propose and evaluate the use of ensemble techniques by combining gradient boosting with an artificial neural network to measure predictive power on new, unseen data. Methods. Sensor biomarker data from six public datasets were utilized in this study. To test model generalization, we developed a gradient boosting model trained on one dataset (SWELL), and tested its predictive power on two datasets previously used in other studies (WESAD, NEURO). Next, we merged four small datasets (SWELL, NEURO, WESAD, UBFC-Phys) to provide a combined total of 99 subjects. In addition, we utilized random sampling combined with another dataset (EXAM) to build a larger training dataset consisting of 200 synthesized subjects. Finally, we developed an ensemble model that combines our gradient boosting model with an artificial neural network, and tested it on two additional, unseen publicly available stress datasets (WESAD and Toadstool). Results. Our method delivers a robust stress measurement system capable of achieving 85% predictive accuracy on new, unseen validation data, a 25% performance improvement over single models trained on small datasets. Conclusion. Models trained on small, single study protocol datasets do not generalize well for use on new, unseen data and lack statistical power. Machine learning models trained on a dataset containing a larger number of varied study subjects capture physiological variance better, resulting in more robust stress detection.
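As a minimal sketch of the ensemble idea, the code below averages the predicted stress probabilities of a gradient boosting model and a small neural network. The architectures and the simple averaging rule are illustrative assumptions rather than the paper's exact configuration.

# Hedged sketch: combine gradient boosting with an ANN by averaging predicted probabilities.
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

def fit_ensemble(X_train, y_train):
    gbm = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_train, y_train)
    ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_train, y_train)
    return gbm, ann

def predict_stress(gbm, ann, X):
    # Average the two models' stress probabilities, then threshold at 0.5.
    p = (gbm.predict_proba(X)[:, 1] + ann.predict_proba(X)[:, 1]) / 2.0
    return (p >= 0.5).astype(int)   # 1 = stressed, 0 = not stressed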
Security and Privacy Problems in Voice Assistant Applications: A Survey
Li, Jingjin, Chen, Chao, Pan, Lei, Azghadi, Mostafa Rahimi, Ghodosi, Hossein, Zhang, Jun
Voice assistant applications have become ubiquitous nowadays. The two model types that provide the most important functions for real-life applications (e.g., Google Home, Amazon Alexa, Siri) are Automatic Speech Recognition (ASR) models and Speaker Identification (SI) models. According to recent studies, security and privacy threats have also emerged with the rapid development of the Internet of Things (IoT). The security issues researched include attack techniques against machine learning models and other hardware components widely used in voice assistant applications. The privacy issues include technical information stealing and policy-level privacy breaches. Voice assistant applications take a steadily growing market share every year, yet their privacy and security issues continue to cause substantial economic losses and endanger users' sensitive personal information. Thus, it is important to have a comprehensive survey that outlines the categorization of current research on the security and privacy problems of voice assistant applications. This paper summarizes and assesses five kinds of security attacks and three types of privacy threats reported in papers published at top-tier conferences in the cyber security and voice domains.
Generalizable machine learning for stress monitoring from wearable devices: A systematic literature review
Vos, Gideon, Trinh, Kelly, Sarnyai, Zoltan, Azghadi, Mostafa Rahimi
Introduction. The stress response has both subjective, psychological components and objectively measurable, biological components. Both can be expressed differently from person to person, complicating the development of a generic stress measurement model. This is further compounded by the lack of large, labeled datasets that can be utilized to build machine learning models for accurately detecting periods and levels of stress. The aim of this review is to provide an overview of the current state of stress detection and monitoring using wearable devices, and, where applicable, the machine learning techniques utilized. Methods. This study reviewed published works contributing and/or using datasets designed for detecting stress and their associated machine learning methods, with a systematic review and meta-analysis of those that utilized wearable sensor data as stress biomarkers. The electronic databases of Google Scholar, Crossref, DOAJ and PubMed were searched for relevant articles, and a total of 24 articles were identified and included in the final analysis. The reviewed works were synthesized into three categories: publicly available stress datasets, machine learning, and future research directions. Results. A wide variety of study-specific test and measurement protocols were noted in the literature. A number of public datasets were identified that are labeled for stress detection. In addition, we discuss that previous works show shortcomings in areas such as their labeling protocols, lack of statistical power, validity of stress biomarkers, and generalization ability. Conclusion. Generalization of existing machine learning models still requires further study, and research in this area will continue to provide improvements as newer and more substantial datasets become available for study.