 Jalali, Anahid


MobilityDL: A Review of Deep Learning From Trajectory Data

arXiv.org Artificial Intelligence

Trajectory data combines the complexities of time series, spatial data, and (sometimes irrational) movement behavior. As data availability and computing power have increased, so has the popularity of deep learning from trajectory data. This review paper provides the first comprehensive overview of deep learning approaches for trajectory data. We have identified eight specific mobility use cases, which we analyze with regard to the deep learning models and the training data used. Besides a comprehensive quantitative review of the literature since 2018, the main contribution of our work is the data-centric analysis of recent work in this field, placing it along the mobility data continuum, which ranges from detailed dense trajectories of individual movers (quasi-continuous tracking data), through sparse trajectories (such as check-in data), to aggregated trajectories (crowd information).


Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis

arXiv.org Artificial Intelligence

Post-hoc explainability methods aim to clarify predictions of black-box machine learning models. However, it is still largely unclear how well users comprehend the provided explanations and whether these increase the users' ability to predict the model behavior. We approach this question by conducting a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP. Moreover, we investigate the effect of counterfactual explanations and misclassifications on users' ability to understand and predict the model behavior. We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary. Furthermore, we find that counterfactual explanations and misclassifications can significantly increase users' understanding of how a machine learning model is making decisions. Based on our findings, we also derive design recommendations for future post-hoc explainability methods with increased comprehensibility and predictability.
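
As an illustration of the two tools studied, the following minimal Python sketch generates a SHAP attribution and a LIME explanation for a single prediction; the random-forest classifier and synthetic data are assumptions for demonstration, not the study's actual models or tasks.

```python
# Illustrative only: post-hoc explanations with SHAP and LIME for a
# generic tabular classifier (not the study's actual setup).
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: additive per-feature attributions for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # per-class attribution arrays

# LIME: local surrogate model fitted around the same instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["neg", "pos"],
    mode="classification")
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())  # (feature condition, weight) pairs
```

Both outputs are local, per-feature explanations of the kind such user studies present to participants.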


Towards eXplainable AI for Mobility Data Science

arXiv.org Artificial Intelligence

XAI, or Explainable AI, develops Artificial Intelligence (AI) systems that can explain their decisions and actions. XAI thus promotes transparency and aims to enable trust in AI technologies [18]. While traditional interpretable machine learning (ML) approaches (such as Gaussian Mixture Models [10], K-Nearest Neighbors [3], and decision trees [23]) have been widely used to model geospatial (and spatiotemporal) phenomena and corresponding data, the increasing size and complexity of spatiotemporal data have created a need for more powerful methods to model such data. Therefore, recent studies have focused on black-box models, often in the form of deep learning models [9, 11, 7, 8, 13, 2]. With this rise of Geospatial AI (GeoAI), there is a growing need for explainability, particularly for GeoAI applications where decisions can have significant social and environmental implications [5, 25, 4]. However, XAI research and development tends towards computer vision, natural language processing, and applications involving tabular data (such as healthcare and finance) [20], and few studies have deployed XAI approaches for GeoAI (GeoXAI) [11, 25].


Low-complexity deep learning frameworks for acoustic scene classification using teacher-student scheme and multiple spectrograms

arXiv.org Artificial Intelligence

In this technical report, a low-complexity deep learning system for acoustic scene classification (ASC) is presented. The proposed system comprises two main phases: (Phase I) training a teacher network, and (Phase II) training a student network using knowledge distilled from the teacher. In the first phase, the teacher, a large-footprint model, is trained. After training the teacher, the embeddings, i.e., the feature maps of the teacher's second-to-last layer, are extracted. In the second phase, the student network, a low-complexity model, is trained on the embeddings extracted from the teacher. Our experiments on the DCASE 2023 Task 1 Development dataset fulfill the low-complexity requirement and achieve a best classification accuracy of 57.4%, improving on the DCASE baseline by 14.5%.
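
To make the two-phase scheme concrete, here is a minimal PyTorch sketch of embedding-level distillation; the layer sizes, loss weighting, and spectrogram-shaped dummy inputs are illustrative assumptions, not the report's actual architectures.

```python
# Minimal sketch of the two-phase teacher-student scheme (assumed shapes/losses).
import torch
import torch.nn as nn

NUM_CLASSES, EMB_DIM = 10, 128

class Teacher(nn.Module):                       # large-footprint model (Phase I)
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, EMB_DIM), nn.ReLU())  # second-to-last layer output
        self.head = nn.Linear(EMB_DIM, NUM_CLASSES)

    def forward(self, x):
        emb = self.backbone(x)                  # embedding to be distilled
        return emb, self.head(emb)

class Student(nn.Module):                       # low-complexity model (Phase II)
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, EMB_DIM))
        self.head = nn.Linear(EMB_DIM, NUM_CLASSES)

    def forward(self, x):
        emb = self.backbone(x)
        return emb, self.head(emb)

def train_student_step(teacher, student, opt, x, y, alpha=0.5):
    """Phase II step: match frozen teacher embeddings while learning labels."""
    with torch.no_grad():
        t_emb, _ = teacher(x)                   # teacher already trained in Phase I
    s_emb, logits = student(x)
    loss = alpha * nn.functional.mse_loss(s_emb, t_emb) \
         + (1 - alpha) * nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy spectrogram batch: (batch, channels, mel bins, time frames)
x = torch.randn(8, 1, 128, 64)
y = torch.randint(0, NUM_CLASSES, (8,))
teacher, student = Teacher().eval(), Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
print(train_student_step(teacher, student, opt, x, y))
```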


Minimal-Configuration Anomaly Detection for IIoT Sensors

arXiv.org Artificial Intelligence

The increasing deployment of low-cost IoT sensor platforms in industry boosts the demand for anomaly detection solutions that fulfill two key requirements: minimal configuration effort and easy transferability across equipment. Recent advances in deep learning, especially long short-term memory (LSTM) networks and autoencoders, offer promising methods for detecting anomalies in sensor data recordings. We compared autoencoders with various architectures, such as deep neural networks (DNNs), LSTMs, and convolutional neural networks (CNNs), using a simple benchmark dataset, which we generated by operating a peristaltic pump under various operating conditions and inducing anomalies manually. Our preliminary results indicate that a single model can detect anomalies under various operating conditions on a four-dimensional dataset without any specific feature engineering for each operating condition. We consider this work a first step towards a generic anomaly detection method applicable to a wide range of industrial equipment.
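
The core mechanism, training an autoencoder on normal operation and flagging high reconstruction error, can be sketched in a few lines of PyTorch; the architecture, 3-sigma threshold, and synthetic four-dimensional data below are illustrative assumptions rather than the paper's setup.

```python
# Illustrative autoencoder anomaly detection on 4-D sensor data:
# train on normal operation, flag samples with high reconstruction error.
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(1000, 4)            # stand-in for normal sensor readings
anomalous = torch.randn(50, 4) * 3 + 5   # stand-in for induced anomalies

ae = nn.Sequential(                      # 4 -> 2 -> 4 bottleneck
    nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2),
    nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 4))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)

for _ in range(200):                     # train on normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward(); opt.step()

with torch.no_grad():
    err_normal = ((ae(normal) - normal) ** 2).mean(dim=1)
    threshold = err_normal.mean() + 3 * err_normal.std()  # simple 3-sigma rule
    err_anom = ((ae(anomalous) - anomalous) ** 2).mean(dim=1)
    print("fraction flagged:", (err_anom > threshold).float().mean().item())
```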


Predicting Time-to-Failure of Plasma Etching Equipment using Machine Learning

arXiv.org Machine Learning

Predicting unscheduled breakdowns of plasma etching equipment can reduce maintenance costs and production losses in the semiconductor industry. However, plasma etching is a complex procedure, and it is hard to capture all relevant equipment properties and behaviors in a single physical model. Machine learning offers an alternative for predicting upcoming machine failures based on relevant data points. In this paper, we describe three different machine learning tasks that can be used for that purpose: (i) predicting Time-To-Failure (TTF), (ii) predicting health state, and (iii) predicting TTF intervals of a piece of equipment. Our results show that trained machine learning models can outperform benchmarks resembling human judgments in all three tasks. This suggests that machine learning offers a viable alternative to currently deployed plasma etching equipment maintenance strategies and decision-making processes.
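
The three task formulations can be sketched with scikit-learn as follows; the features, models, and interval edges are assumptions for illustration, not the paper's actual data or pipeline.

```python
# Illustrative formulations of the three tasks on synthetic equipment features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))                        # stand-in sensor features
ttf_hours = np.abs(X @ rng.normal(size=12)) * 10 + 1  # stand-in time-to-failure

X_tr, X_te, t_tr, t_te = train_test_split(X, ttf_hours, random_state=0)

# (i) TTF regression: predict remaining hours directly.
reg = RandomForestRegressor(random_state=0).fit(X_tr, t_tr)

# (ii) Health state: binary label, e.g. "fails within 24 hours".
clf_state = RandomForestClassifier(random_state=0).fit(X_tr, t_tr < 24)

# (iii) TTF intervals: discretize TTF into maintenance-relevant bins.
bins = [0, 24, 72, np.inf]                            # assumed interval edges
clf_interval = RandomForestClassifier(random_state=0).fit(
    X_tr, np.digitize(t_tr, bins))

print(reg.predict(X_te[:1]),
      clf_state.predict(X_te[:1]),
      clf_interval.predict(X_te[:1]))
```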