Towards eXplainable AI for Mobility Data Science
Anahid Jalali, Anita Graser, Clemens Heistracher
Explainable AI (XAI) is concerned with developing Artificial Intelligence (AI) systems that can explain their decisions and actions, thereby promoting transparency and enabling trust in AI technologies [18]. While traditional interpretable machine learning (ML) approaches, such as Gaussian Mixture Models [10], K-Nearest Neighbors [3], and decision trees [23], have been widely used to model geospatial (and spatiotemporal) phenomena and the corresponding data, the increasing size and complexity of spatiotemporal data have created a need for more complex modeling methods. Recent studies have therefore focused on black-box models, often in the form of deep learning models [9, 11, 7, 8, 13, 2]. With this rise of Geospatial AI (GeoAI), there is a growing need for explainability, particularly for GeoAI applications whose decisions can have significant social and environmental implications [5, 25, 4]. However, XAI research and development tends to concentrate on computer vision, natural language processing, and applications involving tabular data (such as healthcare and finance) [20], and few studies have deployed XAI approaches for GeoAI (GeoXAI) [11, 25].
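As an illustrative aside (not taken from the paper), the contrast drawn above between intrinsically interpretable models and post-hoc explanations of black-box models can be sketched in a few lines of Python. The synthetic "trajectory-like" features (mean speed, turning-angle variance, stop ratio), the binary mobility label, and the choice of a gradient-boosted model as the black-box stand-in are all assumptions made purely for illustration; only standard scikit-learn functionality is used.

# Illustrative sketch only: an interpretable decision tree inspected directly,
# versus a black-box model explained post hoc with permutation importance.
# The features, labels, and model choices are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical spatiotemporal features for 500 trajectories.
X = rng.normal(size=(500, 3))
feature_names = ["mean_speed", "turn_angle_var", "stop_ratio"]
# Hypothetical label (e.g., walking vs. driving), driven mostly by mean speed.
y = (X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intrinsically interpretable model: the fitted tree can be read off directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Decision tree feature importances:",
      dict(zip(feature_names, tree.feature_importances_.round(3))))

# Black-box stand-in: explained after the fact with a model-agnostic method.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
print("Post-hoc permutation importances:",
      dict(zip(feature_names, result.importances_mean.round(3))))

The first model is transparent by construction, whereas the second requires an external, model-agnostic explanation step; GeoXAI work addresses how such explanations can be made meaningful for spatiotemporal data.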
arXiv.org Artificial Intelligence
Sep-7-2023