Bettini, Claudio
Leveraging Large Language Models for Explainable Activity Recognition in Smart Homes: A Critical Evaluation
Fiori, Michele, Civitarese, Gabriele, Choudhary, Priyankar, Bettini, Claudio
Explainable Artificial Intelligence (XAI) aims to uncover the inner reasoning of machine learning models. In IoT systems, XAI improves the transparency of models processing sensor data from multiple heterogeneous devices, ensuring end-users understand and trust their outputs. Among its many applications, XAI has also been applied to sensor-based recognition of Activities of Daily Living (ADLs) in smart homes. Existing approaches highlight which sensor events are most important for each predicted activity and use simple rules to convert these events into natural language explanations for non-expert users. However, these methods produce rigid explanations that lack the flexibility of natural language and do not scale. With the recent rise of Large Language Models (LLMs), it is worth exploring whether they can enhance explanation generation, given their demonstrated knowledge of human activities. This paper investigates potential approaches to combine XAI and LLMs for sensor-based ADL recognition. We evaluate whether LLMs can be used: (a) as explainable zero-shot ADL recognition models, avoiding costly labeled data collection, and (b) to automate the generation of explanations for existing data-driven XAI approaches when training data is available and the goal is higher recognition rates. Our critical evaluation provides insights into the benefits and challenges of using LLMs for explainable ADL recognition.
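As a rough illustration of setting (b) only, and not the paper's actual pipeline, the sketch below assumes the sensor events ranked as most important by a data-driven XAI method are already available, and builds a prompt asking an LLM to phrase them as an explanation for a non-expert resident. The OpenAI-compatible client and the model name are placeholder assumptions.

```python
# Illustrative sketch: turning XAI-ranked sensor events into a natural-language
# explanation request. Client and model name are assumptions, not the paper's setup.
from openai import OpenAI

def explain_prediction(activity: str, top_events: list[str]) -> str:
    prompt = (
        f"A smart-home system predicted the activity '{activity}'.\n"
        "The most influential sensor events were:\n"
        + "\n".join(f"- {e}" for e in top_events)
        + "\nExplain in one or two plain sentences, for a non-expert resident, "
          "why these events suggest this activity."
    )
    client = OpenAI()  # assumes an API key in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example:
# explain_prediction("preparing dinner",
#                    ["kitchen motion 18:32", "fridge door opened 18:33", "stove on 18:35"])
```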
GNN-XAR: A Graph Neural Network for Explainable Activity Recognition in Smart Homes
Fiori, Michele, Mor, Davide, Civitarese, Gabriele, Bettini, Claudio
Sensor-based Human Activity Recognition (HAR) in smart home environments is crucial for several applications, especially in the healthcare domain. Most existing approaches rely on deep learning models. While these approaches are effective, the rationale behind their outputs is opaque. Recently, eXplainable Artificial Intelligence (XAI) approaches have emerged to provide intuitive explanations for the outputs of HAR models. To the best of our knowledge, these approaches leverage classic deep models like CNNs or RNNs. Graph Neural Networks (GNNs) have recently proved effective for sensor-based HAR, but existing GNN approaches are not designed with explainability in mind. In this work, we propose the first explainable Graph Neural Network explicitly designed for smart home HAR. Our results on two public datasets show that this approach provides better explanations than state-of-the-art methods while also slightly improving the recognition rate.
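The sketch below is not GNN-XAR itself: it only shows a generic graph classifier over windows of sensor events, of the kind such an approach could build on. The graph construction and the explanation module are omitted, and the node features, hidden size, and layer choices are placeholder assumptions.

```python
# Generic GNN over sensor-event graphs (one graph per time window), using
# PyTorch Geometric. This is a baseline sketch, not the paper's architecture.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class SensorEventGNN(torch.nn.Module):
    def __init__(self, num_event_features: int, num_activities: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(num_event_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.classifier = torch.nn.Linear(hidden, num_activities)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)   # one embedding per event graph (window)
        return self.classifier(h)
```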
Large Language Models are Zero-Shot Recognizers for Activities of Daily Living
Civitarese, Gabriele, Fiori, Michele, Choudhary, Priyankar, Bettini, Claudio
The sensor-based recognition of Activities of Daily Living (ADLs) in smart home environments enables several applications in the areas of energy management, safety, well-being, and healthcare. ADLs recognition is typically based on deep learning methods requiring large datasets to be trained. Recently, several studies proved that Large Language Models (LLMs) effectively capture common-sense knowledge about human activities. However, the effectiveness of LLMs for ADLs recognition in smart home environments still deserves to be investigated. In this work, we propose ADL-LLM, a novel LLM-based ADLs recognition system. ADLLLM transforms raw sensor data into textual representations, that are processed by an LLM to perform zero-shot ADLs recognition. Moreover, in the scenario where a small labeled dataset is available, ADL-LLM can also be empowered with few-shot prompting. We evaluated ADL-LLM on two public datasets, showing its effectiveness in this domain.
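A minimal sketch of the general idea, not ADL-LLM's actual prompt or event encoding: a window of raw sensor events is serialized into text and wrapped into a zero-shot classification request that could be sent to any chat-style LLM. The event format, candidate activity labels, and wording are illustrative assumptions.

```python
# Sketch: serializing a window of sensor events into text for zero-shot ADL
# recognition. Event tuples and labels are assumptions, not the paper's format.
from datetime import datetime

CANDIDATE_ADLS = ["sleeping", "cooking", "eating", "watching TV", "showering"]

def events_to_text(events: list[tuple[datetime, str, str]]) -> str:
    # events: (timestamp, sensor name, value), e.g. (.., "kitchen PIR", "ON")
    return "\n".join(f"{ts:%H:%M:%S} - {sensor}: {value}" for ts, sensor, value in events)

def zero_shot_prompt(events) -> str:
    return (
        "The following sensor events were recorded in a smart home:\n"
        f"{events_to_text(events)}\n"
        f"Which of these activities is the resident most likely performing: "
        f"{', '.join(CANDIDATE_ADLS)}? Answer with one label only."
    )
```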
ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity Recognition Models
Arrotta, Luca, Bettini, Claudio, Civitarese, Gabriele, Fiori, Michele
Context-aware Human Activity Recognition (HAR) is a hot research area in mobile computing, and the most effective solutions in the literature are based on supervised deep learning models. However, the actual deployment of these systems is limited by the scarcity of labeled data that is required for training. Neuro-Symbolic AI (NeSy) provides an interesting research direction to mitigate this issue, by infusing common-sense knowledge about human activities and the contexts in which they can be performed into HAR deep learning classifiers. Existing NeSy methods for context-aware HAR rely on knowledge encoded in logic-based models (e.g., ontologies) whose design, implementation, and maintenance to capture new activities and contexts require significant human engineering efforts, technical knowledge, and domain expertise. Recent works show that pre-trained Large Language Models (LLMs) effectively encode common-sense knowledge about human activities. In this work, we propose ContextGPT: a novel prompt engineering approach to retrieve from LLMs common-sense knowledge about the relationship between human activities and the context in which they are performed. Unlike ontologies, ContextGPT requires limited human effort and expertise. An extensive evaluation carried out on two public datasets shows how a NeSy model obtained by infusing common-sense knowledge from ContextGPT is effective in data scarcity scenarios, leading to similar (and sometimes better) recognition rates than logic-based approaches with a fraction of the effort.
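The following sketch illustrates the general flavor of such knowledge retrieval, not ContextGPT's actual prompt engineering: the LLM is asked which candidate activities are plausible in the user's current context, so the answer can serve as a common-sense prior for a NeSy HAR model. The context fields and wording are assumptions.

```python
# Sketch: asking an LLM which activities are consistent with the current context.
# Fields and phrasing are illustrative assumptions, not the paper's prompts.
def context_prompt(context: dict, candidate_activities: list[str]) -> str:
    return (
        "Consider a person in the following context:\n"
        f"- time of day: {context['time_of_day']}\n"
        f"- location: {context['location']}\n"
        f"- nearby semantic places: {', '.join(context['semantic_places'])}\n"
        f"Among these activities: {', '.join(candidate_activities)}, "
        "list only the ones that a person could plausibly be performing in this context."
    )

# Example:
# context_prompt({"time_of_day": "morning", "location": "outdoors",
#                 "semantic_places": ["park", "bus stop"]},
#                ["running", "sleeping", "cycling", "cooking"])
```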
Combining Public Human Activity Recognition Datasets to Mitigate Labeled Data Scarcity
Presotto, Riccardo, Ek, Sannara, Civitarese, Gabriele, Portet, François, Lalanda, Philippe, Bettini, Claudio
The use of supervised learning for Human Activity Recognition (HAR) on mobile devices leads to strong classification performances. Such an approach, however, requires large amounts of labeled data, both for the initial training of the models and for their customization on specific clients (whose data often differ greatly from the training data). Such data are impractical to obtain due to the costs, intrusiveness, and time-consuming nature of annotation. Moreover, even with a significant amount of labeled data, models deployed on heterogeneous clients struggle to generalize to unseen data. Other domains, like Computer Vision or Natural Language Processing, have proposed the notion of pre-trained models, leveraging large corpora, to reduce the need for annotated data and better manage heterogeneity. This promising approach has not been implemented in the HAR domain so far because of the lack of public datasets of sufficient size. In this paper, we propose a novel strategy to combine publicly available datasets with the goal of learning a generalized HAR model that can be fine-tuned using a limited amount of labeled data on an unseen target domain. Our experimental evaluation, which includes experimenting with different state-of-the-art neural network architectures, shows that combining public datasets can significantly reduce the number of labeled samples required to achieve satisfactory performance on an unseen target domain.
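A minimal sketch of the overall recipe, with the dataset combination strategy, model, and hyperparameters left as placeholders: pre-train one HAR model on several public datasets mapped to a shared label space, then fine-tune it on a small labeled subset from the unseen target domain.

```python
# Sketch of pre-training on pooled public HAR datasets, then fine-tuning on a
# small target set. The harmonization to a shared label set is assumed done upstream.
import torch
from torch.utils.data import ConcatDataset, DataLoader

def pretrain(model, public_datasets, epochs=10, lr=1e-3):
    loader = DataLoader(ConcatDataset(public_datasets), batch_size=128, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:                      # x: sensor windows, y: shared labels
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    return model

def finetune(model, small_target_dataset, epochs=5, lr=1e-4):
    # same loop, fewer labeled target samples and a lower learning rate
    return pretrain(model, [small_target_dataset], epochs=epochs, lr=lr)
```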
Neuro-Symbolic Approaches for Context-Aware Human Activity Recognition
Arrotta, Luca, Civitarese, Gabriele, Bettini, Claudio
Deep Learning models are a standard solution for sensor-based Human Activity Recognition (HAR), but their deployment is often limited by labeled data scarcity and models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting research direction to mitigate these issues by infusing knowledge about context information into HAR deep learning classifiers. However, existing NeSy methods for context-aware HAR require computationally expensive symbolic reasoners during classification, making them less suitable for deployment on resource-constrained devices (e.g., mobile devices). Additionally, NeSy approaches for context-aware HAR have never been evaluated on in-the-wild datasets, and their generalization capabilities in real-world scenarios are questionable. In this work, we propose a novel approach based on a semantic loss function that infuses knowledge constraints in the HAR model during the training phase, avoiding symbolic reasoning during classification. Our results on scripted and in-the-wild datasets show the impact of different semantic loss functions in outperforming a purely data-driven model. We also compare our solution with existing NeSy methods and analyze each approach's strengths and weaknesses. Our semantic loss remains the only NeSy solution that can be deployed as a single DNN without the need for symbolic reasoning modules, reaching recognition rates close (and better in some cases) to existing approaches.
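A rough sketch of what a semantic loss term of this kind can look like, not the paper's exact formulation: besides cross-entropy on the labels, the training loss penalizes probability mass assigned to activities that domain knowledge marks as incompatible with the sensed context, so the constraint is enforced at training time and no symbolic reasoner is needed at inference.

```python
# Sketch of a knowledge-infusing training loss. The compatibility mask would come
# from the symbolic knowledge; the exact loss in the paper may differ.
import torch
import torch.nn.functional as F

def semantic_loss(logits, compatible_mask, lam=0.5):
    # logits: (batch, num_activities); compatible_mask: (batch, num_activities),
    # 1 if the activity is consistent with the sensed context, else 0.
    probs = F.softmax(logits, dim=-1)
    incompatible_mass = (probs * (1 - compatible_mask)).sum(dim=-1)
    return lam * incompatible_mass.mean()

def total_loss(logits, labels, compatible_mask):
    return F.cross_entropy(logits, labels) + semantic_loss(logits, compatible_mask)
```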
SelfAct: Personalized Activity Recognition based on Self-Supervised and Active Learning
Arrotta, Luca, Civitarese, Gabriele, Valente, Samuele, Bettini, Claudio
Supervised Deep Learning (DL) models are currently the leading approach for sensor-based Human Activity Recognition (HAR) on wearable and mobile devices. However, training them requires large amounts of labeled data whose collection is often time-consuming, expensive, and error-prone. At the same time, due to the intra- and inter-variability of activity execution, activity models should be personalized for each user. In this work, we propose SelfAct: a novel framework for HAR combining self-supervised and active learning to mitigate these problems. SelfAct leverages a large pool of unlabeled data collected from many users to pre-train through self-supervision a DL model, with the goal of learning a meaningful and efficient latent representation of sensor data. The resulting pre-trained model can be locally used by new users, which will fine-tune it thanks to a novel unsupervised active learning strategy. Our experiments on two publicly available HAR datasets demonstrate that SelfAct achieves results that are close to or even better than the ones of fully supervised approaches with a small number of active learning queries.
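SelfAct's actual fine-tuning relies on a novel unsupervised active-learning strategy; as a stand-in, the sketch below shows a generic entropy-based query selection on top of a pre-trained encoder, just to illustrate where active-learning queries enter the pipeline. Function names and the budget are assumptions.

```python
# Generic uncertainty-based query selection (a stand-in, not SelfAct's strategy):
# pick the unlabeled windows the user will be asked to label during fine-tuning.
import torch
import torch.nn.functional as F

def select_queries(model, unlabeled_windows, budget=10):
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_windows), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
    return entropy.topk(budget).indices  # indices of the most uncertain windows
```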
AAAI 2000 Workshop Reports
Lesperance, Yves, Wagner, Gerd, Birmingham, William, Bollacker, Kurt, Nareyek, Alexander, Walser, J. Paul, Aha, David, Finin, Tim, Grosof, Benjamin, Japkowicz, Nathalie, Holte, Robert, Getoor, Lise, Gomes, Carla P., Hoos, Holger H., Schultz, Alan C., Kubat, Miroslav, Mitchell, Tom, Denzinger, Joerg, Gil, Yolanda, Myers, Karen, Bettini, Claudio, Montanari, Angelo
The AAAI-2000 Workshop Program was held Sunday and Monday, 30-31 July 2000 at the Hyatt Regency Austin and the Austin Convention Center in Austin, Texas. The 15 workshops held were (1) Agent-Oriented Information Systems, (2) Artificial Intelligence and Music, (3) Artificial Intelligence and Web Search, (4) Constraints and AI Planning, (5) Integration of AI and OR: Techniques for Combinatorial Optimization, (6) Intelligent Lessons Learned Systems, (7) Knowledge-Based Electronic Markets, (8) Learning from Imbalanced Data Sets, (9) Learning Statistical Models from Relational Data, (10) Leveraging Probability and Uncertainty in Computation, (11) Mobile Robotic Competition and Exhibition, (12) New Research Problems for Machine Learning, (13) Parallel and Distributed Search for Reasoning, (14) Representational Issues for Real-World Planning Systems, and (15) Spatial and Temporal Granularity.