D'Alfonso, Simon
AWARE Narrator and the Utilization of Large Language Models to Extract Behavioral Insights from Smartphone Sensing Data
Zhang, Tianyi, Kojima, Miu, D'Alfonso, Simon
Given their various sensors and the opportunities to utilise them, smartphones, the Swiss army knives of digital technology, have proven to be valuable personal sensing devices, with applications in domains such as health, education and leisure. These sensors include the accelerometer, GPS/geolocation, Bluetooth, communication logs (phone and SMS), application usage and keyboard activity. Given their potential to track various health-related behaviours and user contexts, as well as the emergence of health apps, smartphone sensing has become a pivotal topic in digital health. This is particularly the case in digital mental health, where the concept of digital phenotyping has emerged in recent years. In short, digital phenotyping espouses the idea that the data created from our use of and interaction with digital technologies, such as smartphones, can be mined or analysed to infer behaviours and, ultimately, assess mental health [1, 2]. The focus of our work in this paper is on leveraging smartphone sensing as a tool in psychology and mental health. Once raw sensor data is collected, it is typically processed into informative features that can be used in statistical analyses and machine learning model construction. For instance, from raw geolocation data, features such as total distance travelled or time spent at the most visited location can be derived. In this paper, however, we propose a novel approach to analysing smartphone sensing data.
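As a minimal sketch of the kind of feature derivation mentioned above, the following computes one such feature, total distance travelled, from a raw GPS trace using the haversine great-circle formula. The coordinates and the exact feature set are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def total_distance_km(points):
    """Sum consecutive great-circle distances over a day's GPS trace."""
    return sum(
        haversine_km(lat1, lon1, lat2, lon2)
        for (lat1, lon1), (lat2, lon2) in zip(points, points[1:])
    )

# Hypothetical one-day trace: home -> campus -> home.
trace = [(-37.7983, 144.9610), (-37.7964, 144.9612), (-37.7983, 144.9610)]
daily_distance = total_distance_km(trace)
```

Other features mentioned in the abstract, such as time spent at the most visited location, would similarly be aggregations over clustered location points.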
Leveraging LLMs to Predict Affective States via Smartphone Sensor Features
Zhang, Tianyi, Teng, Songyan, Jia, Hong, D'Alfonso, Simon
As mental health issues among young adults present a pressing public health concern, daily digital mood monitoring for early detection has become an important prospect. An active research area, digital phenotyping, involves collecting and analysing data from personal digital devices such as smartphones (usage and sensors) and wearables to infer behaviours and mental health. Whilst such data is typically analysed using statistical and machine learning approaches, the emergence of large language models (LLMs) offers a new way to make sense of smartphone sensing data. Despite their effectiveness across various domains, LLMs remain relatively unexplored in digital mental health, particularly in integrating mobile sensor data. Our study aims to bridge this gap by employing LLMs to predict affect outcomes based on smartphone sensing data from university students. We demonstrate the efficacy of zero-shot and few-shot embedding LLMs in inferring general wellbeing. Our findings reveal that LLMs can make promising predictions of affect measures using solely smartphone sensing data. This research sheds light on the potential of LLMs for affective state prediction, emphasizing the intricate link between smartphone behavioural patterns and affective states. To our knowledge, this is the first work to leverage LLMs for affective state prediction and digital phenotyping tasks.
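To illustrate the zero-shot setup the abstract describes, the sketch below formats daily sensor features into a natural-language prompt that could be sent to an LLM. The feature names, rating scale, and wording are hypothetical placeholders, not the study's actual schema or prompts.

```python
def build_affect_prompt(features: dict) -> str:
    """Format one day's smartphone sensing features into a zero-shot prompt.

    Feature names and the 1-5 affect scale here are illustrative
    assumptions, not the published study's protocol.
    """
    feature_lines = [f"- {name}: {value}" for name, value in features.items()]
    return (
        "Below are one student's smartphone sensing features for a single day.\n"
        + "\n".join(feature_lines)
        + "\nOn a scale of 1 (very negative) to 5 (very positive), predict "
        "the student's affect for this day. Answer with a single integer."
    )

# Hypothetical daily feature summary derived from raw sensor data.
day_features = {
    "total_distance_km": 4.2,
    "time_at_home_hours": 16.5,
    "screen_unlocks": 58,
    "phone_calls": 2,
}
prompt = build_affect_prompt(day_features)
```

A few-shot variant would prepend labelled example days in the same format before the query day.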
Enabling On-Device LLMs Personalization with Smartphone Sensing
Zhang, Shiquan, Ma, Ying, Fang, Le, Jia, Hong, D'Alfonso, Simon, Kostakos, Vassilis
This demo presents a novel end-to-end framework that combines on-device large language models (LLMs) with smartphone sensing technologies to achieve context-aware and personalized services. The framework addresses critical limitations of current cloud-based LLM personalization solutions, such as privacy concerns, latency and cost, and limited personal sensor data. To achieve this, we propose deploying LLMs on smartphones with multimodal sensor data and customized prompt engineering, ensuring privacy and enhancing personalization performance through context-aware sensing. A case study involving a university student demonstrated the proposed framework's capability to provide tailored recommendations. In addition, we show that the proposed framework achieves the best trade-off between on-device and cloud LLMs across privacy, performance, latency, cost, and battery and energy consumption. Future work aims to integrate more diverse sensor data and conduct large-scale user studies to further refine the personalization. We envision the proposed framework could significantly improve user experiences in various domains such as healthcare, productivity, and entertainment by providing secure, context-aware, and efficient interactions directly on users' devices.