Collaborating Authors

 Bai, Nan


MentalGLM Series: Explainable Large Language Models for Mental Health Analysis on Chinese Social Media

arXiv.org Artificial Intelligence

As mental health challenges become increasingly prevalent, social media has emerged as a key platform for individuals to express their emotions. Deep learning has become a promising approach for analyzing mental health on social media. However, black-box models are often inflexible when switching between tasks, and their results typically lack explanations. With the rise of large language models (LLMs), their flexibility has introduced new approaches to the field. Moreover, owing to their generative nature, they can be prompted to explain their decision-making processes. However, their performance on complex psychological analysis still lags behind that of deep learning models. In this paper, we introduce the first multi-task Chinese Social Media Interpretable Mental Health Instructions (C-IMHI) dataset, consisting of 9K samples, which has been quality-controlled and manually validated. We also propose the MentalGLM series, the first open-source LLMs designed for explainable mental health analysis of Chinese social media, trained on a corpus of 50K instructions. The proposed models were evaluated on three downstream tasks, where they achieved performance better than or comparable to deep learning models, generalized LLMs, and task-fine-tuned LLMs. We validated a portion of the generated decision explanations with experts, with promising results. We also evaluated the proposed models on a clinical dataset, where they outperformed other LLMs, indicating their potential applicability in the clinical field. Our models thus show strong performance validated across tasks and evaluation perspectives, and the decision explanations they produce enhance usability and facilitate better understanding and practical application of the models. Both the constructed dataset and the models are publicly available via: https://github.com/zwzzzQAQ/MentalGLM.
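
For readers who want to try the released models, a minimal sketch of loading a checkpoint with the Hugging Face transformers library is given below. The repository id, prompt wording, and generation settings are assumptions for illustration only; the actual released weights and recommended usage are documented in the GitHub repository linked above.

# Minimal sketch (assumptions: the weights are published on the Hugging Face Hub
# under a GLM-style repo exposing a standard causal-LM interface; the repo id
# below is a placeholder, not verified -- see the GitHub repository).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zwzzzQAQ/MentalGLM"  # placeholder id, not verified
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Ask for a label plus a short explanation, mirroring the paper's explainable
# mental health analysis setting (prompt wording is illustrative only).
# Prompt (Chinese): "Does the following post show signs of depression? Explain your reasoning."
prompt = "请判断以下帖子是否表现出抑郁倾向，并解释你的判断依据：最近总是睡不着，觉得做什么都没有意义。"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))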


Augmented Computational Design: Methodical Application of Artificial Intelligence in Generative Design

arXiv.org Artificial Intelligence

The core of performance-driven computational design is to trace how sensitive certain performance indicators are to the differences between design alternatives. Therefore, any argument about the utility of AI for performance-based design must necessarily discuss the representation of such differences, as explicitly as possible. The existing data models and data representations in the field of Architecture, Engineering, and Construction (AEC), such as CAD and BIM, are heavily focused on geometrically representing building elements and facilitating the process of construction management. Unfortunately, the field of AEC does not currently have a structured discourse based on an explicit representation of decision variables and outcomes of interest. Specifically, the notion of design representation and the idea of data modelling for representing "what needs to be attained from buildings" are largely absent from the literature.
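
To make the idea concrete, a hypothetical sketch of what such an explicit representation could look like in code is given below; it simply pairs decision variables with outcomes of interest for one design alternative and is not a data model proposed in the paper.

# Hypothetical illustration: an explicit, structured representation of design
# decision variables and performance outcomes for a single design alternative.
from dataclasses import dataclass, field

@dataclass
class DesignVariable:
    name: str      # e.g. "window_to_wall_ratio" (example name, not from the paper)
    lower: float   # lower bound of the admissible range
    upper: float   # upper bound of the admissible range
    value: float   # value chosen in this design alternative

@dataclass
class PerformanceIndicator:
    name: str      # e.g. "annual_heating_demand_kwh_per_m2" (example name)
    value: float   # simulated or measured outcome for this alternative

@dataclass
class DesignAlternative:
    variables: list[DesignVariable] = field(default_factory=list)
    outcomes: list[PerformanceIndicator] = field(default_factory=list)

# Stored this way, the sensitivity of an indicator to a variable can be traced
# by comparing alternatives that differ only in that variable.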