Negotiating Comfort: Simulating Personality-Driven LLM Agents in Shared Residential Social Networks

Ann Nedime Nese Rende, Tolga Yilmaz, Özgür Ulusoy

arXiv.org Artificial Intelligence 

We use generative agents powered by large language models (LLMs) to simulate a social network in a shared residential building, driving the temperature decisions for a central heating system. Agents, divided into Family Members and Representatives, consider personal preferences, personality traits, social connections, and weather conditions. Daily simulations involve family-level consensus followed by building-wide decisions among representatives. We tested three personality trait distributions (positive, mixed, and negative) and found that positive traits correlate with higher happiness and stronger friendships. Temperature preferences, assertiveness, and selflessness have a significant impact on happiness and decisions. This work demonstrates how LLM-driven agents can help simulate nuanced human behavior where complex real-life human simulations are difficult to set up.

Introduction

Social network simulations are widely used to model interactions between people, often relying on agent-based modeling to represent the relationship between people and their environment. In these simulations, the actions of the agents are selected from a predefined set of rules specified by the modeler. While this rule-based approach allows for a clear definition of the decision process and provides control over the outcomes, it also introduces a limitation: the predefined rules may fail to capture various dimensions of human behavior, such as irrational decision-making, and restrict the knowledge of agents to what is encoded by the modeler.

Large language models (LLMs) are trained on a vast amount of data, mostly obtained from web pages (Wang et al., 2024). Learning from this human-generated data gives the models a degree of real-world knowledge and reduces the amount of external information that must be supplied for them to perform various tasks.
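To make the two-stage structure concrete, the sketch below models the daily decision flow with a simple assertiveness-weighted rule. Note that this is only an illustrative analogue: in the actual system the decisions are produced by LLM prompts rather than a fixed formula, and the `Agent`, `family_consensus`, and `building_decision` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    preferred_temp: float  # preferred temperature in degrees Celsius
    assertiveness: float   # trait weight in (0, 1]; higher = more influence

def family_consensus(members: list[Agent]) -> float:
    """Stage 1: family-level consensus, here an assertiveness-weighted
    average of member preferences (a stand-in for LLM negotiation)."""
    total = sum(m.assertiveness for m in members)
    return sum(m.preferred_temp * m.assertiveness for m in members) / total

def building_decision(families: list[list[Agent]]) -> float:
    """Stage 2: each family's Representative brings its consensus to the
    building-wide discussion; here the proposals are simply averaged."""
    proposals = [family_consensus(members) for members in families]
    return sum(proposals) / len(proposals)

# Example: two families with equal assertiveness everywhere.
family_a = [Agent("Ada", 22.0, 1.0), Agent("Ben", 20.0, 1.0)]
family_b = [Agent("Cem", 24.0, 1.0)]
setpoint = building_decision([family_a, family_b])  # (21.0 + 24.0) / 2 = 22.5
```

An assertive member pulls the family consensus toward their own preference, which is one way the personality traits studied in the paper can shape the final setpoint.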
Moreover, with abilities such as reasoning and role-playing, LLMs have previously been shown to be capable of simulating human-like behavior. Generative agents, as introduced in Park et al. (2023), rely on LLMs to generate agent behaviors based on an agent-specific memory of the agent's identity, its interactions with other agents, and the environment.
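The core idea of memory-conditioned behavior can be sketched as assembling a prompt from an agent's identity and its most recent memories before querying the LLM. This is a minimal illustration, not Park et al.'s actual architecture (which also includes retrieval scoring and reflection); `build_prompt` and its parameters are hypothetical names.

```python
def build_prompt(identity: str, memories: list[str], situation: str) -> str:
    """Assemble an LLM prompt conditioned on the agent's identity and
    a window of recent memories (a simplified stand-in for retrieval)."""
    recent = "\n".join(f"- {m}" for m in memories[-5:])  # last 5 memories
    return (
        f"You are {identity}.\n"
        f"Relevant memories:\n{recent}\n"
        f"Current situation: {situation}\n"
        f"How do you act? Answer in character."
    )

prompt = build_prompt(
    identity="a cold-sensitive resident who values harmony",
    memories=["Yesterday the building agreed on 21C.",
              "My neighbor complained the hallway felt too warm."],
    situation="Today's forecast is -3C; the family is voting on a setpoint.",
)
# `prompt` would then be sent to an LLM to produce the agent's behavior.
```

Keeping memory agent-specific is what lets each agent's decisions diverge over time even under identical weather conditions.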