Prompting Large Language Models for Training-Free Non-Intrusive Load Monitoring
Junyu Xue, Xudong Wang, Xiaoling He, Shicheng Liu, Yi Wang, Guoming Tang
–arXiv.org Artificial Intelligence
Non-intrusive load monitoring (NILM) aims to disaggregate total electricity consumption into individual appliance usage, enabling more effective energy management. While deep learning has advanced NILM, it remains limited by its dependence on labeled data, restricted generalization, and lack of explainability. This paper introduces the first prompt-based NILM framework that leverages large language models (LLMs) with in-context learning. Through extensive case studies, we design and evaluate prompt strategies that integrate appliance features, contextual information, and representative time-series examples. Experiments on the REDD and UK-DALE datasets show that LLMs guided solely by prompts deliver only basic NILM capabilities, with performance lagging behind traditional deep-learning models in complex scenarios. However, the experiments also demonstrate strong generalization across different houses and even regions, achieved by simply adapting the injected appliance features. The framework also provides clear, human-readable explanations for the inferred appliance states. Our findings define the capability boundaries of prompt-only LLMs for NILM tasks; their strengths in generalization and explainability present a promising new direction for the field.
Aug-5-2025
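The in-context prompting idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of assembling appliance features, context, and labeled time-series examples into a single LLM prompt; all field names, example readings, and the output format are assumptions for illustration, not the paper's actual prompt design.

```python
def build_nilm_prompt(appliance_features, context, examples, query_window):
    """Assemble an in-context-learning prompt for appliance-state inference.

    appliance_features: {name: {"watts": int, "pattern": str}} (assumed schema)
    examples: list of (aggregate-readings list, label string) pairs
    query_window: aggregate readings to classify
    """
    lines = ["Task: infer which appliances are ON from aggregate power readings."]
    lines.append("Appliance features:")
    for name, feats in appliance_features.items():
        lines.append(f"- {name}: typical power {feats['watts']} W, {feats['pattern']}")
    lines.append(f"Context: {context}")
    lines.append("Examples:")
    for window, label in examples:
        lines.append(f"- readings {window} -> {label}")
    # The LLM is expected to complete the final line with the appliance state.
    lines.append(f"Now classify: readings {query_window} ->")
    return "\n".join(lines)


if __name__ == "__main__":
    prompt = build_nilm_prompt(
        appliance_features={
            "fridge": {"watts": 120, "pattern": "periodic compressor cycles"},
            "kettle": {"watts": 2000, "pattern": "short high-power bursts"},
        },
        context="house 1, weekday evening, 6 s sampling interval",
        examples=[
            ([130, 125, 2140, 2130], "fridge + kettle"),
            ([118, 122, 121, 119], "fridge only"),
        ],
        query_window=[125, 2150, 2145, 128],
    )
    print(prompt)
```

Generalizing to a new house or region, as the abstract describes, then amounts to swapping the injected `appliance_features` dictionary while leaving the rest of the prompt template unchanged.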