He, Hongjie
Instructor-Worker Large Language Model System for Policy Recommendation: a Case Study on Air Quality Analysis of the January 2025 Los Angeles Wildfires
Gao, Kyle, Lu, Dening, Li, Liangzhi, Chen, Nan, He, Hongjie, Xu, Linlin, Li, Jonathan
The Los Angeles wildfires of January 2025 caused more than 250 billion dollars in damage and burned for nearly a month before containment. Building on our previous work, the Digital Twin Building, we modify and leverage its multi-agent large language model framework and cloud-mapping integration to study air quality during the Los Angeles wildfires. Recent advances in large language models have enabled out-of-the-box automated large-scale data analysis. We use a multi-agent large language model system composed of an Instructor agent and Worker agents. Upon receiving a user's instructions, the Instructor agent retrieves data from the cloud platform and produces instruction prompts for the Worker agents. The Worker agents then analyze the data and return summaries. The summaries are fed back to the Instructor agent, which produces the final data analysis. We test this system's capability for data-driven policy recommendation by assessing the Instructor-Worker LLM system's health recommendations based on air quality during the Los Angeles wildfires.
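The Instructor-Worker loop described above can be sketched as a simple orchestration skeleton. Everything here (`call_llm`, the prompt wording, the chunking of retrieved data) is an illustrative assumption, not the authors' implementation; a real system would back `call_llm` with an actual LLM API and wire the data retrieval to the cloud platform.

```python
from typing import Callable, List

def run_pipeline(
    user_instruction: str,
    data_chunks: List[str],
    call_llm: Callable[[str], str],
) -> str:
    """Instructor-Worker pattern: the Instructor fans work out to
    Workers, then synthesizes their summaries into a final answer."""
    # Instructor: turn the user's request into a per-chunk Worker prompt.
    worker_prompt = call_llm(
        "Write one instruction for analysts summarizing air-quality "
        f"data, given this request: {user_instruction}"
    )
    # Workers: each analyzes one chunk of the retrieved data.
    summaries = [
        call_llm(f"{worker_prompt}\n\nData:\n{chunk}") for chunk in data_chunks
    ]
    # Instructor: fuse the Worker summaries into the final analysis.
    joined = "\n".join(summaries)
    return call_llm(
        "Combine these summaries into a final analysis and "
        f"recommendation:\n{joined}"
    )

# Usage with a stub LLM (echoes a tag of its prompt) to show the data flow:
if __name__ == "__main__":
    stub = lambda prompt: f"[LLM output for: {prompt[:40]}...]"
    print(run_pipeline("Assess health risk from PM2.5",
                       ["station A readings", "station B readings"], stub))
```

The stub only demonstrates the two-level fan-out/fan-in structure; the design keeps the Instructor as the single point that sees both the user request and all Worker outputs.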
CausalVE: Face Video Privacy Encryption via Causal Video Prediction
Huang, Yubo, Feng, Wenhao, Lai, Xin, Wang, Zixi, Xu, Jingzehua, Zhang, Shuai, He, Hongjie, Chen, Fan
Advanced facial recognition technologies and recommender systems, combined with inadequate privacy technologies and policies for facial interactions, raise concerns about bioprivacy violations. With the proliferation of video and live-streaming websites, public face video distribution and interaction pose greater privacy risks. Existing techniques typically address the risk of sensitive biometric information leakage through various privacy-enhancement methods, but they either corrupt the information the interaction data is meant to convey, or leave certain biometric features intact from which an attacker can infer sensitive biometric information. To address these shortcomings, in this paper we propose a neural network framework, CausalVE. We obtain cover images by adopting a diffusion model to achieve face swapping with face guidance, and we use the speech-sequence and spatiotemporal-sequence features of the secret video for dynamic video inference and prediction, yielding a cover video with the same number of frames as the secret video. In addition, we hide the secret video using reversible neural networks, so that the cover video can also carry the secret data. Extensive experiments show that CausalVE provides strong security for public video dissemination and outperforms state-of-the-art methods from a qualitative, quantitative, and visual point of view.

With the widespread adoption of smart devices and the Internet of Things (IoT), the security of biometric face privacy has become increasingly unavoidable. The explosion of public face video distribution for IoT, exemplified by YouTube, TikTok, and Instagram, makes it difficult to protect face privacy during video interaction and distribution.
In addition, the autonomy of public face video distribution and interaction on video websites means that a disguised face video must convey the same visual information as the original video while hiding sensitive personal privacy information. Current face privacy measures mainly focus on destroying or hiding facial attributes. In video sequences, face attributes are destroyed by replacing the region containing the person with blank information (Newton et al., 2005; Meden et al., 2018) or by blurring and pixelating the face attributes returned by a detector (Sarwar et al., 2018). These methods directly damage the biometric features in facial videos, destroying the usability of data interactions and often leaving no useful information for interaction and propagation.
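By contrast, the reversible neural networks that CausalVE relies on for hiding are exactly invertible, so the hidden signal can be recovered losslessly rather than destroyed. A minimal sketch of one additive coupling step illustrates the principle; the `mix` function is a toy stand-in for a learned subnetwork, and none of these names reflect the paper's actual architecture.

```python
import numpy as np

def mix(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned subnetwork. Any function works here,
    because coupling layers never need to invert it."""
    return np.tanh(x) * 0.5

def couple(cover: np.ndarray, secret: np.ndarray):
    """Forward pass of one additive coupling step: the stego branch
    carries the secret shifted by a function of the cover."""
    return cover, secret + mix(cover)

def uncouple(cover: np.ndarray, stego: np.ndarray) -> np.ndarray:
    """Inverse pass: subtracting the same mix(cover) recovers the
    secret exactly (up to floating point)."""
    return stego - mix(cover)

# One frame of a cover video and one frame of a secret video:
rng = np.random.default_rng(0)
cover = rng.standard_normal((4, 4))
secret = rng.standard_normal((4, 4))

c, stego = couple(cover, secret)
recovered = uncouple(c, stego)
assert np.allclose(recovered, secret)  # lossless recovery
```

The sketch only shows why reversibility preserves the hidden signal where blurring and pixelation cannot; CausalVE applies this idea at scale with learned invertible blocks over entire video frames.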