LAID: Lightweight AI-Generated Image Detection in Spatial and Spectral Domains

Chivaran, Nicholas, Ni, Jianbing

arXiv.org Artificial Intelligence 

Abstract--The recent proliferation of photorealistic AI-generated images (AIGI) has raised urgent concerns about their potential misuse, particularly on social media platforms. Current state-of-the-art AIGI detection methods typically rely on large, deep neural architectures, creating significant computational barriers to real-time, large-scale deployment on platforms like social media. To challenge this reliance on computationally intensive models, we introduce LAID, the first framework--to our knowledge--that benchmarks and evaluates the detection performance and efficiency of off-the-shelf lightweight neural networks. In this framework, we comprehensively train and evaluate selected models on a representative subset of the GenImage dataset across spatial, spectral, and fusion image domains. Our results demonstrate that lightweight models can achieve competitive accuracy, even under adversarial conditions, while incurring substantially lower memory and computation costs compared to current state-of-the-art methods. This study offers valuable insight into the trade-off between efficiency and performance in AIGI detection and lays a foundation for the development of practical, scalable, and trustworthy detection systems. The source code of LAID can be found at: https://github.com/nchivar/LAID.

The rapid advancement of deep generative models such as Diffusion Models (DMs), Generative Adversarial Networks (GANs), and Variational Autoencoders (VAEs) has enabled the generation of highly photorealistic synthetic imagery.
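To make the spatial-versus-spectral distinction concrete, the sketch below shows one common way a spectral-domain input can be derived from a spatial-domain image: a log-scaled 2-D FFT magnitude spectrum. This is a hedged illustration of the general technique, not the paper's exact preprocessing; the function name and parameters are our own.

```python
import numpy as np

def spectral_features(image: np.ndarray) -> np.ndarray:
    """Map a spatial-domain grayscale image to a spectral-domain
    representation: the log-scaled, center-shifted 2-D FFT magnitude."""
    freq = np.fft.fft2(image)        # 2-D discrete Fourier transform
    freq = np.fft.fftshift(freq)     # move the zero-frequency (DC) term to the center
    return np.log1p(np.abs(freq))    # log scaling compresses the dynamic range

# Example on a synthetic 64x64 grayscale image
img = np.random.default_rng(0).random((64, 64))
spec = spectral_features(img)
print(spec.shape)  # same spatial size: (64, 64)
```

A fusion-domain detector would then consume both the raw pixels and a spectrum like this, typically as concatenated input channels or via separate encoder branches.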