Detecting AI-Generated Images via Diffusion Snap-Back Reconstruction: A Forensic Approach
Mohd Ruhul Ameen, Akif Islam
arXiv.org Artificial Intelligence
The rapid rise of generative diffusion models has made distinguishing authentic visual content from synthetic imagery increasingly challenging. Traditional deepfake detection methods, which rely on frequency- or pixel-level artifacts, fail against modern text-to-image systems such as Stable Diffusion and DALL-E that produce photorealistic, artifact-free results. This paper introduces a diffusion-based forensic framework that leverages multi-strength image reconstruction dynamics, termed diffusion snap-back, to identify AI-generated images. By analyzing how reconstruction metrics (LPIPS, SSIM, and PSNR) evolve across varying noise strengths, we extract interpretable manifold-based features that differentiate real and synthetic images. Evaluated on a balanced dataset of 4,000 images, our approach achieves 0.993 AUROC under cross-validation and remains robust to common distortions such as compression and noise. Despite using limited data and a single diffusion backbone (Stable Diffusion v1.5), the proposed method demonstrates strong generalization and interpretability, offering a foundation for scalable, model-agnostic synthetic media forensics.
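The core procedure the abstract describes (reconstruct an image at several noise strengths, measure reconstruction quality at each strength, and use the resulting curve as a feature vector) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `reconstruct` callable stands in for a diffusion img2img backbone such as Stable Diffusion v1.5, and PSNR alone stands in for the paper's LPIPS/SSIM/PSNR trio to keep the sketch dependency-free.

```python
import numpy as np

def psnr(a, b, data_range=1.0):
    # Peak signal-to-noise ratio between two images with values in [0, 1].
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def snapback_features(image, reconstruct, strengths=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Sweep img2img noise strengths and record how well each reconstruction
    # matches the input; the resulting curve is the "snap-back" signature.
    # `reconstruct(image, strength)` is a hypothetical wrapper around a
    # diffusion backbone -- an assumption, not the authors' exact interface.
    return np.asarray([psnr(image, reconstruct(image, s)) for s in strengths])

def toy_reconstruct(image, strength, rng=np.random.default_rng(0)):
    # Toy stand-in backbone: drifts further from the input as strength grows,
    # mimicking how diffusion reconstructions degrade at higher noise levels.
    noise = rng.normal(0.0, strength * 0.1, image.shape)
    return np.clip(image + noise, 0.0, 1.0)

img = np.random.default_rng(1).random((64, 64, 3))
curve = snapback_features(img, toy_reconstruct)  # one feature per strength
```

In the paper's setting, such per-strength metric curves (for real versus synthetic images) feed a downstream classifier; the intuition is that synthetic images, lying closer to the diffusion model's manifold, "snap back" to themselves more faithfully across strengths than real photographs do.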
Nov-4-2025