MiVID: Multi-Strategic Self-Supervision for Video Frame Interpolation using Diffusion Model
Priyansh Srivastava, Romit Chatterjee, Abir Sen, Aradhana Behura, Ratnakar Dash
arXiv.org Artificial Intelligence
Abstract: Video Frame Interpolation (VFI) remains a cornerstone of video enhancement, enabling temporal upscaling for tasks such as slow-motion rendering, frame-rate conversion, and video restoration. While classical methods rely on optical flow and learning-based models assume access to dense ground truth, both struggle with occlusions, domain shifts, and ambiguous motion. This article introduces MiVID, a lightweight, self-supervised, diffusion-based framework for video interpolation. Our model eliminates the need for explicit motion estimation by combining a 3D U-Net backbone with transformer-style temporal attention, trained under a hybrid masking regime that simulates occlusions and motion uncertainty. The use of cosine-based progressive masking and adaptive loss scheduling allows our network to learn robust spatiotemporal representations without any high-frame-rate supervision. MiVID is trained entirely on CPU using the datasets and 9-frame video segments, making it a low-resource yet highly effective pipeline.

Priyansh Srivastava, School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India. E-mail: priyansh0305@gmail.com
Romit Chatterjee, School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India. E-mail: chatterjeeromit86@gmail.com
Abir Sen (Corresponding Author), School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India. E-mail: abir.senfcs@kiit.ac.in
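The abstract describes cosine-based progressive masking, i.e. a masking ratio that is ramped up over training following a cosine curve so that the model sees gradually harder occlusion patterns. The paper does not give the schedule's exact form or bounds, so the function below is only a minimal sketch of the general idea; the name `cosine_mask_ratio` and the ratio range (10% to 50%) are assumptions, not values from the paper.

```python
import math

def cosine_mask_ratio(step: int, total_steps: int,
                      r_min: float = 0.1, r_max: float = 0.5) -> float:
    """Hypothetical cosine-based progressive masking schedule.

    The fraction of masked frame regions ramps from r_min to r_max
    over training, following half a cosine period (slow start, slow
    finish). All parameter values are illustrative assumptions.
    """
    t = min(max(step / total_steps, 0.0), 1.0)  # clamp progress to [0, 1]
    return r_min + (r_max - r_min) * 0.5 * (1.0 - math.cos(math.pi * t))
```

Under this sketch the ratio starts at `r_min`, reaches the midpoint of the range halfway through training, and saturates at `r_max` at the end, which matches the usual motivation for cosine curricula: easing the network into heavily occluded inputs rather than presenting them from the first step.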
Nov-11-2025