Supplementary Materials: Rethinking Alignment in Video Super-Resolution Transformers
Neural Information Processing Systems
The proposed patch alignment method can also be applied to the recurrent VSR framework. Recurrent frameworks are widely used in VSR and have achieved state-of-the-art performance. By replacing the CNN backbone with a Transformer backbone, we can easily build a recurrent VSR Transformer. Alignment modules are not absent in the existing recurrent methods. In our experiments, the feature size is set to 100 and the number of attention heads is 4. The baseline is the original BasicVSR++ model, which uses flow-guided deformable convolution (FGDC) and a CNN backbone.
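To make the patch alignment idea concrete, the sketch below moves each patch of a neighboring frame's feature map as a whole, using the rounded mean optical flow inside that patch, rather than warping every pixel independently. This is a minimal illustrative NumPy sketch under our own simplifying assumptions (2-D single-channel features, integer patch shifts, identity fallback at borders); the function name `patch_align` and all parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def patch_align(feat, flow, patch=4):
    """Align a neighboring frame's features at patch granularity.

    Each `patch x patch` block is fetched as a whole from the location
    given by the rounded mean flow inside that block, so the content of
    every patch stays internally intact (hypothetical sketch, not the
    paper's exact implementation).

    feat: (H, W) feature map of the neighboring frame.
    flow: (H, W, 2) per-pixel displacements as (dx, dy).
    """
    H, W = feat.shape
    out = np.zeros_like(feat)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            # Round the mean flow of this block to an integer patch shift.
            fx = int(round(flow[y:y + patch, x:x + patch, 0].mean()))
            fy = int(round(flow[y:y + patch, x:x + patch, 1].mean()))
            sy, sx = y + fy, x + fx  # source patch location
            if 0 <= sy <= H - patch and 0 <= sx <= W - patch:
                out[y:y + patch, x:x + patch] = feat[sy:sy + patch, sx:sx + patch]
            else:
                # Out-of-bounds source: fall back to the unaligned patch.
                out[y:y + patch, x:x + patch] = feat[y:y + patch, x:x + patch]
    return out
```

Because whole patches are moved, the sub-patch structure that window attention operates on is preserved, which is the motivation for patch-level rather than pixel-level alignment.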