Supplementary Materials: Rethinking Alignment in Video Super-Resolution Transformers

Neural Information Processing Systems 

The proposed patch alignment method can also be applied to the recurrent VSR framework. Recurrent networks are widely used for VSR and have achieved state-of-the-art performance. By replacing the CNN backbone of such a network with a Transformer backbone, we can easily build a recurrent VSR Transformer. Alignment modules are likewise present in the existing recurrent methods. In our experiments, the feature size is set to 100 and the number of attention heads is 4. The baseline is the original BasicVSR++ model, which uses flow-guided deformable convolution (FGDC) and a CNN backbone.
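To make the idea concrete, below is a minimal NumPy sketch of patch alignment inside a recurrent loop. It is an illustrative assumption, not the paper's released code: `patch_align` translates each patch of the propagated feature by the rounded mean optical flow within that patch (preserving intra-patch structure for window attention), and `propagate` stands in for the recurrent pass, with `cell` a placeholder for the Transformer cell.

```python
import numpy as np

def patch_align(feat, flow, patch=4):
    """Shift each (patch x patch) block of `feat` by the rounded mean
    optical flow inside that block, instead of warping every pixel
    independently (hypothetical sketch of patch alignment)."""
    H, W = feat.shape[:2]
    out = np.zeros_like(feat)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            # mean flow over the patch, rounded to whole pixels
            dx = int(round(flow[y:y + patch, x:x + patch, 0].mean()))
            dy = int(round(flow[y:y + patch, x:x + patch, 1].mean()))
            src_y = np.clip(np.arange(y, y + patch) + dy, 0, H - 1)
            src_x = np.clip(np.arange(x, x + patch) + dx, 0, W - 1)
            out[y:y + patch, x:x + patch] = feat[np.ix_(src_y, src_x)]
    return out

def propagate(frames, flows, cell):
    """Recurrent skeleton: the hidden state from the previous frame is
    patch-aligned to the current frame before the cell consumes it.
    `cell` is a stand-in for the Transformer backbone."""
    hidden = np.zeros_like(frames[0])
    outputs = []
    for frame, flow in zip(frames, flows):
        hidden = cell(frame, patch_align(hidden, flow))
        outputs.append(hidden)
    return outputs
```

Because whole patches move rigidly, the aligned feature keeps the local spatial statistics that shifted-window attention relies on, which is the motivation for preferring patch alignment over per-pixel warping in the Transformer setting.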
