Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning