Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval
Abstract
Our objective in this work is video-text retrieval - in particular a joint embedding that enables efficient text-to-video retrieval. The challenges in this area include the design of the visual architecture and the nature of the training data, in that the available large scale video-text training datasets, such as HowTo100M, are noisy and hence competitive performance is achieved only at scale through large amounts of compute. We address both these challenges in this paper. We propose an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. Our model is an adaptation and extension of the recent ViT and Timesformer architectures, and consists of attention in both space and time. The model is flexible and can be trained on both image and video text datasets, either independently or in conjunction. It is trained with a curriculum learning schedule that begins by treating images as 'frozen' snapshots of video, and then gradually learns to attend to increasing temporal context when trained on video datasets. We also provide a new video-text pretraining dataset WebVid-2M, comprised of over two million videos with weak captions scraped from the internet. Despite training on datasets that are an order of magnitude smaller, we show that this approach yields state-of-the-art results on standard downstream video-retrieval benchmarks including MSR-VTT, MSVD, DiDeMo and LSMDC.
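Below is a minimal, illustrative sketch of the core idea described in the abstract: treating an image as a single-frame "frozen" video so that one space-time transformer block can process both image-text and video-text pairs, with temporal attention becoming effectively a no-op when only one frame is present. This is not the authors' implementation; the module and parameter names (e.g. `DividedSpaceTimeBlock`, `dim`, `heads`) are assumptions chosen for clarity.

```python
# Sketch only: a divided space-time attention block where images are
# passed in as 1-frame videos, and the curriculum later increases the
# number of frames. Names and hyperparameters are illustrative, not
# taken from the official repository.
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    """One transformer block with separate temporal and spatial attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, patches, dim)
        b, f, p, d = x.shape

        # Temporal attention: each spatial patch attends across frames.
        # With a single frame (an image), this reduces to attending to itself.
        xt = x.permute(0, 2, 1, 3).reshape(b * p, f, d)
        t_out, _ = self.temporal_attn(self.norm_t(xt), self.norm_t(xt), self.norm_t(xt))
        x = x + t_out.reshape(b, p, f, d).permute(0, 2, 1, 3)

        # Spatial attention: patches attend to each other within every frame.
        xs = x.reshape(b * f, p, d)
        s_out, _ = self.spatial_attn(self.norm_s(xs), self.norm_s(xs), self.norm_s(xs))
        return x + s_out.reshape(b, f, p, d)


block = DividedSpaceTimeBlock()
image_batch = torch.randn(2, 1, 49, 256)   # images treated as 1-frame videos
video_batch = torch.randn(2, 4, 49, 256)   # curriculum stage with 4 frames
print(block(image_batch).shape, block(video_batch).shape)
```

Because the same weights handle both inputs, a curriculum schedule can start from image(-as-frozen-video) batches and gradually feed clips with more frames, as the abstract describes.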
Community
From https://github.com/m-bain/webvid:
This paper presents an interesting dataset, WebVid, which contains 10 million video-text pairs scraped from stock footage sites. According to Papers with Code, at least 49 papers use this dataset, including Make-A-Video, the popular video generation model from Meta, and the recent ModelScope text-to-video synthesis model.
The dataset repo lists a series of license terms:
- non-commercial research license, for 12 months
- not participate or encourage the participation in any illegal, deceptive, misleading or unethical practice, including, but without limitation, disparagement of the content or any other practices which may be detrimental to the same or the University.
At the same time, the authors do not hold the copyright to the collected data itself, as stated later in the README:
We do not own the copyright to any of the collected data and its use is authorised via the Intellectual Property Office’s Exceptions to Copyright for Non-Commercial Research and Private Study.
The document above clearly states that usage of this data is allowed for non-commercial research. As I read it, training models on this data is fine, but only for research purposes, and the resulting models should therefore carry a non-commercial license. I think models such as https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis should update their license to reflect that, but I'm not a lawyer nor an expert in the topic (just sharing my personal thoughts).