corpusid (int64, 110 – 268M) | title (string, 0 – 8.56k chars) | abstract (string, 0 – 18.4k chars) | citations (sequence, 0 – 142 items) | full_paper (string, 0 – 635k chars)
---|---|---|---|---
252,715,594 | PHENAKI: VARIABLE LENGTH VIDEO GENERATION FROM OPEN DOMAIN TEXTUAL DESCRIPTIONS | We present Phenaki, a model capable of realistic video synthesis given a sequence of textual prompts. Generating videos from text is particularly challenging due to the computational cost, the limited quantity of high-quality text-video data, and the variable length of videos. To address these issues, we introduce a new model for learning video representations which compresses the video to a small representation of discrete tokens. This tokenizer uses causal attention in time, which allows it to work with variable-length videos. To generate video tokens from text we use a bidirectional masked transformer conditioned on pre-computed text tokens. The generated video tokens are subsequently de-tokenized to create the actual video. To address data issues, we demonstrate how joint training on a large corpus of image-text pairs as well as a smaller number of video-text examples can result in generalization beyond what is available in the video datasets. Compared to previous video generation methods, Phenaki can generate arbitrarily long videos conditioned on a sequence of prompts (i.e. time-variable text, or a story) in an open domain. To the best of our knowledge, this is the first time a paper studies generating videos from time-variable prompts. In addition, compared to the per-frame baselines, the proposed video encoder-decoder computes fewer tokens per video but results in better spatio-temporal consistency. ‡ Equal contribution. | [6628106, 174802916, 238582653] | PHENAKI: VARIABLE LENGTH VIDEO GENERATION FROM OPEN DOMAIN TEXTUAL DESCRIPTIONS
Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, Dumitru Erhan
Google Brain, University of Michigan, University College London
We present Phenaki, a model capable of realistic video synthesis given a sequence of textual prompts. Generating videos from text is particularly challenging due to the computational cost, the limited quantity of high-quality text-video data, and the variable length of videos. To address these issues, we introduce a new model for learning video representations which compresses the video to a small representation of discrete tokens. This tokenizer uses causal attention in time, which allows it to work with variable-length videos. To generate video tokens from text we use a bidirectional masked transformer conditioned on pre-computed text tokens. The generated video tokens are subsequently de-tokenized to create the actual video. To address data issues, we demonstrate how joint training on a large corpus of image-text pairs as well as a smaller number of video-text examples can result in generalization beyond what is available in the video datasets. Compared to previous video generation methods, Phenaki can generate arbitrarily long videos conditioned on a sequence of prompts (i.e. time-variable text, or a story) in an open domain. To the best of our knowledge, this is the first time a paper studies generating videos from time-variable prompts. In addition, compared to the per-frame baselines, the proposed video encoder-decoder computes fewer tokens per video but results in better spatio-temporal consistency. ‡ Equal contribution.
INTRODUCTION
It is now possible to generate realistic high resolution images given a description [34,35,32,38,59], but generating high quality videos from text remains challenging. In essence, videos are just a sequence of images, but this does not mean that generating a long coherent video is easy. In practice, it is a significantly harder task because there is much less high quality data available and the computational requirements are much more severe [9]. For image generation, there are datasets with billions of image-text pairs (such as LAION-5B [41] and JFT4B [60]), while the text-video datasets are substantially smaller, e.g. WebVid [4] with ∼10M videos, which is not enough given the higher complexity of open domain videos. As for computation, training current state-of-the-art image generation models already pushes the limits of available computational capabilities [59], leaving little to no room for generating videos, particularly videos of variable length.
To make matters worse, one can argue that a single short text prompt is not sufficient to provide a complete description of a video (except for short clips); instead, a generated video must be conditioned on a sequence of prompts, or a story, which narrates what happens over time. Ideally, a video generation model must be able to generate videos of arbitrary length, all the while having the capability of conditioning the generated frames at time t on prompts at time t that can vary over time. Such capability can clearly distinguish the video from a "moving image" and open up the way to real-world creative applications in art, design and content creation. To the best of our knowledge, story based conditional video generation has never been explored before and this is the first paper to take early steps towards that goal. A traditional deep learning approach of simply learning this task from data is not possible, since there is no story-based dataset to learn from. Instead, to achieve this we rely on a model that is designed specifically with this capability in mind.

Figure 1. Time variable text (i.e. story) conditional video generation. The entire figure is one continuous video generated auto-regressively. We start by generating the video conditioned on the first prompt and then, after a couple of frames, we change the prompt to the next one. Each row contains a selected number of frames (from left to right, in order) generated while the model was conditioned on that particular prompt. The model manages to preserve the temporal coherence of the video while adapting to the new prompt, usually taking the shortest path for the adaptation (notice the morphing of the teddy bear into the panda). Note that the generated video has complex visual features such as reflections, occlusions, interactions and scene transitions. The full video is available at phenaki.github.io.
In this paper, we introduce Phenaki, a text to video model trained on both text to video and text to image data that can:
-Generate temporally coherent and diverse videos conditioned on open domain prompts even when the prompt is a new composition of concepts (Fig. 3). The videos can be long (minutes) even though the model is trained on 1.4-second videos (at 8 fps).
-Generate videos conditioned on a story (i.e. a sequence of prompts), e.g. Fig. 1 and Fig. 5.
To enable these capabilities, we could not rely on current video encoders, because they either can only decode fixed size videos or they encode frames independently. Hence, we introduce C-ViViT , a novel encoder-decoder architecture that:
-Exploits temporal redundancy in videos to improve reconstruction quality over a per frame model while compressing the number of video tokens by 40% or more.
-Allows encoding and decoding of variable length videos given its causal structure.
THE PHENAKI MODEL
Inspired by the previous work in auto-regressive text to image [34,59,38] and text to video [54,53,18], Phenaki is designed with two main components (see Figure 2): an encoder-decoder model which compresses videos to discrete embeddings (i.e. tokens) and a transformer model to translate text embeddings to video tokens. To get the text embeddings, Phenaki uses a pre-trained language model, T5X [37]. We will discuss each one of these components in the following subsections.
ENCODER-DECODER VIDEO MODEL: C-VIVIT
One of the primary challenges for generating video from text is to get a compressed representation of videos. Previous work on text to video either uses per-frame image encoders [18,54,57] such as VQ-GAN [12] or fixed length video encoders [52] such as VideoVQVAE [49]. The former allows for generating videos of arbitrary length; however, in practice the videos have to be short because the encoder does not compress the videos in time and the tokens are highly redundant in consecutive frames. The latter is more efficient in the number of tokens but does not allow generating variable length videos. In Phenaki, our goal is to generate videos of variable length while keeping the number of video tokens to a minimum so they can be modeled with a transformer within current computational limitations. To do so, we introduce C-ViViT, a causal variation of ViViT [1] with additional architectural changes for video generation, which can compress the videos in the temporal and spatial dimensions while staying auto-regressive in time. This capability allows for generating videos of arbitrary length auto-regressively.
Encoder architecture: As illustrated in Figure 2, we start with a video sequence of t_x + 1 frames with a resolution of w_x × h_x and c_x channels: x ∈ R^((t_x+1) × h_x × w_x × c_x). This sequence is compressed into a token representation of size (t_z + 1) × w_z × h_z, where the first w_z × h_z tokens represent the first frame independently from the rest of the video, and the remaining tokens represent spatio-temporal video tokens that auto-regressively depend on previous frames. To do so, we extract non-overlapping image patches of size w_p × h_p × c_p from the first frame and video patches of size t_p × w_p × h_p × c_p from the rest of the video. We typically use all channels at once, such that the number of patches equals the number of video tokens: t_z = t_x / t_p, w_z = w_x / w_p and h_z = h_x / h_p. Each of these patches is flattened and linearly projected into a d_z dimensional space. We combine the spatial dimensions to obtain a tensor of shape (t_z + 1) × (w_z · h_z) × d_z in which the spatial and temporal dimensions are separated. Then multiple transformer layers are applied along the spatial dimensions with all-to-all attention. This is followed by multiple transformer layers over the temporal dimension with causal attention, such that each spatial token only observes spatial tokens from previous frames in an auto-regressive manner. The effect of this is that the first frame can be encoded completely independently, which opens up the possibility of embedding text-to-image training naturally into our video model. The second advantage is that we can condition the video generation process on a number of starting frames. The resulting patch embeddings z of shape t_z × w_z × h_z × d_z are then tokenized into learned codewords c_z by vector quantization. The codebook learning will be discussed later together with the losses.
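To make the shapes concrete, the following is a minimal sketch in JAX (not the authors' code) of how the C-ViViT token grid and the causal temporal attention mask could be computed. The defaults assume 11 input frames (one isolated first frame plus t_x = 10 causally encoded frames) at 128 × 128 resolution with 2 × 8 × 8 patches, as reported in the appendix; the helper names are ours.

```python
import jax.numpy as jnp

def token_grid(t_x=10, h_x=128, w_x=128, t_p=2, h_p=8, w_p=8):
    # The first frame is encoded on its own; the remaining t_x frames are grouped
    # into temporal chunks of t_p frames each.
    t_z = t_x // t_p   # 5 spatio-temporal steps (+1 below for the isolated first frame)
    h_z = h_x // h_p   # 16
    w_z = w_x // w_p   # 16
    return t_z + 1, h_z, w_z   # (6, 16, 16) -> 1536 tokens, matching the sequence length in Table 7

def causal_time_mask(num_steps):
    # A token at temporal step i may only attend to spatial tokens from steps <= i.
    return jnp.tril(jnp.ones((num_steps, num_steps), dtype=bool))

print(token_grid())          # (6, 16, 16)
print(causal_time_mask(6))   # boolean mask applied inside the temporal transformer layers
```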
Decoder architecture: The C-ViViT decoder is simply an upside-down version of the encoder. First, the tokens are transformed into embeddings, followed by the temporal transformer and then the spatial transformer. After the output of the spatial transformer, we apply a single linear projection without activation to map the tokens back to pixel space.
Quantization and Losses:
To learn a discrete latent space, we quantize our encoder outputs into the entries of a learned codebook via the vector quantization (VQ) objective in VQVAEs [45],
$$\mathcal{L}_{\mathrm{VQ}} = \lVert \mathrm{sg}(z) - e \rVert_2^2 + \beta \lVert z - \mathrm{sg}(e) \rVert_2^2, \tag{1}$$
where sg(x) ≡ x with (d/dx) sg(x) ≡ 0 is the stop-gradient operator, β is the commitment loss weight, and e is a codebook vector from codebook E. The index of the codebook vector closest to z is found by i = argmin_j ‖z − E_j‖_2^2. In addition to the VQ objective, we adopt the factorized and ℓ2-normalized codes from ViT-VQGAN [58] to improve codebook usage and reconstruction quality.
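A minimal JAX sketch of the quantization step is given below; it is our reading of Eq. (1), with jax.lax.stop_gradient playing the role of sg(·) and a straight-through estimator for the backward pass, not the released implementation.

```python
import jax
import jax.numpy as jnp

def vector_quantize(z, codebook, beta=0.25):
    # z: (num_tokens, d) encoder outputs; codebook: (codebook_size, d) entries E.
    d2 = jnp.sum((z[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    idx = jnp.argmin(d2, axis=-1)                 # i = argmin_j ||z - E_j||^2
    e = codebook[idx]                             # nearest codebook vectors
    vq_loss = jnp.mean((jax.lax.stop_gradient(z) - e) ** 2) \
        + beta * jnp.mean((z - jax.lax.stop_gradient(e)) ** 2)   # Eq. (1)
    z_q = z + jax.lax.stop_gradient(e - z)        # straight-through: gradients flow back to z
    return z_q, idx, vq_loss
```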
To train our model, we use a combination of the L_2 loss, an image perceptual loss L_IP [20,61], a video perceptual loss L_VP using the I3D network [6] as feature extractor, and an adversarial loss L_Adv with the StyleGAN architecture [21]. As training objective, we use the following
$$\mathcal{L} = \mathcal{L}_{\mathrm{VQ}} + 0.1\,\mathcal{L}_{\mathrm{Adv}} + 0.1\,\mathcal{L}_{\mathrm{IP}} + 1.0\,\mathcal{L}_{\mathrm{VP}} + 1.0\,\mathcal{L}_{2}. \tag{2}$$
Novelty over the ViViT architecture: While our proposed C-ViViT architecture is inspired by the factorized encoder in ViViT [1], we modify their architecture to enable self-supervised learning from unlabeled videos. We first remove the [CLS] tokens in the spatial and the temporal transformers. Next, we apply the temporal transformer to all spatial tokens computed by the spatial encoder, in contrast to the single run of the temporal transformer over the [CLS] tokens in ViViT. Most importantly, the ViViT encoder requires a fixed length video input due to the all-to-all attention in time. We therefore apply causal attention instead, such that our C-ViViT encoder becomes auto-regressive and allows a variable number of input frames, which is necessary to learn from image datasets and to auto-regressively extrapolate video or single frames into the future.
TEXT-TO-VIDEO GENERATION WITH BIDIRECTIONAL TRANSFORMERS
In this stage, the text-to-video task can be formulated as a sequence-to-sequence problem: predict video tokens given the paired text embeddings. Most recent methods [34,59,54,18] adopt a transformer model for such sequence-to-sequence tasks. In their models, they use an auto-regressive transformer which predicts the image or video tokens sequentially given the encoded text features. As a result, the sampling time scales linearly with the sequence length, even when caching is used. This becomes impractical for long video sequence generation.
Masked bidirectional transformer:
In this work, we aim to reduce the sampling time by using a small and fixed number of sampling steps regardless of the video sequence length. Inspired by previous work on image generation [8], we use a bidirectional transformer, since it can predict different video tokens simultaneously. For training step i, we first sample a mask ratio γ_i from 0 to 1 and randomly replace γ_i · N tokens with the special token [MASK], where N is the video sequence length. Then we learn the model parameters by minimizing the cross entropy loss on the masked tokens given the encoded text embeddings and the unmasked video tokens. During inference, we first label all of the video tokens as the special token [MASK]. Then, at each inference step, we predict all the masked (unknown) video tokens in parallel, conditioned on the text embeddings and the unmasked (predicted) video tokens. We keep a ratio β_i of the predicted tokens at sampling step i; the remaining tokens are re-masked and re-predicted in the next step.
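The training step described above could look roughly as follows in JAX; this is a simplified sketch under our own assumptions (a reserved MASK_ID one past the codebook, a generic apply_fn standing in for the bidirectional transformer), not the actual training code.

```python
import jax
import jax.numpy as jnp
import optax

MASK_ID = 8192  # hypothetical id for [MASK], one past the 8192 codebook entries

def mvtm_loss(apply_fn, params, video_tokens, text_emb, rng):
    n = video_tokens.shape[-1]
    rng_gamma, rng_mask = jax.random.split(rng)
    gamma = jax.random.uniform(rng_gamma)                  # mask ratio gamma_i for this step
    mask = jax.random.uniform(rng_mask, (n,)) < gamma      # positions to hide
    masked_inputs = jnp.where(mask, MASK_ID, video_tokens)
    logits = apply_fn(params, masked_inputs, text_emb)     # (n, vocab) predictions
    ce = optax.softmax_cross_entropy_with_integer_labels(logits, video_tokens)
    return jnp.sum(ce * mask) / jnp.maximum(jnp.sum(mask), 1)  # loss on masked tokens only
```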
As discussed in MaskGIT [8], the masking schedule γ_i and the sampling schedule β_i have a significant effect on sample quality, therefore we follow the same strategies. Compared to an auto-regressive transformer, the number of sampling steps is an order of magnitude smaller (typically we use values in the range of 12 to 48). Generally speaking, more sampling steps improve quality.
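A sketch of the iterative parallel decoding loop is shown below. The cosine keep schedule is the one popularized by MaskGIT and stands in for whatever schedule Phenaki actually uses; apply_fn and MASK_ID are the same hypothetical names as in the training sketch.

```python
import jax
import jax.numpy as jnp

MASK_ID = 8192  # same hypothetical [MASK] id as in the training sketch

def parallel_decode(apply_fn, params, text_emb, num_tokens, rng, steps=24):
    tokens = jnp.full((num_tokens,), MASK_ID)
    unknown = jnp.ones((num_tokens,), dtype=bool)
    for i in range(steps):
        rng, sub = jax.random.split(rng)
        logits = apply_fn(params, tokens, text_emb)
        sampled = jax.random.categorical(sub, logits, axis=-1)
        probs = jax.nn.softmax(logits, axis=-1)
        conf = jnp.take_along_axis(probs, sampled[:, None], axis=-1)[:, 0]
        conf = jnp.where(unknown, conf, jnp.inf)               # never re-mask already-kept tokens
        frac_masked = jnp.cos(0.5 * jnp.pi * (i + 1) / steps)  # fraction left masked for next step
        num_to_mask = jnp.floor(frac_masked * num_tokens).astype(jnp.int32)
        cutoff = jnp.sort(conf)[num_to_mask]                   # least-confident tokens stay masked
        keep = conf >= cutoff
        tokens = jnp.where(unknown, jnp.where(keep, sampled, MASK_ID), tokens)
        unknown = unknown & ~keep
    return tokens
```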
Losses and training strategies: Given a pre-trained C-ViViT, videos are encoded into codebook ids a of shape (t_z + 1) × w_z × h_z, which are flattened into a long vector using the raster ordering from [58]. We then model the text-conditional video token distribution using Masked Visual Token Modeling (MVTM) [8]:
$$\mathcal{L}_{\mathrm{mask}} = -\sum_{\substack{i \in [1, N] \\ m_i = 1}} \log p(a_i \mid a_{\bar{M}}, \mathbf{p}), \tag{3}$$
where a_M̄ represents the masked version of a, m_i is a binary variable indicating whether a_i is masked or not, N is the number of video tokens, and p is the text condition embedding. In addition to the MVTM objective, we train using classifier-free guidance by dropping the text condition 10% of the time during training [16,59]. Finally, we dynamically adjust the MVTM objective during training to allow the use of image and video datasets as a single large dataset. We achieve this by applying the masking ratio and objective only on the first w_z × h_z tokens if only a single frame is given, or over all video tokens if a full video is given. This mixed image and video dataset training strategy allows our models to learn concepts only present in image datasets and transfer them to concepts present in video datasets (e.g., the pencil-drawing-style video of the panda in Figure 3).
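For the classifier-free guidance dropout, one simple way to implement the 10% text drop (assuming a learned or zero "null" embedding of the same shape as the text embedding; this is our assumption, not a detail from the paper) is:

```python
import jax
import jax.numpy as jnp

def maybe_drop_text(rng, text_emb, null_emb, drop_prob=0.1):
    # With probability drop_prob, replace the text condition by the null embedding so the
    # model also learns an unconditional distribution usable for guidance at sampling time.
    drop = jax.random.bernoulli(rng, drop_prob)
    return jnp.where(drop, null_emb, text_emb)
```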
Inference and auto-regressive generation of long videos: At inference time, we sample video tokens by the same iterative process used in [8], with a classifier-free guidance scale λ to control the alignment between the generation and the text condition. Once the first video is generated, we can extrapolate additional frames auto-regressively by encoding the last K generated frames of the last video using C-ViViT, initializing MaskGIT with the tokens computed by our C-ViViT encoder, and proceeding to generate the remaining video tokens conditioned on a text input. During video extrapolation, the text condition can be the same or a different one, which enables our model to dynamically create visual transitions between the previous and current text conditions, effectively generating a visual story as described by the input text.
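At sampling time, the guidance scale λ can be applied with the standard classifier-free guidance combination of conditional and unconditional logits; this sketch assumes the same hypothetical apply_fn and null embedding as above and is not taken from the Phenaki code.

```python
def guided_logits(apply_fn, params, tokens, text_emb, null_emb, guidance_scale):
    cond = apply_fn(params, tokens, text_emb)      # conditioned on the prompt embedding
    uncond = apply_fn(params, tokens, null_emb)    # unconditional branch
    return uncond + guidance_scale * (cond - uncond)
```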
EXPERIMENTS
To evaluate Phenaki, we test it on the following tasks: 1) text conditional video generation, 2) text-image conditional video generation, 3) time variable text conditional video generation (i.e. story mode), 4) video quantization and 5) image conditional video generation, a.k.a. video prediction.
To the best of our knowledge, 3) time variable text conditional video generation has not been explored in prior work. Given the dynamic nature of videos, we highly encourage readers to visit phenaki.github.io to check the generated videos. The website also includes qualitative comparisons on a subset of the prompts from the CogVideo paper [18]. While the focus is on the text to video generation tasks, it is remarkable that Phenaki is still competitive on the more traditional video tasks despite not being developed explicitly for them. We implemented Phenaki in JAX using the FLAX library.
TEXT CONDITIONAL VIDEO GENERATION
Currently there is no established benchmark for evaluating text to video methods. This makes comparing Phenaki to recent methods such as NUWA [54], CogVideo [18], NUWA-Infinity [53] and video diffusion models [17] difficult.
Unless specified otherwise, we train a 1.8B parameter Phenaki model on a corpus of ∼15M text-video pairs at 8 FPS, mixed with ∼50M text-image pairs plus the ∼400M pairs of LAION-400M [41] (more details in Appendix B.3). The model used for the visualisations in this paper was trained for 1 million steps at a batch size of 512, which took less than 5 days. In this setup, 80% of the training data came from the video dataset and each image dataset contributed 10%.
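As an illustration of the 80/10/10 mixing described above, a per-batch data source could be drawn as follows; this is a sketch with hypothetical iterator arguments, not the actual input pipeline.

```python
import random

def next_batch(video_ds, image_ds_a, image_ds_b, p_video=0.8):
    # 80% of batches come from the video dataset, 10% from each image dataset,
    # mirroring the mixed image/video training described above.
    r = random.random()
    if r < p_video:
        return next(video_ds), "video"
    elif r < p_video + 0.1:
        return next(image_ds_a), "image"
    else:
        return next(image_ds_b), "image"
```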
Qualitative evaluation: Samples from this model can be seen in Figure 3 and additional samples are provided at phenaki.github.io. We observe that there is a high degree of control over both the actors and the background dynamics in the videos. The appearance of the actors and the video style can be adjusted by the text prompt as well (e.g. a regular video, a cartoon or a pencil drawing).
On phenaki.github.io we provide examples from prompts that were provided in the CogVideo [18] demo. Since there are substantial differences between these methods, it is hard to compare them on an equal footing. As an example, there are massive differences in scale: 9B parameters for CogVideo versus 1.8B for our model. Additionally, the training data is different. Finally, we do not know how representative the prompts in the CogVideo demo are of the general performance of CogVideo.
Quantitative comparison: The NUWA [54] paper provided a quantitative evaluation on Kinetics-400. Since the NUWA model is only 0.9B parameters, we also use a model of the same size. Our model was trained on 50% video and 50% image data in this experiment. The NUWA model is finetuned on Kinetics, but the Phenaki model is not: it is evaluated in a zero-shot setting. The results in Table 1 show that Phenaki achieves comparable generation quality, in a zero-shot setting, to previous text to video methods that were actually trained or finetuned on this dataset.
On the importance of joint text-to-image and text-to-video training: While there are some text-video datasets, text-image datasets dominate the internet in terms of quality and quantity [30]. Consequently, there is simply not enough video data available to cover all the concepts present in text-image datasets. For example, using only our video data, concepts such as pencil drawings or different painting styles cannot be learned. To be able to learn a model that can combine video dynamics with these additional concepts, we have to combine training on image and video data. In Table 2, we evaluate the performance of using different ratios of videos and images. We start with data splits of only video, and vary the ratio of image and video datasets up to using 50% image and 50% video data. In our results, we find that there is a trade-off in performance between models trained with only video (i.e., significantly better FVD) and models trained with more image data (i.e., better text-video and text-image alignment, and significantly better FID on image datasets). On phenaki.github.io we show samples from different models side by side, where this trade-off between control over the content and the quality of the dynamics can be seen. We believe that the trade-off between concepts and dynamics will improve as the quality and size of text-video datasets increase in the future.
TEXT-IMAGE CONDITIONAL VIDEO GENERATION
Given that Phenaki can be conditioned on both still images and text, an interesting setup is to animate existing images given a text prompt. For this experiment, we use the same model from Section 3.1, but conditioned on unseen pictures (captured with our phones from local subjects) and a related prompt. As can be seen in Figure 4, the model can generate coherent videos starting from the given images, while following the given prompts.
VISUAL STORY TELLING BY DYNAMIC TEXT INPUTS
A notable and useful feature of Phenaki is that it is auto-regressive in time. This allows for generating long videos while the prompt changes over time. Time variable prompts can be thought of as a story: a narration of the entire video where each prompt corresponds to a scene from the video. This allows for creating dynamically changing scenes. To the best of our knowledge, this paper is the first work to generate such videos. An example of this can be seen in Fig. 1 and on phenaki.github.io. The way it works is that we generate a video with the first prompt and then extend it in time by conditioning on a possibly new prompt and on the last N (typically 5) previously generated frames.
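Conceptually, the story-mode loop can be written as the following sketch; generate_clip and encode_last_frames are hypothetical callables standing in for MaskGIT sampling plus C-ViViT decoding and for the C-ViViT encoder, respectively.

```python
def generate_story(prompts, generate_clip, encode_last_frames, k_context=5):
    # generate_clip(prompt, context_tokens) -> list of frames for one clip
    # encode_last_frames(frames) -> C-ViViT tokens of those frames (the frozen "past")
    video = list(generate_clip(prompts[0], context_tokens=None))
    for prompt in prompts[1:]:
        context = encode_last_frames(video[-k_context:])   # re-encode the last K frames
        video.extend(generate_clip(prompt, context_tokens=context))
    return video
```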
VIDEO ENCODING
To evaluate the video encoding and reconstruction performance of C-ViViT, we use the Moments-in-Time (MiT) [29] dataset. MiT contains ∼802K training, ∼33K validation and ∼67K test videos at 25 FPS. The MiT dataset, in contrast to other publicly available video datasets, is a high quality, balanced dataset with high coverage and density of verbs depicting moments of a few seconds [29]. We compare C-ViViT against per-frame image based encoder-decoders that have been used as video quantizers for conditional video generation [57,54,18,52]: a ViT [58] and a convolutional VQ-GAN [12]. The experimental details can be found in Appendix B.1. As shown in Table 3, we evaluate the video reconstruction quality using FID [15] and FVD [44]. Both FID and FVD compare the distribution of generated videos (or images) to the ground truth distribution. FID ignores temporal coherency, while FVD measures how well the spatio-temporal dynamics of the videos are reconstructed. The results in Table 3 show that per-frame image based methods slightly outperform our video method (indicated by the marginally higher FID of C-ViViT); however, they do poorly at modeling the spatio-temporal dynamics in video (indicated by the significantly lower FVD of C-ViViT). This is expected, as C-ViViT has spatio-temporal connections between patches in each frame, allowing space and time to be modeled together. In addition, C-ViViT compresses the video into fewer tokens per video compared to the image based baselines. This is crucial, as the number of tokens drastically impacts the computational cost of the transformer in downstream tasks. Furthermore, C-ViViT tokens are auto-regressive in time, which enables variable length videos to be modeled with the same encoder; this is important for video extrapolation conditioned on previously generated frames.
IMAGE CONDITIONAL VIDEO GENERATION A.K.A VIDEO PREDICTION
To evaluate the learnt video representation of C-ViViT beyond reconstruction, we test it on the task of frame-conditioned video generation, also commonly known as video prediction [3]. In this experiment, we test Phenaki on the BAIR Robot Pushing benchmark [11], where the task is to generate 15 frames conditioned on a given single frame. For open domain videos, we test Phenaki on Kinetics-600 [7], where the task is to predict 11 frames given 5 frames. More details about these experiments can be found in Appendix B.2. Tables 4 and 5 show the results of these experiments. Note that Phenaki is not specifically designed for video prediction and therefore lacks components, such as skip connections in U-Nets, which are known to improve the performance of video prediction methods [10,46,3]. Nevertheless, our method is competitive on these benchmarks with SOTA video prediction methods. Overall, these experiments show that Phenaki is strong at modeling the dynamics of videos, which is required for generating coherent videos from text.

Table 4. Video prediction on Kinetics-600 [7]. While Phenaki is not designed for video prediction, it achieves comparable results with SOTA video prediction models.

| Method | FVD ↓ |
|---|---|
| Video Transformer [51] | 170.0 ± 5.00 |
| CogVideo [18] | 109.2 |
| DVD-GAN-FP [9] | 69.1 ± 0.78 |
| Video VQ-VAE [49] | 64.3 ± 2.04 |
| CCVS [28] | 55.0 ± 1.00 |
| TrIVD-GAN-FP [27] | 25.7 ± 0.66 |
| Transframer [31] | 25.4 |
| RaMViD [19] | 16.5 |
| Video Diffusion [17] | 16.2 ± 0.34 |
| Phenaki (Ours) | 36.4 ± 0.19 |

Table 5. Video prediction on BAIR [11].

| Method | FVD ↓ |
|---|---|
| DVD-GAN [9] | 109.8 |
| VideoGPT [55] | 103.3 |
| TrIVD-GAN [27] | 103.3 |
| Transframer [31] | 100.0 |
| HARP [57] | 99.3 |
| CCVS [28] | 99.0 |
| Video Transformer [51] | 94.0 |
| FitVid [3] | 93.6 |
| MCVD [47] | 89.5 |
| NUWA [54] | 86.9 |
| RaMViD [19] | 84.2 |
| Phenaki (Ours) | 97.0 |
RELATED WORKS
This paper is closely related to auto-regressive methods for text conditioned image and video generation. DALL-E [34] translates text tokens to discrete image embeddings learnt using a VQVAE [45]. Parti [59] has a similar architecture but can generate higher quality images by predicting tokens from a ViT-VQGAN [58] using a 21B-parameter transformer. Similar architectures have been used for generating videos as well. GODIVA [52] uses a transformer to map text tokens to video tokens from an image based VQVAE. Given the large number of tokens from multiple frames, GODIVA relies on a local-attention mechanism. Similarly, NUWA [54] and NUWA-Infinity [53] both employ auto-regressive architectures to generate videos and images from text. NUWA generates fixed size outputs, while NUWA-Infinity introduces a second layer of auto-regressive computation to support variable size videos. Likewise, CogVideo [18] argues that the main reason behind low quality video generation is the scarcity of good text-video data and tries to leverage pre-trained text-to-image models to generate high quality video.
While Phenaki sticks to the same architectural principles, it has major differences from previous work. Most notably, NUWA, NUWA-Infinity and CogVideo treat videos as a sequence of independent images, which can lead to poor modeling of dynamics and generate motion artifacts. To combat this, NUWA-Infinity uses the previous frame during decoding. In Phenaki, we go further and treat videos as a temporal sequence of images, which substantially decreases the number of video tokens given the redundancy in videos and results in a much lower training cost. The auto-regressive nature of Phenaki also allows us to effectively condition on previous frames and generate longer videos, as detailed in Section 2.
Diffusion models are another class of models which have recently been used for conditional and unconditional video generation; we refer to the video diffusion models of [17] as VDM. In VDM, the authors propose replacing the conventional U-Net architectures for 2D image modeling with a 3D space-time model to run the diffusion process directly on pixels. While this approach provides an effective formulation for modeling videos, it is limited to fixed size videos. To address this issue, VDM provides an auto-regressive extension, which allows the model to generate longer videos, but this is typically impractical due to the high sampling time of diffusion models.
Text conditional video generation is a relatively new field of research; nonetheless, image conditional video generation, commonly known as video prediction, and unconditional video generation have been studied more comprehensively. These works include deterministic methods using a combination of recurrent and convolutional networks [36,42,13,50], variational stochastic methods [2,10,46,3], and, more recently, methods based on learning a discrete representation [49,33,31], auto-regressive models [51,55,28,57], diffusion models [47,14,56,19], flow based models [24], and finally adversarial methods [48,39,43,9,40,27]. These works mostly consider limited domain (e.g. robotic videos) prediction/generation, or short fixed size clips. Section 3 provides comparisons with some of these models.
CONCLUSION
We introduced Phenaki, a model which is capable of generating variable length videos conditioned on a sequence of open domain text prompts. Phenaki uses C-ViViT as its video encoder. C-ViViT is a new model which provides temporal-spatial compression while being auto-regressive in time, and it is a crucial part of Phenaki that allows it to generate variable length videos. We demonstrate how joint training on images and videos can improve generation quality and diversity, given the existence of much larger image-text datasets with orders of magnitude more samples. The Phenaki model achieves good performance on video prediction, and it can be used to generate long videos conditioned on a text prompt. Additionally, it is able to condition on both text and a starting frame. Finally, Phenaki is not limited to generating a video depicting a single concept or caption; it is able to generate longer coherent video stories based on a sequence of text prompts. The more complex narratives it can visualize demonstrate how this can become a great creative tool for storytelling.
ETHICS STATEMENT
While we have not explored potential downstream applications of the generative models described in this work, we believe Phenaki can have a positive impact in a variety of creative settings. In general, many of the samples from the model will not perfectly correspond to the input caption or the user's intent; however, the end-user is likely to gain considerable time savings even if only one of the generated samples aligns with their intent. We thus foresee Phenaki being useful in eventually empowering users to accelerate their creativity, especially since the model can generate videos so quickly. Phenaki and similar models will be part of an ever-broadening toolset for artists and non-artists alike, providing new and exciting ways to express creativity.
The flip side of this acceleration and ease of use is the potential for harmful impact, as with much of the prior or concurrent work in generative modeling. An easy-to-use system like Phenaki can be repurposed for generating maliciously fake content and can make spreading such content much easier. While the quality of the videos generated by Phenaki is not yet indistinguishable from real videos, getting to that bar for a specific set of samples is within the realm of possibility, even today. This can be particularly harmful if Phenaki were to be used to generate videos of someone without their consent and knowledge.
Like DALLE-2 [35], Imagen [38], Parti [59] and others, Phenaki is trained on a collection of datasets that is known to encode a number of undesirable biases. LAION-400M [41] specifically has a variety of issues regarding violence, pornography, gore. While our primary image and video datasets have minimal traits like this, we did incorporate LAION-400M into our training and observed better results. In a currently training version of Phenaki, we use a set of datasets that minimizes such problems.
Taken together, these issues contribute to our decision not to release the underlying models, code, data or interactive demo at this time. Before we can do that, we want to focus our efforts on better understanding of data, prompt and output filtering. We would also like to more explicitly measure the biases encoded in the outputs of Phenaki, so that we can further mitigate them actively, either in the data, models or pre/post-processing steps.
ACKNOWLEDGMENTS
We would like to thank Niki Parmar for initial discussions. Special thanks to Gabriel Bender and Thang Luong for reviewing the paper and providing constructive feedback. We appreciate the efforts of Kevin Murphy and David Fleet for advising the project and providing feedback throughout. We are grateful to Evan Rapoport, Douglas Eck and Zoubin Ghahramani for supporting this work in a variety of ways. Tim Salimans and Chitwan Saharia helped us with brainstorming and coming up with shared benchmarks. Jason Baldridge was instrumental for bouncing ideas. Alex Rizkowsky was very helpful in keeping things organized, while Erica Moreira and Victor Gomes ensured smooth resourcing for the project. Sarah Laszlo and Kathy Meier-Hellstern have greatly helped us incorporate important responsible AI practices into this project, which we are immensely grateful for. Finally, Blake Hechtman and Anselm Levskaya were generous in helping us debug a number of JAX issues.
Figure 2. The architecture of Phenaki. Left: the C-ViViT encoder architecture. The embeddings of image and video patches from raw frames x are processed by a spatial and then a causal transformer (auto-regressive in time) to generate video tokens z. Center: MaskGIT is trained to reconstruct masked tokens z predicted by a frozen C-ViViT encoder, conditioned on T5X tokens of a given prompt p_0. Right: how Phenaki can generate arbitrarily long videos by freezing the past tokens and generating the future tokens. The prompt can change over time to enable time-variable prompt (i.e. story) conditional generation. The subscripts represent time (i.e. frame number).

Figure 3. Text conditional video generation. Each row shows selected frames from a video generated given the prompt. The model is trained on a mix of images and videos. The video dataset does not include any stylized videos such as pencil drawings; however, the image dataset does. The model can generalize from still images to videos. This figure also demonstrates the capability of the model in generating new, unseen compositions. Full videos are available at phenaki.github.io.

Figure 4. Animating images conditioned on a prompt. Each row demonstrates multiple frames of a generated video conditioned on a given first frame as well as a given text prompt. The first frames are new (captured by an author's phone) and not observed during training. The model animates the given image while following the prompt. Full videos are available at phenaki.github.io.

Figure 5. Another example of story conditional video generation. Full videos are available at phenaki.github.io.
Table 1. Text to video comparisons on Kinetics-400 [22].

| Method | FID Image ↓ | FID Video ↓ |
|---|---|---|
| T2V [25] | 82.13 | 14.65 |
| SC [5] | 33.51 | 7.34 |
| TFGAN [5] | 31.76 | 7.19 |
| NUWA | 28.46 | 7.05 |
| Phenaki [0-Shot] | 37.74 | 3.84 |

Table 2. Text to video and text to image results highlighting the importance of image datasets in video models. Text-to-image evaluation is done on ∼40K images of LAION-400M [41].

| Data Split (Vid% / Img%) | Text to Video CLIP ↑ | Text to Video FID ↓ | Text to Video FVD ↓ | Text to Image CLIP ↑ | Text to Image FID ↓ |
|---|---|---|---|---|---|
| 100% / 0% | 0.298 | 19.2 | 168.9 | 0.240 | 53.9 |
| 80% / 20% | 0.303 | 21.4 | 198.4 | 0.289 | 29.4 |
| 50% / 50% | 0.302 | 21.4 | 239.7 | 0.287 | 30.5 |
Table 3. Video reconstruction results on Moments-in-Time. The number of tokens is computed for 10 frames, with the exception of C-ViViT which is for 11, due to the isolated initial frame.

| Method | FID ↓ | FVD ↓ | Number of Tokens ↓ |
|---|---|---|---|
| Conv VQ-GAN [12] | 7.5 | 306.1 | 2560 |
| Conv VQ-GAN + Video loss | 13.7 | 346.5 | 2560 |
| ViT VQ-GAN [58] | 3.4 | 166.6 | 2560 |
| ViT VQ-GAN + Video loss | 3.8 | 173.1 | 2560 |
| C-ViViT VQ-GAN (Ours) | 4.5 | 65.78 | 1536 |
A HYPER-PARAMETERS

Table 6. Hyperparameters used for the C-ViViT architecture and optimizer.

| Symbol | Value | Description |
|---|---|---|
| t_x, w_x, h_x, c_x | 11, 128, 128, 3 | Video dimensions |
| t_p, w_p, h_p, c_p | 2, 8, 8, 3 | Patch dimensions (all frames except the first one) |
| t_z, w_z, h_z | 6, 16, 16 | Video token dimensions (before linear projection) |
| h_z | 512 | Hidden size in the transformer layers |
| d_z | 32 | Embedding dimension (after linear projection) |
| - | 4 | Number of layers in the spatial transformer |
| - | 4 | Number of layers in the temporal transformer |
| - | 2048 | MLP size |
| \|E\| | 8192 | Codebook size |
| - | AdamW | Optimizer |
| β_1 | 0.9 | First moment of gradient |
| β_2 | 0.99 | Second moment of gradient |
| - | 1e-4 | Learning rate |
| - | 1e-4 | Weight decay |
| - | Cosine decay | Learning rate scheduler |
| - | 1M | Target number of training steps for the learning rate scheduler |
| - | 100K | Warmup steps |
| - | 10 | Gradient clipping magnitude |
| - | 1028 | Batch size |

Table 7. Hyperparameters used for the MaskGIT architecture and optimizer.

| Symbol | Value | Description |
|---|---|---|
| \|z\| | 1536 | Sequence length |
| - | 24 | Number of layers |
| - | 2048 | Embedding dimension |
| - | 8192 | MLP dimension |
| - | 32 | Number of heads |
| - | AdamW | Optimizer |
| β_1 | 0.9 | First moment of gradient |
| β_2 | 0.99 | Second moment of gradient |
| - | 1e-4 | Learning rate |
| - | 1e-4 | Weight decay |
| - | Cosine decay | Learning rate scheduler |
| - | 4M | Target number of training steps for the learning rate scheduler |
| - | 10K | Warmup steps |
| - | 10 | Gradient clipping magnitude |
| - | 512 | Batch size |

B DETAILS OF EXPERIMENTS

B.1 VIDEO QUANTIZATION

B.1.1 NETWORK ARCHITECTURE
All encoder-decoder baselines have approximately 50M parameters. The convolutional baseline encoder consists of 5 convolutional blocks with channel multipliers of [1, 1, 2, 2, 4], 2 residual layers and 128 hidden units per block, and an embedding dimension of 256. The ViT baseline encoder consists of an image patchification step over non-overlapping 8 × 8 spatial patches which are linearly transformed into image tokens, followed by 8 transformer layers with 512 hidden units, 8 attention heads, 2048 MLP units, and an embedding dimension of 32. The C-ViViT encoder patches the first frame into non-overlapping 8 × 8 patches and the rest of the frames into non-overlapping 2 × 8 × 8 spatio-temporal patches, which are linearly transformed into video embeddings; it then applies 4 spatial and 4 temporal transformer layers with 512 hidden units, 8 attention heads, 2048 MLP hidden units, and an embedding dimension of 32. The decoder architecture for all models is the same as the encoder but in reverse, mapping the latent embeddings back to image space. The VQ objective is trained with a commitment loss of β = 0.25 and a codebook size of 8192. The discriminator architecture is the StyleGAN [21] discriminator with blur resample and a channel multiplier of 1.

B.1.2 TRAINING
We train all encoder-decoder baselines with StyleGAN [21] discriminators with a batch size of 128 using the Adam optimizer [23] with β_1 = 0.9 and β_2 = 0.99. We use a linear learning rate warmup to a peak value of 1 × 10^-4 over 100,000 steps, decaying over the remaining 900,000 steps with a cosine schedule, and use a decoupled weight decay [26] of 1 × 10^-4 for the encoder-decoder and discriminator. To capture longer time horizons during training and better evaluate temporal coherence, we downsample the MiT dataset from 25 FPS to 6 FPS and evaluate on videos of 10 frames at a spatial resolution of 128 × 128.

B.2 IMAGE CONDITIONAL VIDEO GENERATION

B.2.1 BAIR ROBOT PUSH C-VIVIT ARCHITECTURE
We use a similar setup as in Section B.1, but the video tokenization step is done over 4 × 4 spatial patches on the first image and 2 × 4 × 4 spatio-temporal patches in the rest of the video. The spatial encoder consists of 8 layers and the temporal encoder consists of 6 layers.

B.2.2 KINETICS-600 C-VIVIT ARCHITECTURE
We use a similar setup as in Section B.2.1, but both the spatial encoder and the temporal encoder consist of 8 layers.

B.2.3 MASKGIT ARCHITECTURE
To perform video prediction in latent space on the BAIR Robot Push and Kinetics-600 datasets, we use an unconditional transformer architecture consisting of 24 layers, 768 hidden units, 16 attention heads, dropout and attention dropout rates of 0.1, and 3072 MLP hidden units.

B.2.4 TRAINING AND INFERENCE
As described in Table 7, we train C-ViViT with the same optimizer setup as in Section B.1, but we do not downsample the FPS of any of the datasets in this section, for fair comparison with the video prediction baselines. We train MaskGIT on the video tokens extracted using C-ViViT in an unconditional setting, that is, we do not assume frames or text inputs to be given. During training, we use the Adam [23] optimizer with β_1 = 0.9 and β_2 = 0.99. We use a linear learning rate warmup up to a peak value of 1 × 10^-4 over 10,000 steps, and a constant learning rate schedule for ∼2M steps. At inference time, we initialize MaskGIT given a number of input frames, and predict the rest of the frames depending on the dataset on which we evaluate.

B.3 TEXT CONDITIONAL VIDEO GENERATION

B.3.1 ARCHITECTURE
For text conditional video generation, we use the same C-ViViT architecture and training described in Section B.1. To train MaskGIT, we include text conditioning in the form of T5X embeddings [37], which are used as input through cross attention with the video tokens. We reduce the number of parameters of our base model for fairness in the quantitative comparisons against NUWA: the MaskGIT architecture used against NUWA consists of 20 transformer layers with 1536 hidden units, 24 attention heads, and 6144 MLP hidden units, resulting in 0.9B parameters, similar to NUWA. For the main experiments in this paper, we use a larger architecture that consists of 24 transformer layers with 2048 hidden units, 32 attention heads, and 8192 MLP hidden units, resulting in 1.8B parameters.

B.3.2 TRAINING AND INFERENCE
For all our text-conditional video generation experiments, we use the training parameters in Table 7.

B.3.3 INFERENCE PARAMETERS AGAINST NUWA
We use λ = 0.1, 12 MaskGIT iterations, and a temperature of 4.0.

B.3.4 INFERENCE PARAMETERS FOR ABLATION OF IMAGE AND VIDEO DATA FOR TRAINING
We use λ = 6, 24 MaskGIT iterations, and a temperature of 4.0.

B.3.5 INFERENCE PARAMETERS FOR ALL VIDEOS IN THE PAPER
We use λ = 12, 48 MaskGIT iterations, and a temperature of 8.0.
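The optimizer settings in Tables 6 and 7 (AdamW with β_1 = 0.9, β_2 = 0.99, a linear warmup to 1e-4 followed by cosine decay, weight decay of 1e-4, and gradient clipping at 10) map naturally onto optax; the following is an illustrative sketch under those assumptions, not the authors' training configuration.

```python
import optax

# Linear warmup to the peak learning rate, then cosine decay (Table 6 values;
# Table 7 would use 10K warmup steps and a 4M-step horizon instead).
schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=1e-4,
    warmup_steps=100_000,
    decay_steps=1_000_000,
)

optimizer = optax.chain(
    optax.clip_by_global_norm(10.0),  # gradient clipping magnitude
    optax.adamw(learning_rate=schedule, b1=0.9, b2=0.99, weight_decay=1e-4),
)
```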
REFERENCES

Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. ViViT: A video vision transformer. In ICCV, 2021.
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, and Sergey Levine. Stochastic variational video prediction. In ICLR, 2018.
Mohammad Babaeizadeh, Mohammad Taghi Saffar, Suraj Nair, Sergey Levine, Chelsea Finn, and Dumitru Erhan. FitVid: Overfitting in pixel-level video prediction. arXiv preprint arXiv:2106.13195, 2021.
Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1728-1738, 2021.
Yogesh Balaji, Martin Renqiang Min, Bing Bai, Rama Chellappa, and Hans Peter Graf. Conditional GAN with discriminative filter generation for text-to-video synthesis. In IJCAI, 2019.
Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR, 2017.
Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about Kinetics-600, 2018.
Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman. MaskGIT: Masked generative image transformer. arXiv preprint arXiv:2202.04200, 2022.
Aidan Clark, Jeff Donahue, and Karen Simonyan. Adversarial video generation on complex datasets. arXiv preprint arXiv:1907.06571, 2019.
Emily Denton and Rob Fergus. Stochastic video generation with a learned prior. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1174-1183, 2018.
Frederik Ebert, Chelsea Finn, Alex X. Lee, and Sergey Levine. Self-supervised visual planning with temporal skip connections, 2017.
Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis, 2020.
Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems, pages 64-72, 2016.
William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, and Frank Wood. Flexible diffusion modeling of long videos. arXiv preprint arXiv:2205.11495, 2022.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance, 2021.
Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. arXiv preprint arXiv:2204.03458, 2022.
Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. CogVideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022.
Tobias Höppe, Arash Mehrjou, Stefan Bauer, Didrik Nielsen, and Andrea Dittadi. Diffusion models for video prediction and infilling. arXiv preprint arXiv:2206.07696, 2022.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In CVPR, 2020.
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics human action video dataset, 2017.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, and Durk Kingma. VideoFlow: A flow-based generative model for video. arXiv preprint arXiv:1903.01434, 2019.
Yitong Li, Martin Min, Dinghan Shen, David Carlson, and Lawrence Carin. Video generation from text. In AAAI, 2018.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019.
Pauline Luc, Aidan Clark, Sander Dieleman, Diego de Las Casas, Yotam Doron, Albin Cassirer, and Karen Simonyan. Transformation-based adversarial video prediction on large-scale data. arXiv preprint arXiv:2003.04035, 2020.
Guillaume Le Moing, Jean Ponce, and Cordelia Schmid. CCVS: Context-aware controllable video synthesis. In NeurIPS, 2021.
Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfruend, Carl Vondrick, et al. Moments in Time dataset: one million videos for event understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
Arsha Nagrani, Paul Hongsuck Seo, Bryan Andrew Seybold, Anja Hauth, Santiago Manen, Chen Sun, and Cordelia Schmid. Learning audio-video modalities from image captions. In ECCV, 2022.
Charlie Nash, João Carreira, Jacob Walker, Iain Barr, Andrew Jaegle, Mateusz Malinowski, and Peter Battaglia. Transframer: Arbitrary frame prediction with generative models. arXiv preprint arXiv:2203.09494, 2022.
Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
Ruslan Rakhimov, Denis Volkhonskiy, Alexey Artemov, Denis Zorin, and Evgeny Burnaev. Latent video transformer. arXiv preprint arXiv:2006.10704, 2020.
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821-8831. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 2022.
MarcAurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, et al. Scaling up models and data with t5x and seqio. arXiv preprint arXiv:2203.17189, 2022.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
Masaki Saito, Eiichi Matsumoto, and Shunta Saito. Temporal generative adversarial nets with singular value clipping. In Proceedings of the IEEE International Conference on Computer Vision, pages 2830-2839, 2017.
Masaki Saito, Shunta Saito, Masanori Koyama, and Sosuke Kobayashi. Train sparsely, generate densely: Memory-efficient unsupervised training of high-resolution temporal GAN. International Journal of Computer Vision, 128(10):2586-2606, 2020.
Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. LAION-400M: Open dataset of CLIP-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using LSTMs. In International Conference on Machine Learning, 2015.
Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. MoCoGAN: Decomposing motion and content for video generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1526-1535, 2018.
Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018.
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In NeurIPS, 2018.
Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V Le, and Honglak Lee. High fidelity video prediction with large stochastic recurrent neural networks. In Advances in Neural Information Processing Systems, pages 81-91, 2019.
Mcvd: Masked conditional video diffusion for prediction, generation, and interpolation. Vikram Voleti, Alexia Jolicoeur-Martineau, Christopher Pal, arXiv:2205.09853arXiv preprintVikram Voleti, Alexia Jolicoeur-Martineau, and Christopher Pal. Mcvd: Masked conditional video diffusion for prediction, generation, and interpolation. arXiv preprint arXiv:2205.09853, 2022.
Generating videos with scene dynamics. Carl Vondrick, Hamed Pirsiavash, Antonio Torralba, arXiv:1609.02612arXiv preprintCarl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dy- namics. arXiv preprint arXiv:1609.02612, 2016.
. Jacob Walker, Ali Razavi, Aäron Van Den Oord, arXiv:2103.01950Predicting video with vqvae. arXiv preprintJacob Walker, Ali Razavi, and Aäron van den Oord. Predicting video with vqvae. arXiv preprint arXiv:2103.01950, 2019.
Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms. Advances in neural information processing systems. Yunbo Wang, Mingsheng Long, Jianmin Wang, Zhifeng Gao, Philip S Yu, 30Yunbo Wang, Mingsheng Long, Jianmin Wang, Zhifeng Gao, and Philip S Yu. Predrnn: Re- current neural networks for predictive learning using spatiotemporal lstms. Advances in neural information processing systems, 30, 2017.
Scaling autoregressive video models. Dirk Weissenborn, Oscar Täckström, Jakob Uszkoreit, ICLR. Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video mod- els. In ICLR, 2020.
Chenfei Wu, Lun Huang, Qianxi Zhang, Binyang Li, Lei Ji, Fan Yang, Guillermo Sapiro, Nan Duan Godiva, arXiv:2104.14806Generating open-domain videos from natural descriptions. arXiv preprintChenfei Wu, Lun Huang, Qianxi Zhang, Binyang Li, Lei Ji, Fan Yang, Guillermo Sapiro, and Nan Duan. Godiva: Generating open-domain videos from natural descriptions. arXiv preprint arXiv:2104.14806, 2021.
Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, arXiv:2207.09814Yuejian Fang, and Nan Duan. Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. arXiv preprintChenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, Yuejian Fang, and Nan Duan. Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. arXiv preprint arXiv:2207.09814, 2022.
NÜwa: Visual synthesis pre-training for neural visual world creation. Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, Nan Duan, ECCV. 2022Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, and Nan Duan. NÜwa: Visual synthesis pre-training for neural visual world creation. In ECCV, 2022.
Videogpt: Video generation using vq-vae and transformers. Wilson Yan, Yunzhi Zhang, Pieter Abbeel, Aravind Srinivas, arXiv:2104.10157arXiv preprintWilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2019.
Diffusion probabilistic modeling for video generation. Ruihan Yang, Prakhar Srivastava, Stephan Mandt, arXiv:2203.09481arXiv preprintRuihan Yang, Prakhar Srivastava, and Stephan Mandt. Diffusion probabilistic modeling for video generation. arXiv preprint arXiv:2203.09481, 2022.
Harp: Autoregressive latent video prediction with high-fidelity image generator. Fangchen Liu Stephen James Pieter Abbeel Younggyo Seo, Kimin Lee, arXiv:2209.07143arXiv preprintFangchen Liu Stephen James Pieter Abbeel Younggyo Seo, Kimin Lee. Harp: Autoregressive latent video prediction with high-fidelity image generator. arXiv preprint arXiv:2209.07143, 2022.
Vector-quantized image modeling with improved vqgan. Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, Yonghui Wu, ICLR. 2022Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. In ICLR, 2022.
Scaling autoregressive models for content-rich text-to-image generation. Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Ben Burcu Karagol Ayan, Wei Hutchinson, Zarana Han, Xin Parekh, Han Li, Jason Zhang, Yonghui Baldridge, Wu, arXiv:2206.10789arXiv preprintJiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Va- sudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, and Yonghui Wu. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022.
Scaling vision transformers. Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, Lucas Beyer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionXiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision trans- formers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recog- nition, pages 12104-12113, 2022.
The unreasonable effectiveness of deep features as a perceptual metric. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, Oliver Wang, CVPRRichard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, , and Oliver Wang. The unrea- sonable effectiveness of deep features as a perceptual metric. CVPR, 2018. |
13,002,849 | MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS | Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem. * Authors contributed equally. | [] | MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS
Tong Che†, Yanran Li†, Athul Paul Jacob, Yoshua Bengio, Wenjie Li
Montreal Institute for Learning Algorithms, Université de Montréal, Montréal, QC H3T 1J4, Canada
Department of Computing, The Hong Kong Polytechnic University, Hong Kong
David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Contact: [email protected], [email protected]

Published as a conference paper at ICLR 2017
Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem. * Authors contributed equally.
INTRODUCTION
Generative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potential on various tasks, such as image generation, image super-resolution, 3D object generation, and video prediction (Radford et al., 2015;Ledig et al., 2016;Sønderby et al., 2016;Nguyen et al., 2016;Wu et al., 2016;Mathieu et al., 2015). The objective is to train a parametrized function (the generator) which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to that of the data generating distribution. The basic scheme of the GAN training procedure is to train a discriminator which assigns higher probabilities to real data samples and lower probabilities to generated data samples, while simultaneously trying to move the generated samples towards the real data manifold using the gradient information provided by the discriminator. In a typical setting, the generator and the discriminator are represented by deep neural networks.
Despite their success, GANs are generally considered as very hard to train due to training instability and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the missing modes problem, which is widely conceived as a major problem for training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.

This issue has been the subject of several recent papers proposing several tricks and new architectures to stabilize GAN's training and encourage its samples' diversity. However, we argue that a general cause behind these problems is the lack of control on the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds. In fact, the shape of the discriminator function in the data space can be very non-linear with bad plateaus and wrong maxima, and this can therefore hurt the training of GANs (Figure 1).

To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones.
Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing mode problem all at once, with positive or at the least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline.
RELATED WORK
The GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator and the discriminator are defined by deep neural networks.
In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector that allows the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map Wang & Gupta (2016), image synthesis from text Reed et al. (2016) and edge map Isola et al. (2016), real-time image manipulation, temporal image generation Zhou & Berg (2016); Saito & Matsumoto (2016); Vondrick et al. (2016), texture synthesis, style transfer, and video stylization Li & Wand (2016).
Researchers also aim at stretching GAN's limit to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully designs a class of deep convolutional generative adversarial networks which has led to significant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs are through feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses for training objectives for generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for the sample visual fidelity. Recent literature have also shown impressive results on image super-resolution to infer photo-realistic natural images for 4x upscaling factors Ledig et al. (2016);Sønderby et al. (2016); Nguyen et al. (2016).
Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN's training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose feature matching technique to stabilize GAN's training. The generator is required to match the statistics of intermediate features of the discriminator. Similar idea is adopted by Zhao et al. (2016).
In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GAN's training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016;Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces produced either by the application of the encoder on the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with the regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. Parallelly, Metz et al. (2016) stabilize GANs by unrolling the optimization of discriminator, which can be considered as an orthogonal work with ours.
Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as regularizers that penalize missing modes and thus improve GAN's training stability and sample quality. We demonstrate detailed differences from various aspects in Appendix D.
MODE REGULARIZERS FOR GANS
The GAN training procedure can be viewed as a non-cooperative two player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. Then the generator G has to take advantage of the local gradient ∇ log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold.
We now take a closer look at the root cause of the instabilities while training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014);Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and 0 on the generation manifold. In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example,Denton et al. (2015) noted a common failure pattern for training GANs which is the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples, D is nearly zero. In such cases, the generator will receive no gradient to improve itself. 1 Another important problem while training GANs is mode missing. In theory, if the generated data and the real data come from the same low dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1. However, in practice since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes.
GEOMETRIC METRICS REGULARIZER
Compared with the objective for the GAN generator, the optimization targets for supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator. While in supervised models, the optimization targets are distance functions with nice geometric properties. The latter usually provides much easier training gradients than the former, especially at the early stages of training.
Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z) : Z → X generates samples by sampling first from a fixed prior distribution in space Z followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x) : X → Z. Assume d is some similarity metric in the data space, we add E x∼p d [d(x, G•E(x))] as a regularizer, where p d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error.
In practice, there are many options for the distance measure d. For instance, the pixel-wise L 2 distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier. (Ledig et al., 2016) The geometric intuition for this regularizer is straight-forward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, L s metric. The idea of adding an encoder is equivalent to first training a point to point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.
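To make the geometric regularizer concrete, the following sketch computes the pixel-wise L2 reconstruction term E_{x∼p_d}[d(x, G∘E(x))] for a mini-batch. The tiny fully connected encoder/generator and the random batch are placeholders for illustration only; the paper's actual networks are DCGAN-style (see Appendix B).

```python
import torch
import torch.nn as nn

# Illustrative encoder/generator; the paper's actual networks are convolutional.
latent_dim, data_dim = 16, 784
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())

def geometric_regularizer(x, G, E):
    """E_x[ d(x, G(E(x))) ] with d chosen as the pixel-wise squared L2 distance."""
    x_rec = G(E(x))                       # point-to-point mapping G(E(x))
    return ((x - x_rec) ** 2).sum(dim=1).mean()

x = torch.rand(32, data_dim) * 2 - 1      # stand-in for a batch of real data in [-1, 1]
reg = geometric_regularizer(x, G, E)
# This term, weighted by lambda_1, is added to both the generator and encoder
# losses and minimized jointly with the usual GAN objective.
print(float(reg))
```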
MODE REGULARIZER
In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum Σ_i ∇_θ log D(G_θ(z_i)). As illustrated in Figure 2, for most z the gradient of the generator ∇_θ log D(G_θ(z)) pushes the generator towards the major mode M_1. Only when G(z) is very close to the mode M_2 can the generator get gradients to push itself towards the minor mode M_2. However, it is possible that such z is of low or zero probability in the prior distribution p_0.
Given this observation, consider a regularized GAN model with the metric regularizer. Assume M 0 is a minor mode of the data generating distribution. For x ∈ M 0 , we know that if G • E is a good autoencoder, G(E(x)) will be located very close to mode M 0 . Since there are sufficient training examples of mode M 0 in the training data, we add the mode regularizer E x∼p d [log D(G • E(x))] to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve fair probability mass distribution across different modes.
In short, our regularized optimization target for the generator and the encoder becomes:
$$T_G = -\mathbb{E}_z[\log D(G(z))] + \mathbb{E}_{x \sim p_d}\big[\lambda_1\, d(x, G \circ E(x)) + \lambda_2 \log D(G \circ E(x))\big] \qquad (1)$$

$$T_E = \mathbb{E}_{x \sim p_d}\big[\lambda_1\, d(x, G \circ E(x)) + \lambda_2 \log D(G \circ E(x))\big] \qquad (2)$$
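A minimal sketch of how the regularized targets above can be turned into per-batch losses, following the sign convention of the Appendix A pseudo-code (the generator ascends on log D(G∘E(x)) and descends on the reconstruction distance). The helper name, the assumption that D ends in a sigmoid, and the default weights λ1 = 0.2 and λ2 = 0.4 (the values used in the grid search of Section 4.1.1) are implementation choices, not prescribed by the paper.

```python
import torch

def regularized_gan_losses(D, G, E, x_real, z, lambda1=0.2, lambda2=0.4, eps=1e-8):
    """Per-batch losses (to minimize) for the generator and the encoder."""
    x_rec = G(E(x_real))                                       # G o E (x)
    rec = ((x_real - x_rec) ** 2).flatten(1).sum(1).mean()     # d(x, G(E(x)))

    # -E_z[log D(G(z))]: the usual non-saturating generator term.
    g_fake = -torch.log(D(G(z)) + eps).mean()
    # Mode regularizer: encourage D(G(E(x))) to be high, so it enters with a minus log.
    g_mode = -torch.log(D(x_rec) + eps).mean()

    t_g = g_fake + lambda1 * rec + lambda2 * g_mode            # generator loss
    t_e = lambda1 * rec + lambda2 * g_mode                     # encoder loss
    return t_g, t_e
```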
MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS
On some large scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples.
The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.
An example of manifold-diffusion training of GAN (MDGAN for short) is as follows: we train a discriminator D 1 which separates between the samples x and G • E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[log D 1 (G•E(x))+λd(x, G•E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D 2 between distributions G(z) and G • E(x), and we train G to maximize log D 2 (G(z)). Since these two distributions are now nearly on the same low dimensional manifold, the discriminator D 2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples.
EVALUATION METRICS FOR MODE MISSING
In order to estimate both the missing modes and the sample qualities in our experiments, we used several different metrics for different experiments instead of human annotators.
The inception score (Salimans et al., 2016) was considered as a good assessment for sample quality from a labelled dataset:
$$\exp\big(\mathbb{E}_x\, \mathrm{KL}(p(y|x)\,\|\,p^*(y))\big) \qquad (3)$$
where x denotes one sample, p(y|x) is the softmax output of a trained classifier of the labels, and p*(y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classifier usually has high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and p*(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment for both visual quality and variety of samples, the MODE score:
$$\exp\big(\mathbb{E}_x\, \mathrm{KL}(p(y|x)\,\|\,p(y)) - \mathrm{KL}(p^*(y)\,\|\,p(y))\big) \qquad (4)$$
where p(y) is the distribution of labels in the training data. According to our human evaluation experiences, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.
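A small sketch of the MODE score (and, for comparison, the inception score) computed from the soft-max outputs of a pre-trained classifier; the classifier itself and the toy arrays below are placeholders.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def mode_score(pred_probs, train_label_dist):
    """pred_probs: (N, K) soft-max outputs p(y|x) of a pre-trained classifier on
    N generated samples; train_label_dist: (K,) label distribution p(y)."""
    p_star = pred_probs.mean(axis=0)                                   # p*(y) over generated samples
    term1 = np.mean([kl(p, train_label_dist) for p in pred_probs])     # E_x KL(p(y|x) || p(y))
    term2 = kl(p_star, train_label_dist)                               # KL(p*(y) || p(y))
    return np.exp(term1 - term2)

def inception_score(pred_probs):
    p_star = pred_probs.mean(axis=0)
    return np.exp(np.mean([kl(p, p_star) for p in pred_probs]))

# Toy usage with random soft-max outputs over 10 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=256)
print(mode_score(probs, np.full(10, 0.1)), inception_score(probs))
```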
However, in datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (See (Goodfellow et al., 2014) for proof):
$$D^*(s) \approx \frac{p_g(s)}{p_g(s) + p_d(s)} \qquad (5)$$
Where p g is the probability density of the generator and p d is the density of the data distribution.
To prevent D* from learning a perfect 0-1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D*. After training, we test D* on the test set T of the real dataset.
If for any test sample t ∈ T, the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.
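The sketch below illustrates this missing-mode estimator on toy data. It assumes the labeling convention that D* is trained to output 1 for real data and 0 for generated data, so that a real test point with no generated mass nearby keeps a value near 1; the network size, noise level, threshold, and training schedule are illustrative choices.

```python
import torch
import torch.nn as nn

def estimate_missing_mode_mass(real_train, fake_train, real_test,
                               noise_std=0.1, thresh=0.95, epochs=200):
    """Train a third-party discriminator D* (real=1, generated=0) on noisy inputs,
    then count real test points it still classifies as real with high confidence."""
    d = nn.Sequential(nn.Linear(real_train.shape[1], 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())
    opt = torch.optim.Adam(d.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    for _ in range(epochs):
        for batch, label in ((real_train, 1.0), (fake_train, 0.0)):
            noisy = batch + noise_std * torch.randn_like(batch)  # keeps D* from a perfect 0-1 split
            loss = bce(d(noisy), torch.full((batch.shape[0], 1), label))
            opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        scores = d(real_test).squeeze(1)
    return int((scores > thresh).sum())   # count of test points on (apparently) missing modes

# Toy usage: the "generator" covers only the mode at -3, missing the mode at +3.
real = torch.cat([torch.randn(500, 2) - 3, torch.randn(500, 2) + 3])
fake = torch.randn(1000, 2) - 3
print(estimate_missing_mode_mass(real, fake, real))   # test set reused from real data for brevity
```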
EXPERIMENTS
MNIST
We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold.
GRID SEARCH FOR MNIST GAN MODELS
In order to systemically explore the effect of our proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large scale grid search of different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ 1 = 0.2 and λ 2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016). Please refer to it for detailed explanations regarding these hyper-parameters.
For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE score is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits on stabilizing GANs and improving sample qualities. To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different λ 1 = λ 2 . The results are shown in Figure 4.
COMPOSITIONAL MNIST DATA WITH 1000 MODES
In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits to a number in [0,999] in a single 64x64 image, and then train DCGAN as a baseline model on the 1000 modes dataset. The digits on the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models. The performances on the compositional experiment are measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and KL divergence drop dramatically among all the sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer for preventing the missing modes problem.
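A short sketch of the two metrics used in this experiment, assuming a pre-trained classifier has already mapped every generated image to a number in [0, 999]; the sample counts and toy data at the bottom are only for illustration.

```python
import numpy as np

def miss_and_kl(pred_numbers, train_numbers, n_modes=1000, eps=1e-12):
    """#Miss: how many of the 1000 three-digit modes the model never generates.
    KL: divergence between the classifier-reported distribution of generated
    numbers and the number distribution of the training data."""
    gen_counts = np.bincount(pred_numbers, minlength=n_modes).astype(float)
    train_counts = np.bincount(train_numbers, minlength=n_modes).astype(float)
    n_miss = int((gen_counts == 0).sum())
    p_gen = gen_counts / gen_counts.sum()
    p_train = train_counts / train_counts.sum()
    kl = float(np.sum(p_gen * np.log((p_gen + eps) / (p_train + eps))))
    return n_miss, kl

# Toy usage: a generator that never emits numbers in 900-999.
rng = np.random.default_rng(0)
gen = rng.integers(0, 900, size=26_000)
train = rng.integers(0, 1000, size=60_000)
print(miss_and_kl(gen, train))
```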
CELEBA
To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters together with the DCGAN baseline on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.
MISSING MODES ESTIMATION ON CELEBA
We also employ a third party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise in the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as being on the missing modes. The comparison result is shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform baseline DCGAN models on all settings. In particular, MDGAN surpasses the other models, showing its superiority at mode preservation. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than that with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently.
To get a better understanding of the models' performance, we want to figure out when and where these models miss the modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which thus can be viewed as small modes under this situation. These three images should be considered as the hardest test data for GAN to learn. Nonetheless, our best model, MDGAN still capture certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The sideface, paleface, black, and the berets are special attributes among these images, but our proposed MDGAN performs well on all of them.
QUALITATIVE EVALUATION OF GENERATED SAMPLES
After quantitative evaluation, we manually examine the generated samples by our regularized GAN to see whether the proposed regularizer has side-effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6 (see footnote 2).

Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally have sharp textures.
Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.
As to sample quality, it is worth noting that the samples from MDGAN enjoy fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion. However, for the samples generated by MDGAN, the level of distortion is lower compared with the other four compared models. We attribute it to the help of the autoencoder as the regularizer to alter the generation manifolds. In this way, the generator is able to learn fine-grained details such as face edges. As a result, MDGAN is able to reduce distortions. In terms of missing modes problem, we instructed five individuals to conduct human evaluation on the generated samples. They achieve consensus that MDGAN wins in terms of mode diversities. Two people pointed out that MDGAN generates a larger amount of samples with side faces than other models. We select several of these side face samples in Figure 7. Clearly, our samples maintain acceptable visual fidelity meanwhile share diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring benefits for both training stability and mode variety without the loss of sample quality.
CONCLUSIONS
Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks, training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while, missing modes from the data distribution or even collapsing large amounts of probability mass on some modes. Successful GAN training usually requires large amounts of human and computing efforts to fine tune the hyper-parameters, in order to stabilize training and avoid collapsing.
Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.
We provide systematic ways to measure and avoid the missing modes problem and stabilize training with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics can provide more stable gradients than trained discriminators, and when combined with the encoder, they can be used as regularizers for training. These regularizers can also penalize missing modes and encourage a fair distribution of probability mass on the generation manifold.
A APPENDIX: PSEUDO CODE FOR MDGAN
In this Appendix, we give the detailed training procedure of an MDGAN example we discuss in Section 3.3.
Manifold Step:
1. Sample {x_1, x_2, ..., x_m} from the data generating distribution p_data(x).
2. Update discriminator D_1 using SGD with gradient ascent:
$$\nabla_{\theta_{d_1}} \frac{1}{m}\sum_{i=1}^{m}\big[\log D_1(x_i) + \log(1 - D_1(G(E(x_i))))\big]$$
3. Update generator G using SGD with gradient ascent:
$$\nabla_{\theta_g} \frac{1}{m}\sum_{i=1}^{m}\big[\lambda \log D_1(G(E(x_i))) - \|x_i - G(E(x_i))\|^2\big]$$

Diffusion Step:
4. Sample {x_1, x_2, ..., x_m} from the data generating distribution p_data(x).
5. Sample {z_1, z_2, ..., z_m} from the prior distribution p_σ(z).
6. Update discriminator D_2 using SGD with gradient ascent:
$$\nabla_{\theta_{d_2}} \frac{1}{m}\sum_{i=1}^{m}\big[\log D_2(G(E(x_i))) + \log(1 - D_2(G(z_i)))\big]$$
7. Update generator G using SGD with gradient ascent:
$$\nabla_{\theta_g} \frac{1}{m}\sum_{i=1}^{m}\big[\log D_2(G(z_i))\big]$$
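A compact PyTorch rendering of the procedure above; the function signature, the sigmoid-output discriminators, and the weight `lam` are implementation assumptions, and step 6 is written with D_2(G(z_i)), as in Section 3.3.

```python
import torch

def mdgan_step(G, E, D1, D2, opt_g, opt_e, opt_d1, opt_d2, x, z, lam=1e-2, eps=1e-8):
    """One manifold step followed by one diffusion step (Appendix A)."""
    # ---- Manifold step: match G(E(x)) to x under D1 ----
    rec = G(E(x))
    d1_loss = -(torch.log(D1(x).view(-1) + eps)
                + torch.log(1 - D1(rec.detach()).view(-1) + eps)).mean()
    opt_d1.zero_grad(); d1_loss.backward(); opt_d1.step()

    rec = G(E(x))
    recon = ((x - rec) ** 2).flatten(1).sum(1)
    ge_loss = -(lam * torch.log(D1(rec).view(-1) + eps) - recon).mean()  # ascend lam*log D1 - ||x - G(E(x))||^2
    opt_g.zero_grad(); opt_e.zero_grad(); ge_loss.backward(); opt_g.step(); opt_e.step()

    # ---- Diffusion step: match G(z) to G(E(x)) under D2 ----
    with torch.no_grad():
        rec = G(E(x))
    d2_loss = -(torch.log(D2(rec).view(-1) + eps)
                + torch.log(1 - D2(G(z).detach()).view(-1) + eps)).mean()
    opt_d2.zero_grad(); d2_loss.backward(); opt_d2.step()

    g_loss = -torch.log(D2(G(z)).view(-1) + eps).mean()                  # ascend log D2(G(z))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d1_loss.item(), ge_loss.item(), d2_loss.item(), g_loss.item()
```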
B APPENDIX: ARCHITECTURE FOR EXPERIMENTS
We use similar architectures for Compositional MNIST and CelebA experiments. The architecture is based on that found in DCGAN Radford et al. (2015). Apart from the discriminator and generator which are the same as DCGAN, we add an encoder which is the "inverse" of the generator, by reversing the order of layers and replacing the de-convolutional layers with convolutional layers.
One has to pay particular attention to batch normalization layers. In DCGAN, there are batch normalization layers both in the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator: one comes from sampled noise z, the other comes from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of the BN layers shared. In this way, the batch statistics of these two kinds of batches cannot interfere with each other.
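One way to realize this is to keep two BatchNorm modules with tied affine parameters but separate running statistics, as sketched below; the module name and usage are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SharedAffineBatchNorm(nn.Module):
    """BatchNorm with one set of learnable (gamma, beta) but separate batch
    statistics for the two kinds of batches flowing through the generator:
    decodings of sampled noise z and reconstructions G(E(x))."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_z = nn.BatchNorm2d(num_features)
        self.bn_e = nn.BatchNorm2d(num_features)
        # Tie the affine parameters so both paths train the same gamma/beta.
        self.bn_e.weight = self.bn_z.weight
        self.bn_e.bias = self.bn_z.bias

    def forward(self, h, from_encoder: bool):
        return self.bn_e(h) if from_encoder else self.bn_z(h)

# Toy usage inside a generator block:
bn = SharedAffineBatchNorm(64)
h = torch.randn(8, 64, 16, 16)
out_noise_path = bn(h, from_encoder=False)    # uses bn_z statistics
out_encoder_path = bn(h, from_encoder=True)   # uses bn_e statistics, same gamma/beta
```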
C APPENDIX: ADDITIONAL SYNTHESIZED EXPERIMENTS
To demonstrate the effectiveness of the mode-regularized GANs proposed in this paper, we train a very simple GAN architecture on a synthesized 2D dataset, following Metz et al. (2016).
The data is sampled from a mixture of 6 Gaussians, with a standard deviation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise from [0,1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to a real 1D number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4.
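A sketch of this synthetic setup; the hidden width of the discriminator (128) is an assumption, since the text only specifies a single fully connected ReLU layer.

```python
import numpy as np
import torch
import torch.nn as nn

def sample_ring_of_gaussians(n, n_modes=6, radius=5.0, std=0.1, seed=0):
    """2D mixture of 6 Gaussians with std 0.1, means on a circle of radius 5."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * rng.integers(0, n_modes, size=n) / n_modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return torch.tensor(means + std * rng.standard_normal((n, 2)), dtype=torch.float32)

# Generator: two ReLU hidden layers of 128 units, 3D uniform noise -> 2D sample.
G = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 2))
# Discriminator: one fully connected ReLU layer mapping 2D input to a 1D value.
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

x = sample_ring_of_gaussians(512)
z = torch.rand(512, 3)            # uniform noise in [0, 1]^3
print(x.shape, G(z).shape, D(x).shape)
```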
In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distributions from the standard GAN and our proposed regularized GAN is shown in Figure 9.

Figure 9: Comparison results on a toy 2D mixture of Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, the original data distribution. The top row shows the standard GAN result: the generator has a hard time oscillating among the modes of the data distribution, and is only able to "recover" a single data mode at once. In contrast, the bottom row shows results of our regularized GAN: its generator quickly captures the underlying multiple modes and fits the target distribution.
D APPENDIX: COMPARISON WITH VAEGAN
In this appendix section, we demonstrate the effectiveness and uniqueness of mode-regularized GANs proposed in this paper as compared to Larsen et al. (2015) in terms of its theoretical difference, sample quality and number of missing modes.
With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) || p(z)). This variational bound together with a GAN loss is optimized with several assumptions imposed in VAEGAN:
1. In general, VAE is based on the assumption that the true posterior p(z|x) can be well approximated by factorized Gaussian distribution q.
2. As to VAEGAN, it is also assumed that the maximum likelihood objective does not conflict with the GAN objective in terms of the probabilistic framework.
The first assumption does not necessarily hold for GANs. We have found that in some trained models of DCGANs, the real posterior p(z|x) is not even guaranteed to have only one mode, not to mention anything close to a factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. In our algorithm, however, we use a plain auto-encoder instead of a VAE as the objective. Plain auto-encoders work better than VAEs for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E*(x) such that G(E*(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E*. There are no conflicts between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GAN.
In terms of sample quality and missing modes, we run the official code of VAEGAN 3 with their default setting. We train VAEGAN for 30 epochs 4 and our models for only 20 epochs. For fairness, their model was run 3 times and the trained model with the best sample visual quality was taken for the comparison.
The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's samples is the face distortion, which is consistent with our experimental results in Section 4.2.2. We conjecture that the distortions of VAEGAN's samples are due to the conflicts between the two objectives, as we present above. In other words, the way we introduce auto-encoders as regularizers for GAN models is different from VAEGAN's. The difference is that the second assumption mentioned above is not required in our approaches. In our framework, the auto-encoders help alter the generation manifolds, leading to fewer distortions in fine-grained details in our generated samples.

Figure 10: Samples generated by our models and VAEGAN. The third row shows samples generated by our self-trained VAEGAN model, with default settings. The last row shows generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.
In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below.

Table 4: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.

We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs very badly in our metric for missing modes is that the samples it generates are of low quality, so the discriminator classifies them as "not on mode". Namely, the data generated is too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that the model misses all or most modes.
To conduct more fair evaluation between VAEGAN and our methods, we also perform a blind human evaluation. Again we instructed five individuals to conduct this evaluation of sample variability. Without telling them which is generated by VAEGAN and which is generated by our methods, four people agree that our method wins in terms of sample diversity. One person thinks the samples are equally diverse.
In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are different from VAEGAN theoretically as discussed above. Such differences empirically result in better sample quality and mode preserving ability, which are our main contributions.
Figure 1: Samples with very high discrimination values (D=1.0) in DCGAN model trained on CelebA dataset.

Figure 2: Illustration of the missing modes problem.

Figure 3: The distributions of MODE scores for GAN and regularized GAN.

Figure 4: (Left 1-5) Different hyperparameters for MNIST generation. The values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN.

Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods. Columns: σ, DCGAN (100), DCGAN (200), Reg-GAN (100), Reg-GAN (200).

Figure 5: Test set images that are on missing modes. Left: both MDGAN and DCGAN missing. Right: only DCGAN missing.

Figure 7: Sideface samples generated by Regularized-GAN and MDGAN.

Figure 8: The detailed training procedure of an MDGAN example.
Table 1: Grid search for hyperparameters.

nLayerG   [2, 3, 4]
nLayerD   [2, 3, 4]
sizeG     [400, 800, 1600, 3200]
sizeD     [256, 512, 1024]
dropoutD  [True, False]
optimG    [SGD, Adam]
optimD    [SGD, Adam]
lr        [1e-2, 1e-3, 1e-4]
Table 2: Results for Compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) allows to substantially reduce the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (like in the Inception score).

            Set 1          Set 2          Set 3          Set 4
            #Miss  KL      #Miss  KL      #Miss  KL      #Miss  KL
DCGAN       204.7  77.9    204.3  60.2    103.4  75.9    89.3   77.8
Reg-DCGAN   32.1   62.3    71.5   58.9    42.7   68.4    31.6   67.8
This problem exists even when we use log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and in our experiments.
The missing mode problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect, so the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes.
For fair comparison, we also recommend readers to refer to the original papers Dumoulin et al. (2016); Larsen et al. (2015); Radford et al. (2015) for the reported samples of the compared models. The ALI samples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https://github.com/Newmu/dcgan_code/
https://github.com/andersbll/autoencoding_beyond_pixels
Note that we also trained a 20-epoch version of VAEGAN; however, the samples seemed worse.
ACKNOWLEDGEMENTS

We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, and Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of grid search experiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping us run the VAEGAN experiments. We appreciate the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, National Natural Science Foundation of China (61672445 and 61272291), Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6).
Emily L. Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. arXiv, 2016.
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. arXiv preprint arXiv:1604.04382, 2016.
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprint arXiv:1611.06624, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Advances in Neural Information Processing Systems, pp. 613-621, 2016.
Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Neural Information Processing Systems (NIPS), 2016.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
Yipin Zhou and Tamara L. Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262-277. Springer, 2016.
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016. |
239,998,253 | What Do We Mean by Generalization in Federated Learning? | "Federated learning data is drawn from a distribution of distributions: clients are drawn from a met(...TRUNCATED) | [
235613568,
231924480,
211678094,
195798643,
43964415
] | "What Do We Mean by Generalization in Federated Learning?\n\n\nHonglin Yuan \nWarren Morningstar \nL(...TRUNCATED) |
62,841,605 | SPREADING VECTORS FOR SIMILARITY SEARCH | "Discretizing multi-dimensional data distributions is a fundamental step of modern indexing methods.(...TRUNCATED) | [] | "SPREADING VECTORS FOR SIMILARITY SEARCH\n\n\nAlexandre Sablayrolles \nFacebook AI Research Inria\n\(...TRUNCATED) |
253,237,531 | MACHINE UNLEARNING OF FEDERATED CLUSTERS | "Federated clustering (FC) is an unsupervised learning problem that arises in a number of practical (...TRUNCATED) | [] | "MACHINE UNLEARNING OF FEDERATED CLUSTERS\n\n\nChao Pan [email protected] \nDepartment of Electr(...TRUNCATED) |
222,291,443 | CONTRASTIVE EXPLANATIONS FOR REINFORCEMENT LEARNING VIA EMBEDDED SELF PREDICTIONS | "We investigate a deep reinforcement learning (RL) architecture that supports explaining why a learn(...TRUNCATED) | [] | "CONTRASTIVE EXPLANATIONS FOR REINFORCEMENT LEARNING VIA EMBEDDED SELF PREDICTIONS\n\n\nZhengxian Li(...TRUNCATED) |
223,956,716 | FOR SELF-SUPERVISED LEARNING, RATIONALITY IMPLIES GENERALIZATION, PROVABLY | "We prove a new upper bound on the generalization gap of classifiers that are obtained by first usin(...TRUNCATED) | [
6212000,
67855429
] | "FOR SELF-SUPERVISED LEARNING, RATIONALITY IMPLIES GENERALIZATION, PROVABLY\n\n\nYamini Bansal \nHar(...TRUNCATED) |
263,605,472 | MULTI-TASK LEARNING WITH 3D-AWARE REGULARIZATION | "Deep neural networks have become a standard building block for designing models that can perform mu(...TRUNCATED) | [] | "MULTI-TASK LEARNING WITH 3D-AWARE REGULARIZATION\n\n\nWei-Hong Li \nUniversity of Edinburgh\n\n\nSt(...TRUNCATED) |
212,996,548 | LITE TRANSFORMER WITH LONG-SHORT RANGE ATTENTION | "Transformer has become ubiquitous in natural language processing (e.g., machine translation, questi(...TRUNCATED) | [91184134,6628106,2134321,59310641,9545399,52892477,964287,54438210,3508167,44131019,159041867,19984(...TRUNCATED) | "LITE TRANSFORMER WITH LONG-SHORT RANGE ATTENTION\n\n\nZhanghao Wu [email protected] \nMassachusetts Inst(...TRUNCATED) |
202,719,276 | ROBUST LOCAL FEATURES FOR IMPROVING THE GENERALIZATION OF ADVERSARIAL TRAINING | "Adversarial training has been demonstrated as one of the most effective methods for training robust(...TRUNCATED) | [
67855552,
58006571,
3604396,
6706414,
3488815,
17707860,
54101493,
53483414,
52898972
] | "ROBUST LOCAL FEATURES FOR IMPROVING THE GENERALIZATION OF ADVERSARIAL TRAINING\n\n\nChubiao Song cb(...TRUNCATED) |
LitSearch: A Retrieval Benchmark for Scientific Literature Search
This dataset contains the query set and retrieval corpus for our paper LitSearch: A Retrieval Benchmark for Scientific Literature Search. We introduce LitSearch, a retrieval benchmark comprising 597 realistic literature search queries about recent ML and NLP papers. LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and (2) questions about recently published papers, manually written by their authors. All LitSearch questions were manually examined or edited by experts to ensure high quality.
This dataset contains three configurations:
- query: containing 597 queries accompanied by gold paper IDs, specificity and quality annotations, and metadata about the source of the query.
- corpus_clean: containing 64183 documents. We provide the extracted titles, abstracts and outgoing citation paper IDs.
- corpus_s2orc: containing the same set of 64183 documents expressed in the Semantic Scholar Open Research Corpus (S2ORC) schema along with all available metadata.
Each configuration has a single 'full' split.
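The descriptions above are informal; the exact column names and types of each configuration are easiest to confirm programmatically. The following is a minimal sketch (not part of the official card) that prints the schema and one example record per configuration:

```python
from datasets import load_dataset

# Print row counts, feature schemas, and one sample record per configuration.
# Note: corpus_s2orc records can be large, so the sample print may be verbose.
for config in ["query", "corpus_clean", "corpus_s2orc"]:
    ds = load_dataset("princeton-nlp/LitSearch", config, split="full")
    print(f"{config}: {len(ds)} rows")
    print(ds.features)  # column names and types
    print(ds[0])        # one example record
```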
Usage
You can load the configurations as follows:
from datasets import load_dataset
query_data = load_dataset("princeton-nlp/LitSearch", "query", split="full")
corpus_clean_data = load_dataset("princeton-nlp/LitSearch", "corpus_clean", split="full")
corpus_s2orc_data = load_dataset("princeton-nlp/LitSearch", "corpus_s2orc", split="full")
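To illustrate how the query and corpus configurations fit together, below is a minimal lexical-overlap retrieval sketch. It is not the retrieval setup from the paper: the corpus field names (corpusid, title, abstract) follow the preview above, while the query text field name (query) is an assumption for illustration and should be checked against the actual schema (e.g. via .features) before use.

```python
import re
from collections import Counter

from datasets import load_dataset

def tokenize(text: str) -> Counter:
    # Lowercased word tokens; a deliberately naive stand-in for a real retriever.
    return Counter(re.findall(r"[a-z0-9]+", (text or "").lower()))

# Bag-of-words index over titles + abstracts from the clean corpus.
# Assumed field names: "corpusid", "title", "abstract" (as in the preview above).
corpus = load_dataset("princeton-nlp/LitSearch", "corpus_clean", split="full")
index = {
    doc["corpusid"]: tokenize(f'{doc["title"] or ""} {doc["abstract"] or ""}')
    for doc in corpus
}

def retrieve(query_text: str, k: int = 5):
    # Rank documents by the number of overlapping query tokens.
    q = tokenize(query_text)
    scores = {cid: sum((q & bow).values()) for cid, bow in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Assumed field name: "query" holds the natural-language search question.
queries = load_dataset("princeton-nlp/LitSearch", "query", split="full")
example = queries[0]
print(example["query"])
print("top-5 corpus ids:", retrieve(example["query"]))
```

Scanning all 64183 documents this way is slow and only meant to show how corpus_clean IDs line up with the gold paper IDs provided by the query configuration; a real baseline would use BM25 or a dense retriever.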