lev1 committed on
Commit
2a84613
1 Parent(s): 63074f5

required readme

Files changed (1)
  1. README.md +10 -460
README.md CHANGED
@@ -1,463 +1,13 @@
1
-
2
-
3
-
4
- # Text2Video-Zero
5
-
6
- This repository is the official implementation of [Text2Video-Zero](https://arxiv.org/abs/2303.13439).
7
-
8
-
9
- **[Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators](https://arxiv.org/abs/2303.13439)**
10
- </br>
11
- Levon Khachatryan,
12
- Andranik Movsisyan,
13
- Vahram Tadevosyan,
14
- Roberto Henschel,
15
- [Zhangyang Wang](https://www.ece.utexas.edu/people/faculty/atlas-wang), Shant Navasardyan, [Humphrey Shi](https://www.humphreyshi.com)
16
- </br>
17
-
18
- [Paper](https://arxiv.org/abs/2303.13439) | [Video](https://www.dropbox.com/s/uv90mi2z598olsq/Text2Video-Zero.MP4?dl=0) | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/PAIR/Text2Video-Zero) | [Project](https://text2video-zero.github.io/)
19
-
20
-
21
- <p align="center">
22
- <img src="__assets__/github/teaser/teaser_final.png" width="800px"/>
23
- <br>
24
- <em>Our method Text2Video-Zero enables zero-shot video generation using (i) a textual prompt (see rows 1, 2), (ii) a prompt combined with guidance from poses or edges (see lower right), and (iii) Video Instruct-Pix2Pix, i.e., instruction-guided video editing (see lower left).
25
- Results are temporally consistent and closely follow the guidance and textual prompts.</em>
26
- </p>
27
-
28
- ## News
29
-
30
- * [03/23/2023] Paper [Text2Video-Zero](https://arxiv.org/abs/2303.13439) released!
31
- * [03/25/2023] The [first version](https://huggingface.co/spaces/PAIR/Text2Video-Zero) of our huggingface demo (containing `zero-shot text-to-video generation` and `Video Instruct Pix2Pix`) released!
32
- * [03/27/2023] The [full version](https://huggingface.co/spaces/PAIR/Text2Video-Zero) of our huggingface demo released! Now also included: `text and pose conditional video generation`, `text and canny-edge conditional video generation`, and
33
- `text, canny-edge and dreambooth conditional video generation`.
34
- * [03/28/2023] Code for all our generation methods released! We added a new low-memory setup. Minimum required GPU VRAM is currently **12 GB**. It will be further reduced in the upcoming releases.
35
- * [03/29/2023] Improved [Huggingface demo](https://huggingface.co/spaces/PAIR/Text2Video-Zero)! (i) For text-to-video generation, **any base model for stable diffusion** and **any dreambooth model** hosted on huggingface can now be loaded! (ii) We improved the quality of Video Instruct-Pix2Pix. (iii) We added two longer examples for Video Instruct-Pix2Pix.
36
- * [03/30/2023] New code released! It includes all improvements of our latest huggingface iteration. See the news update from `03/29/2023`. In addition, generated videos (text-to-video) can have **arbitrary length**.
37
-
38
-
39
- ## Contribute
40
- We are on a journey to democratize AI and empower the creativity of everyone, and we believe Text2Video-Zero is a great research direction to unleash the zero-shot video generation and editing capacity of the amazing text-to-image models!
41
-
42
- To achieve this goal, all contributions are welcome. Please check out these external implementations and extensions of Text2Video-Zero. We thank the authors for their efforts and contributions:
43
- * https://github.com/JiauZhang/Text2Video-Zero
44
- * https://github.com/camenduru/text2video-zero-colab
45
- * https://github.com/SHI-Labs/Text2Video-Zero-sd-webui
46
-
47
-
48
-
49
- ## Setup
50
-
51
-
52
- 1. Clone this repository and enter the directory:
53
-
54
- ```shell
55
- git clone https://github.com/Picsart-AI-Research/Text2Video-Zero.git
56
- cd Text2Video-Zero/
57
- ```
58
- 2. Install requirements using Python 3.9 and CUDA >= 11.6:
59
- ```shell
60
- virtualenv --system-site-packages -p python3.9 venv
61
- source venv/bin/activate
62
- pip install -r requirements.txt
63
- ```
64
-
65
-
66
- <!--- Installing [xformers](https://github.com/facebookresearch/xformers) is highly recommended for more efficiency and speed on GPUs.
67
-
68
- ### Weights
69
-
70
- #### Text-To-Video with Pose Guidance
71
-
72
- Download the pose model weights used in [ControlNet](https://arxiv.org/abs/2302.05543):
73
- ```shell
74
- wget -P annotator/ckpts https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/hand_pose_model.pth
75
- wget -P annotator/ckpts https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/body_pose_model.pth
76
- ```
77
-
78
-
79
- <!---
80
- #### Text-To-Video
81
- Any [Stable Diffusion](https://arxiv.org/abs/2112.10752) v1.4 model weights in huggingface format can be used and must be placed in `models/text-to-video`.
82
- For instance:
83
-
84
- ```shell
85
- git lfs install
86
- git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 model_weights
87
- mv model_weights models/text-to-video
88
- ```
89
-
90
- #### Video Instruct-Pix2Pix
91
- From [Instruct-Pix2Pix](https://arxiv.org/pdf/2211.09800.pdf) download pretrained model files:
92
- ```shell
93
- git lfs install
94
- git clone https://huggingface.co/timbrooks/instruct-pix2pix models/instruct-pix2pix
95
- ```
96
-
97
- #### Text-To-Video with Pose Guidance
98
- From [ControlNet](https://arxiv.org/abs/2302.05543), download the open pose model file:
99
- ```shell
100
- mkdir -p models/control
101
- wget -P models/control https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_openpose.pth
102
- ```
103
- #### Text-To-Video with Edge Guidance
104
- From [ControlNet](https://arxiv.org/abs/2302.05543), download the Canny edge model file:
105
- ```shell
106
- mkdir -p models/control
107
- wget -P models/control https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_canny.pth
108
- ```
109
-
110
-
111
- ### Weights
112
-
113
-
114
- #### Text-To-Video with Edge Guidance and Dreambooth
115
-
116
-
117
-
118
- We provide already prepared model files derived from CIVITAI for `anime` (keyword `1girl`), `arcane style` (keyword `arcane style`), `avatar` (keyword `avatar style`), and `gta-5 style` (keyword `gtav style`).
119
- --->
120
-
121
- <!---
122
- To this end, download the model files from [google drive](https://drive.google.com/drive/folders/1uwXNjJ-4Ws6pqyjrIWyVPWu_u4aJrqt8?usp=share_link) and extract them into `models/control_db/`.
123
- --->
124
-
125
-
126
-
127
- ## Inference API
128
-
129
- To run inference, create an instance of the `Model` class
130
- ```python
131
- import torch
132
- from model import Model
133
-
134
- model = Model(device = "cuda", dtype = torch.float16)
135
- ```
136
-
137
- ---
138
-
139
-
140
- ### Text-To-Video
141
- To directly call our text-to-video generator, run this python command, which stores the result in `./text2video_A_horse_galloping_on_a_street.mp4`:
142
- ```python
143
- prompt = "A horse galloping on a street"
144
- params = {"t0": 44, "t1": 47 , "motion_field_strength_x" : 12, "motion_field_strength_y" : 12, "video_length": 8}
145
-
146
- out_path, fps = f"./text2video_{prompt.replace(' ','_')}.mp4", 4
147
- model.process_text2video(prompt, fps = fps, path = out_path, **params)
148
- ```
149
-
150
- To use a different stable diffusion base model run this python command:
151
- ```python
152
- from hf_utils import get_model_list
153
- model_list = get_model_list()
154
- for idx, name in enumerate(model_list):
155
-     print(idx, name)
156
- idx = int(input("Select the model by the listed number: ")) # select the model of your choice
157
- model.process_text2video(prompt, model_name = model_list[idx], fps = fps, path = out_path, **params)
158
- ```
159
-
160
-
161
- #### Hyperparameters (Optional)
162
-
163
- You can define the following hyperparameters (a usage sketch follows the list):
164
- * **Motion field strength**: `motion_field_strength_x` = $\delta_x$ and `motion_field_strength_y` = $\delta_y$ (see our paper, Sect. 3.3.1). Default: `motion_field_strength_x=motion_field_strength_y=12`.
165
- * $T$ and $T'$ (see our paper, Sect. 3.3.1). Define values `t0` and `t1` in the range `{0,...,50}`. Default: `t0=44`, `t1=47` (DDIM steps). Corresponds to timesteps `881` and `941`, respectively.
166
- * **Video length**: Define the number of frames `video_length` to be generated. Default: `video_length=8`.
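
For reference, here is a minimal sketch that passes these hyperparameters explicitly. The keyword names follow the list above; the values and output path are illustrative, not recommended settings.

```python
import torch
from model import Model

# `Model` as introduced in the Inference API section above.
model = Model(device="cuda", dtype=torch.float16)

prompt = "A horse galloping on a street"
# Override the defaults listed above; values here are illustrative only.
model.process_text2video(
    prompt,
    t0=44,
    t1=47,                         # DDIM steps, each in {0,...,50}
    motion_field_strength_x=12,    # delta_x
    motion_field_strength_y=12,    # delta_y
    video_length=16,               # number of generated frames
    fps=4,
    path="./text2video_custom_hyperparameters.mp4",
)
```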
167
-
168
-
169
- ---
170
-
171
- ### Text-To-Video with Pose Control
172
- To directly call our text-to-video generator with pose control, run this python command:
173
- ```python
174
- prompt = 'an astronaut dancing in outer space'
175
- motion_path = '__assets__/poses_skeleton_gifs/dance1_corr.mp4'
176
- out_path = f"./text2video_pose_guidance_{prompt.replace(' ','_')}.gif"
177
- model.process_controlnet_pose(motion_path, prompt=prompt, save_path=out_path)
178
- ```
179
-
180
-
181
- ---
182
-
183
- ### Text-To-Video with Edge Control
184
- To directly call our text-to-video generator with edge control, run this python command:
185
- ```python
186
- prompt = 'oil painting of a deer, a high-quality, detailed, and professional photo'
187
- video_path = '__assets__/canny_videos_mp4/deer.mp4'
188
- out_path = f'./text2video_edge_guidance_{prompt}.mp4'
189
- model.process_controlnet_canny(video_path, prompt=prompt, save_path=out_path)
190
- ```
191
-
192
- #### Hyperparameters
193
-
194
- You can define the following hyperparameters for Canny edge detection:
195
- * **low threshold**. Define value `low_threshold` in the range $(0, 255)$. Default: `low_threshold=100`.
196
- * **high threshold**. Define value `high_threshold` in the range $(0, 255)$. Default: `high_threshold=200`. Make sure that `high_threshold` > `low_threshold`.
197
-
198
- You can pass these hyperparameters as arguments to `model.process_controlnet_canny`, as in the sketch below.
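
A sketch that sets both thresholds explicitly; the keyword names are assumed to match the hyperparameter names listed above, and the threshold values and output path are illustrative.

```python
import torch
from model import Model

model = Model(device="cuda", dtype=torch.float16)

prompt = 'oil painting of a deer, a high-quality, detailed, and professional photo'
video_path = '__assets__/canny_videos_mp4/deer.mp4'
model.process_controlnet_canny(
    video_path,
    prompt=prompt,
    low_threshold=50,      # in (0, 255)
    high_threshold=150,    # in (0, 255) and greater than low_threshold
    save_path='./text2video_edge_guidance_custom_thresholds.mp4',
)
```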
199
-
200
- ---
201
-
202
-
203
- ### Text-To-Video with Edge Guidance and Dreambooth specialization
204
- Load a dreambooth model, then proceed as described in `Text-To-Video with Edge Guidance`:
205
- ```python
206
-
207
- prompt = 'your prompt'
208
- video_path = 'path/to/your/video'
209
- dreambooth_model_path = 'path/to/your/dreambooth/model'
210
- out_path = f'./text2video_edge_db_{prompt}.gif'
211
- model.process_controlnet_canny_db(dreambooth_model_path, video_path, prompt=prompt, save_path=out_path)
212
- ```
213
-
214
- The value `video_path` can be the path to an `mp4` file. To use one of the example videos provided, set `video_path="woman1"`, `video_path="woman2"`, `video_path="woman3"`, or `video_path="man1"`.
215
-
216
-
217
- The value `dreambooth_model_path` can either be a link to a diffuser model file or the name of one of the dreambooth models provided. To this end, set `dreambooth_model_path = "Anime DB"`, `dreambooth_model_path = "Avatar DB"`, `dreambooth_model_path = "GTA-5 DB"`, or `dreambooth_model_path = "Arcane DB"`. The corresponding keywords are: `1girl` (for `Anime DB`), `arcane style` (for `Arcane DB`), `avatar style` (for `Avatar DB`), and `gtav style` (for `GTA-5 DB`).
218
-
219
-
220
- To load custom Dreambooth models, [transfer](https://github.com/lllyasviel/ControlNet/discussions/12) control to the custom model and [convert](https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py) it to diffuser format. Then, the value of `dreambooth_model_path` must link to the folder containing the diffuser file. Dreambooth models can be obtained, for instance, from [CIVITAI](https://civitai.com).
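
As an illustration, a sketch that points `dreambooth_model_path` at a locally converted diffusers folder; the folder name and prompt below are hypothetical placeholders.

```python
import torch
from model import Model

model = Model(device="cuda", dtype=torch.float16)

# Hypothetical folder produced by the diffusers conversion script mentioned above.
dreambooth_model_path = 'models/control_db/my_custom_style'
prompt = 'your prompt containing the dreambooth keyword'
video_path = 'woman1'    # one of the provided example videos, or a path to an mp4 file
model.process_controlnet_canny_db(
    dreambooth_model_path,
    video_path,
    prompt=prompt,
    save_path='./text2video_edge_db_custom.gif',
)
```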
221
-
222
-
223
-
224
- ---
225
-
226
-
227
-
228
- ### Video Instruct-Pix2Pix
229
-
230
- To perform pix2pix video editing, run this python command:
231
- ```python
232
- prompt = 'make it Van Gogh Starry Night'
233
- video_path = '__assets__/pix2pix video/camel.mp4'
234
- out_path = f'./video_instruct_pix2pix_{prompt}.mp4'
235
- model.process_pix2pix(video_path, prompt=prompt, save_path=out_path)
236
- ```
237
-
238
- ---
239
-
240
- ### Low Memory Inference
241
- Each of the interfaces introduced above can be run in a low-memory setup. In the minimal setup, a GPU with **12 GB VRAM** is sufficient.
242
-
243
- To reduce memory usage, add `chunk_size=k` as an additional parameter when calling any of the inference APIs defined above. The integer value `k` must be in the range `{2,...,video_length}`. It defines the number of frames that are processed at once (without any loss in quality). The lower the value, the less memory is needed.
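
A sketch of low-memory text-to-video generation; the chunk size and output path are illustrative.

```python
import torch
from model import Model

model = Model(device="cuda", dtype=torch.float16)

prompt = "A horse galloping on a street"
# Process only 2 frames at a time to reduce peak VRAM usage.
model.process_text2video(
    prompt,
    video_length=8,
    chunk_size=2,          # any integer in {2,...,video_length}
    fps=4,
    path="./text2video_low_memory.mp4",
)
```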
244
-
245
- When using the gradio app, set `chunk_size` in the `Advanced options`.
246
-
247
-
248
- We plan to release a new version soon that further reduces memory usage.
249
-
250
-
251
  ---
252
-
253
-
254
- ### Ablation Study
255
- To replicate the ablation study, add the following parameters when calling the inference APIs defined above (see the sketch after this list).
256
- * To deactivate `cross-frame attention`: Add `use_cf_attn=False` to the parameter list.
257
- * To deactivate enriching latent codes with `motion dynamics`: Add `use_motion_field=False` to the parameter list.
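
A sketch of an ablation run using the flags listed above, shown here with the text-to-video interface; the output path is illustrative.

```python
import torch
from model import Model

model = Model(device="cuda", dtype=torch.float16)

prompt = "A horse galloping on a street"
model.process_text2video(
    prompt,
    use_cf_attn=False,        # deactivate cross-frame attention
    use_motion_field=False,   # deactivate motion-dynamics enrichment of latent codes
    fps=4,
    path="./text2video_ablation.mp4",
)
```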
258
-
259
-
260
- Note: Adding `smooth_bg=True` activates background smoothing. However, our code does not include the salient object detector necessary to run that code.
261
-
262
-
263
  ---
264
 
265
- ## Inference using Gradio
266
- From the project root folder, run this shell command:
267
- ```shell
268
- python app.py
269
- ```
270
-
271
- Then access the app [locally](http://127.0.0.1:7860) with a browser.
272
-
273
- To access the app remotely, run this shell command:
274
- ```shell
275
- python app.py --public_access
276
- ```
277
- For security information about public access, we refer to the documentation of [gradio](https://gradio.app/sharing-your-app/#security-and-file-access).
278
-
279
-
280
-
281
- ## Results
282
-
283
- ### Text-To-Video
284
- <table class="center">
285
- <tr>
286
- <td><img src="__assets__/github/results/t2v/cat_running.gif" raw=true></td>
287
- <td><img src="__assets__/github/results/t2v/playing.gif"></td>
288
- <td><img src="__assets__/github/results/t2v/running.gif"></td>
289
- <td><img src="__assets__/github/results/t2v/skii.gif"></td>
290
- </tr>
291
- <tr>
292
- <td width=25% align="center">"A cat is running on the grass"</td>
293
- <td width=25% align="center">"A panda is playing guitar on times square"</td>
294
- <td width=25% align="center">"A man is running in the snow"</td>
295
- <td width=25% align="center">"An astronaut is skiing down the hill"</td>
296
- </tr>
297
-
298
- <tr>
299
- <td><img src="__assets__/github/results/t2v/panda_surfing.gif" raw=true></td>
300
- <td><img src="__assets__/github/results/t2v/bear_dancing.gif"></td>
301
- <td><img src="__assets__/github/results/t2v/bicycle.gif"></td>
302
- <td><img src="__assets__/github/results/t2v/horse_galloping.gif"></td>
303
- </tr>
304
- <tr>
305
- <td width=25% align="center">"A panda surfing on a wakeboard"</td>
306
- <td width=25% align="center">"A bear dancing on times square"</td>
307
- <td width=25% align="center">"A man is riding a bicycle in the sunshine"</td>
308
- <td width=25% align="center">"A horse galloping on a street"</td>
309
- </tr>
310
-
311
- <tr>
312
- <td><img src="__assets__/github/results/t2v/tiger_walking.gif" raw=true></td>
313
- <td><img src="__assets__/github/results/t2v/panda_surfing_2.gif"></td>
314
- <td><img src="__assets__/github/results/t2v/horse_galloping_2.gif"></td>
315
- <td><img src="__assets__/github/results/t2v/cat_walking.gif"></td>
316
- </tr>
317
- <tr>
318
- <td width=25% align="center">"A tiger walking alone down the street"</td>
319
- <td width=25% align="center">"A panda surfing on a wakeboard"</td>
320
- <td width=25% align="center">"A horse galloping on a street"</td>
321
- <td width=25% align="center">"A cute cat running in a beatiful meadow"</td>
322
- </tr>
323
-
324
-
325
- <tr>
326
- <td><img src="__assets__/github/results/t2v/horse_galloping_3.gif" raw=true></td>
327
- <td><img src="__assets__/github/results/t2v/panda_walking.gif"></td>
328
- <td><img src="__assets__/github/results/t2v/dog_walking.gif"></td>
329
- <td><img src="__assets__/github/results/t2v/astronaut.gif"></td>
330
- </tr>
331
- <tr>
332
- <td width=25% align="center">"A horse galloping on a street"</td>
333
- <td width=25% align="center">"A panda walking alone down the street"</td>
334
- <td width=25% align="center">"A dog is walking down the street"</td>
335
- <td width=25% align="center">"An astronaut is waving his hands on the moon"</td>
336
- </tr>
337
-
338
-
339
- </table>
340
-
341
- ### Text-To-Video with Pose Guidance
342
-
343
-
344
- <table class="center">
345
- <tr>
346
- <td><img src="__assets__/github/results/pose2v/img_bot_left.gif" raw=true><img src="__assets__/github/results/pose2v/pose_bot_left.gif"></td>
347
- <td><img src="__assets__/github/results/pose2v/img_bot_right.gif" raw=true><img src="__assets__/github/results/pose2v/pose_bot_right.gif"></td>
348
- <td><img src="__assets__/github/results/pose2v/img_top_left.gif" raw=true><img src="__assets__/github/results/pose2v/pose_top_left.gif"></td>
349
- <td><img src="__assets__/github/results/pose2v/img_top_right.gif" raw=true><img src="__assets__/github/results/pose2v/pose_top_right.gif"></td>
350
- </tr>
351
- <tr>
352
- <td width=25% align="center">"A bear dancing on the concrete"</td>
353
- <td width=25% align="center">"An alien dancing under a flying saucer"</td>
354
- <td width=25% align="center">"A panda dancing in Antarctica"</td>
355
- <td width=25% align="center">"An astronaut dancing in the outer space"</td>
356
-
357
- </tr>
358
- </table>
359
-
360
- ### Text-To-Video with Edge Guidance
361
-
362
-
363
-
364
- <table class="center">
365
- <tr>
366
- <td><img src="__assets__/github/results/edge2v/butterfly.gif" raw=true><img src="__assets__/github/results/edge2v/butterfly_edge.gif"></td>
367
- <td><img src="__assets__/github/results/edge2v/head.gif" raw=true><img src="__assets__/github/results/edge2v/head_edge.gif"></td>
368
- <td><img src="__assets__/github/results/edge2v/jelly.gif" raw=true><img src="__assets__/github/results/edge2v/jelly_edge.gif"></td>
369
- <td><img src="__assets__/github/results/edge2v/mask.gif" raw=true><img src="__assets__/github/results/edge2v/mask_edge.gif"></td>
370
- </tr>
371
- <tr>
372
- <td width=25% align="center">"White butterfly"</td>
373
- <td width=25% align="center">"Beautiful girl"</td>
374
- <td width=25% align="center">"A jellyfish"</td>
375
- <td width=25% align="center">"beautiful girl halloween style"</td>
376
- </tr>
377
-
378
- <tr>
379
- <td><img src="__assets__/github/results/edge2v/fox.gif" raw=true><img src="__assets__/github/results/edge2v/fix_edge.gif"></td>
380
- <td><img src="__assets__/github/results/edge2v/head_2.gif" raw=true><img src="__assets__/github/results/edge2v/head_2_edge.gif"></td>
381
- <td><img src="__assets__/github/results/edge2v/santa.gif" raw=true><img src="__assets__/github/results/edge2v/santa_edge.gif"></td>
382
- <td><img src="__assets__/github/results/edge2v/dear.gif" raw=true><img src="__assets__/github/results/edge2v/dear_edge.gif"></td>
383
- </tr>
384
- <tr>
385
- <td width=25% align="center">"Wild fox is walking"</td>
386
- <td width=25% align="center">"Oil painting of a beautiful girl close-up"</td>
387
- <td width=25% align="center">"A santa claus"</td>
388
- <td width=25% align="center">"A deer"</td>
389
- </tr>
390
-
391
- </table>
392
-
393
-
394
- ### Text-To-Video with Edge Guidance and Dreambooth specialization
395
-
396
-
397
-
398
-
399
- <table class="center">
400
- <tr>
401
- <td><img src="__assets__/github/results/canny_db/anime_style.gif" raw=true><img src="__assets__/github/results/canny_db/anime_edge.gif"></td>
402
- <td><img src="__assets__/github/results/canny_db/arcane_style.gif" raw=true><img src="__assets__/github/results/canny_db/arcane_edge.gif"></td>
403
- <td><img src="__assets__/github/results/canny_db/gta-5_man_style.gif" raw=true><img src="__assets__/github/results/canny_db/gta-5_man_edge.gif"></td>
404
- <td><img src="__assets__/github/results/canny_db/img_bot_right.gif" raw=true><img src="__assets__/github/results/canny_db/edge_bot_right.gif"></td>
405
- </tr>
406
- <tr>
407
- <td width=25% align="center">"anime style"</td>
408
- <td width=25% align="center">"arcane style"</td>
409
- <td width=25% align="center">"gta-5 man"</td>
410
- <td width=25% align="center">"avatar style"</td>
411
- </tr>
412
-
413
- </table>
414
-
415
-
416
- ### Video Instruct-Pix2Pix
417
-
418
- <table class="center">
419
- <tr>
420
- <td><img src="__assets__/github/results/Video_InstructPix2Pix/frame_1/up_left.gif" raw=true><img src="__assets__/github/results/Video_InstructPix2Pix/frame_1/bot_left.gif"></td>
421
- <td><img src="__assets__/github/results/Video_InstructPix2Pix/frame_1/up_mid.gif" raw=true><img src="__assets__/github/results/Video_InstructPix2Pix/frame_1/bot_mid.gif"></td>
422
- <td><img src="__assets__/github/results/Video_InstructPix2Pix/frame_1/up_right.gif" raw=true><img src="__assets__/github/results/Video_InstructPix2Pix/frame_1/bot_right.gif"></td>
423
- </tr>
424
- <tr>
425
- <td width=25% align="center">"Replace man with chimpanze"</td>
426
- <td width=25% align="center">"Make it Van Gogh Starry Night style"</td>
427
- <td width=25% align="center">"Make it Picasso style"</td>
428
- </tr>
429
-
430
- <tr>
431
- <td><img src="__assets__/github/results/Video_InstructPix2Pix/frame_2/up_left.gif" raw=true><img src="__assets__/github/results/Video_InstructPix2Pix/frame_2/bot_left.gif"></td>
432
- <td><img src="__assets__/github/results/Video_InstructPix2Pix/frame_2/up_mid.gif" raw=true><img src="__assets__/github/results/Video_InstructPix2Pix/frame_2/bot_mid.gif"></td>
433
- <td><img src="__assets__/github/results/Video_InstructPix2Pix/frame_2/up_right.gif" raw=true><img src="__assets__/github/results/Video_InstructPix2Pix/frame_2/bot_right.gif"></td>
434
- </tr>
435
- <tr>
436
- <td width=25% align="center">"Make it Expressionism style"</td>
437
- <td width=25% align="center">"Make it night"</td>
438
- <td width=25% align="center">"Make it autumn"</td>
439
- </tr>
440
- </table>
441
-
442
-
443
- ## Related Links
444
-
445
- * [High-Resolution Image Synthesis with Latent Diffusion Models (a.k.a. LDM & Stable Diffusion)](https://ommer-lab.com/research/latent-diffusion-models/)
446
- * [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://www.timothybrooks.com/instruct-pix2pix/)
447
- * [Adding Conditional Control to Text-to-Image Diffusion Models (a.k.a ControlNet)](https://github.com/lllyasviel/ControlNet)
448
- * [Diffusers](https://github.com/huggingface/diffusers)
449
-
450
- ## License
451
- Our code is published under the CreativeML Open RAIL-M license. The license provided in this repository applies to all additions and contributions we make upon the original stable diffusion code. The original stable diffusion code is under the CreativeML Open RAIL-M license, which can be found [here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
452
-
453
-
454
- ## BibTeX
455
- If you use our work in your research, please cite our publication:
456
- ```
457
- @article{text2video-zero,
458
- title={Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators},
459
- author={Khachatryan, Levon and Movsisyan, Andranik and Tadevosyan, Vahram and Henschel, Roberto and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
460
- journal={arXiv preprint arXiv:2303.13439},
461
- year={2023}
462
- }
463
- ```
1
  ---
2
+ title: Text2Video-Zero
3
+ emoji: 🚀
4
+ colorFrom: green
5
+ colorTo: blue
6
+ sdk: gradio
7
+ sdk_version: 3.23.0
8
+ app_file: app.py
9
+ pinned: false
10
+ pipeline_tag: text-to-video
 
 
11
  ---
12
 
13
+ Paper: https://arxiv.org/abs/2303.13439