Update README.md
## Latte: Latent Diffusion Transformer for Video Generation

This repo contains pre-trained weights on FaceForensics, SkyTimelapse, UCF101, and Taichi-HD for our paper exploring latent diffusion models with transformers (Latte). You can find more visualizations on our [project page](https://maxin-cn.github.io/latte_project/).

If you want the text-to-video generation pre-trained weights, please refer to [here](https://huggingface.co/maxin-cn/LatteT2V).

## News

- (🔥 New) May 23, 2024. 🔥 **Latte-1** for text-to-video generation is released! You can download the pre-trained model [here](https://huggingface.co/maxin-cn/LatteT2V/tree/main/transformer_v1). Latte-1 also supports text-to-image generation; please run `bash sample/t2i.sh`.

- (🔥 New) Mar. 20, 2024. 🔥 An updated LatteT2V model is coming soon, stay tuned!

- (🔥 New) Feb. 24, 2024. 🔥 We are very grateful that researchers and developers like our work. We will continue to update our LatteT2V model, hoping that our efforts can help the community develop. Our Latte [discord](https://discord.gg/RguYqhVU92) channel has been created for discussions, and contributors are welcome.

- (🔥 New) Jan. 9, 2024. 🔥 An updated LatteT2V model initialized with [PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha) is released; the checkpoint can be found [here](https://huggingface.co/maxin-cn/LatteT2V/tree/main/transformer).

- (🔥 New) Oct. 31, 2023. 🔥 The training and inference code is released. All checkpoints (including FaceForensics, SkyTimelapse, UCF101, and Taichi-HD) can be found [here](https://huggingface.co/maxin-cn/Latte/tree/main). In addition, the LatteT2V inference code is provided.