End of training
- README.md +77 -0
- checkpoint-1000/optimizer.bin +3 -0
- checkpoint-1000/pytorch_lora_weights.safetensors +3 -0
- checkpoint-1000/random_states_0.pkl +3 -0
- checkpoint-1000/scheduler.bin +3 -0
- checkpoint-500/optimizer.bin +3 -0
- checkpoint-500/pytorch_lora_weights.safetensors +3 -0
- checkpoint-500/random_states_0.pkl +3 -0
- checkpoint-500/scheduler.bin +3 -0
- pytorch_lora_weights.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,77 @@
---
base_model: THUDM/CogVideoX-5b
library_name: diffusers
license: other
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- cogvideox
- cogvideox-diffusers
- template:sd-lora
widget: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# CogVideoX LoRA Finetune

<Gallery />

## Model description

This is a LoRA finetune of the CogVideoX model `THUDM/CogVideoX-5b`.

The model was trained using [CogVideoX Factory](https://github.com/a-r-r-o-w/cogvideox-factory) - a repository of memory-optimized training scripts for the CogVideoX family of models built on [TorchAO](https://github.com/pytorch/ao) and [DeepSpeed](https://github.com/microsoft/DeepSpeed). The scripts were adapted from the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_lora.py).

## Download model

[Download LoRA](sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_5e-4/tree/main) in the Files & Versions tab.

## Usage

Requires the [🧨 Diffusers library](https://github.com/huggingface/diffusers) to be installed.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_5e-4", weight_name="pytorch_lora_weights.safetensors", adapter_name="cogvideox-lora")

# The adapter scale depends on the values used during training.
# In this case, we assume `--lora_alpha` was 32 and `--rank` was 64,
# giving a scale of 32 / 64 = 0.5. Lowering or raising the scale weakens
# or amplifies the LoRA's effect, up to a point beyond which the output
# may show no effect at all or degrade.
pipe.set_adapters(["cogvideox-lora"], [32 / 64])

# Replace the placeholder with a prompt matching the LoRA's training data.
video = pipe("<your-prompt>", guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.
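As a toy illustration of the `lora_alpha / rank` scale passed to `set_adapters`, the LoRA update on a single linear layer works out as follows. The tensors and dimensions below are made up purely for illustration; only the scale formula mirrors the snippet above:

```python
import torch

# LoRA adapts a frozen weight W via a low-rank update:
#   W' = W + (lora_alpha / rank) * (B @ A)
torch.manual_seed(0)
d_out, d_in, rank, lora_alpha = 128, 128, 64, 32

W = torch.randn(d_out, d_in)        # frozen base weight
A = torch.randn(rank, d_in) * 0.01  # LoRA down-projection
B = torch.zeros(d_out, rank)        # LoRA up-projection, zero-initialized

scale = lora_alpha / rank           # 32 / 64 = 0.5, as in the snippet above
W_adapted = W + scale * (B @ A)

# Because B starts at zero, the adapted weight initially equals the base
# weight; training moves B away from zero to inject the learned behavior.
print(torch.allclose(W_adapted, W))  # True
```

Passing a different scale to `set_adapters` simply rescales the `B @ A` term at inference time, which is why the effect can be dialed up or down without retraining.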

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE) and [here](https://huggingface.co/THUDM/CogVideoX-2b/blob/main/LICENSE).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
checkpoint-1000/optimizer.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5957cfe06648a719b36df258c20bb19dfa0ee27cded17bef73d8906dc0986f56
size 528765442
checkpoint-1000/pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de7d7e5f8d6f1a1213422327ccd63e1fa80c240a0289c47c429df42a4d34c104
size 264286184
checkpoint-1000/random_states_0.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e56b9a93e95f00a8f005254a3fdedae9b28c328c0ed3c7b98f93c42c63d5a17
size 16036
checkpoint-1000/scheduler.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:75d0a3c113e0e254478ca997e40a0ca3cd98a47b65c32b5517228868633eb773
size 1000
checkpoint-500/optimizer.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4068ccec0767dd769574d71b505516ec10d562bd598e6763ff211a16c3bc3636
size 528765442
checkpoint-500/pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c5bf0543d95f9c1b6692dc5f65c1564613afe1d2ebc043d44e2303a3745817e
size 264286184
checkpoint-500/random_states_0.pkl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a5be9851df666641eeced0c9fe90edfdecc803a7f5a2955cb062f8c343a24b5
size 16100
checkpoint-500/scheduler.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66a43edcdbf6545db5bd7aa3deec02de4fc9938b9ecb37d1801b628d038985d2
size 1000
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de7d7e5f8d6f1a1213422327ccd63e1fa80c240a0289c47c429df42a4d34c104
size 264286184
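The checkpoint and weight files in this commit are stored as Git LFS pointers (a `version` line, an `oid`, and a `size`) rather than the binaries themselves. A minimal sketch of reading such a pointer, assuming the three-field format shown above; the `parse_lfs_pointer` helper is illustrative, not part of any official tooling:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from checkpoint-1000/optimizer.bin above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:5957cfe06648a719b36df258c20bb19dfa0ee27cded17bef73d8906dc0986f56\n"
    "size 528765442\n"
)

info = parse_lfs_pointer(pointer)
print(info["size"])  # 528765442
```

The `size` field is the byte count of the real artifact, which is why cloning the repo without `git lfs pull` yields only these few-line stubs.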