dylanebert committed on
Commit 9a06fc4
1 Parent(s): 8b318ef

initial commit

Files changed (2):
  1. .gitignore +3 -1
  2. README.md +20 -4
.gitignore CHANGED
@@ -5,4 +5,6 @@
 
 weights*
 models
-sd-v2*
+sd-v2*
+
+venv/
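The patterns added above behave like shell globs: `sd-v2*` keeps the original downloaded checkpoints out of the repo and `venv/` ignores a local virtual environment. A quick illustrative check with Python's `fnmatch` (git's actual matching rules differ slightly, e.g. for directory suffixes like `venv/`, so this only sketches the intent; the filenames are hypothetical):

```python
from fnmatch import fnmatch

# Illustrative check of the .gitignore globs added above.
assert fnmatch("sd-v2.1-base-4view.pt", "sd-v2*")  # original checkpoints stay untracked
assert fnmatch("weights_final.bin", "weights*")    # matches the existing weights* rule
assert not fnmatch("model.bin", "sd-v2*")          # unrelated files are unaffected
```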
README.md CHANGED
@@ -1,13 +1,27 @@
+---
+license: openrail
+pipeline_tag: image-to-3d
+---
+
+This is a duplicate of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).
+
+It is hosted here for the purpose of persistence and reproducibility for the ML for 3D course.
+
+Original model card below.
+
+---
+
 # MVDream-diffusers
 
 A **unified** diffusers implementation of [MVDream](https://github.com/bytedance/MVDream) and [ImageDream](https://github.com/bytedance/ImageDream).
 
 We provide converted `fp16` weights on huggingface:
-* [MVDream](https://huggingface.co/ashawkey/mvdream-sd2.1-diffusers)
-* [ImageDream](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers)
 
+- [MVDream](https://huggingface.co/ashawkey/mvdream-sd2.1-diffusers)
+- [ImageDream](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers)
 
 ### Install
+
 ```bash
 # dependency
 pip install -r requirements.txt
@@ -27,6 +41,7 @@ python run_imagedream.py data/anya_rgba.png
 ### Convert weights
 
 MVDream:
+
 ```bash
 # download original ckpt (we only support the SD 2.1 version)
 mkdir models
@@ -40,6 +55,7 @@ python convert_mvdream_to_diffusers.py --checkpoint_path models/sd-v2.1-base-4vi
 ```
 
 ImageDream:
+
 ```bash
 # download original ckpt (we only support the pixel-controller version)
 cd models
@@ -53,7 +69,7 @@ python convert_mvdream_to_diffusers.py --checkpoint_path models/sd-v2.1-base-4vi
 
 ### Acknowledgement
 
-* The original papers:
+- The original papers:
 ```bibtex
 @article{shi2023MVDream,
 author = {Shi, Yichun and Wang, Peng and Ye, Jianglong and Mai, Long and Li, Kejie and Yang, Xiao},
@@ -68,4 +84,4 @@ python convert_mvdream_to_diffusers.py --checkpoint_path models/sd-v2.1-base-4vi
 year={2023}
 }
 ```
-* This codebase is modified from [mvdream-hf](https://github.com/KokeCacao/mvdream-hf).
+- This codebase is modified from [mvdream-hf](https://github.com/KokeCacao/mvdream-hf).
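The README above links the converted `fp16` checkpoints by repo id. A minimal sketch of how one might point `diffusers` at them; the `load_kwargs` helper is hypothetical, and whether these repos need `trust_remote_code` (i.e. ship a custom pipeline class) is an assumption not confirmed by this commit:

```python
# Repo ids taken from the README above.
MVDREAM_REPO = "ashawkey/mvdream-sd2.1-diffusers"
IMAGEDREAM_REPO = "ashawkey/imagedream-ipmv-diffusers"

def load_kwargs(repo_id: str) -> dict:
    """Hypothetical helper: assemble DiffusionPipeline.from_pretrained() arguments."""
    return {
        "pretrained_model_name_or_path": repo_id,
        "trust_remote_code": True,  # assumption: the repos ship a custom pipeline
    }

# Usage (downloads the weights, so it is left commented out):
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(**load_kwargs(MVDREAM_REPO))
```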