Update README.md
README.md
```diff
@@ -16,6 +16,7 @@ This model is just optimized and converted to Intermediate Representation (IR) u
 
 We have FP16 and INT8 versions of the model. Please note currently only unet model is quantized to int8.
 
+Intended to be used with GIMP plugin [openvino-ai-plugins-gimp](https://github.com/intel/openvino-ai-plugins-gimp.git)
 
 ## Original Model Details
 - **Developed by:** Lvmin Zhang, Maneesh Agrawala
```
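For reference, since the GIMP plugin is intended to handle model loading and inference itself, here is a minimal sketch of opening the converted IR files directly with OpenVINO's Python API. The `.xml` file names are assumptions for illustration; check this repository's actual layout.

```python
# Minimal sketch: loading the converted IR with OpenVINO's Python API.
# The .xml file names below are assumptions; check the repository layout.
import openvino as ov

core = ov.Core()

# Per the note above, only the unet component is quantized to INT8;
# the remaining components ship as FP16 IR.
unet = core.compile_model("unet_int8.xml", "CPU")  # hypothetical file name

print(unet.inputs)  # inspect the expected input tensors
```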
```diff
@@ -82,6 +83,7 @@ Using the model to generate content that is cruel to individuals is a misuse of
 considerations.
 - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
 The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
+
 ### Bias
 While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
 Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
```
```diff
@@ -90,13 +92,7 @@ Texts and images from communities and cultures that use other languages are like
 This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
 ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
 
-### Safety Module
 
-The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
-This checker works by checking model outputs against known hard-coded NSFW concepts.
-The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
-Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
-The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
 
 ### Intel’s Human Rights Disclaimer:
 Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.
```
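The Safety Module section removed above described the Diffusers safety checker, which compares generated images against hard-coded NSFW concept embeddings after generation. For anyone who still wants that post-generation filtering outside the plugin, a minimal sketch follows; the `diffusers`/`transformers` imports and the CompVis checker weights are assumptions from the Diffusers ecosystem, not part of this repository.

```python
# Minimal sketch: applying Diffusers' safety checker to a generated image.
# Assumes the diffusers and transformers packages; not part of this repo.
import numpy as np
from PIL import Image
from transformers import CLIPImageProcessor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

repo = "CompVis/stable-diffusion-safety-checker"
checker = StableDiffusionSafetyChecker.from_pretrained(repo)
feature_extractor = CLIPImageProcessor.from_pretrained(repo)

pil_image = Image.new("RGB", (512, 512))  # stand-in for a generated image
clip_input = feature_extractor([pil_image], return_tensors="pt").pixel_values
np_images = np.asarray(pil_image, dtype=np.float32)[None] / 255.0

# Flagged images come back blacked out; has_nsfw holds one boolean per image.
checked, has_nsfw = checker(images=np_images, clip_input=clip_input)
print(has_nsfw)
```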