---
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
tags:
- text-to-image
license: creativeml-openrail-m
inference: true
---

## yujiepan/dreamshaper-8-lcm-openvino

This model applies [latent-consistency/lcm-lora-sdv1-5](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5)
to the base model [Lykon/dreamshaper-8](https://huggingface.co/Lykon/dreamshaper-8) and is converted to the OpenVINO **FP16** format.

#### Usage

```python
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

pipeline = OVStableDiffusionPipeline.from_pretrained(
    'yujiepan/dreamshaper-8-lcm-openvino',
    device='CPU',
)
prompt = 'cute dog typing at a laptop, 4k, details'
images = pipeline(prompt=prompt, num_inference_steps=8, guidance_scale=1.0).images
```
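
The pipeline returns standard PIL images, so the result can be saved directly (the output filename below is just an example):

```python
# Save the first generated image to disk (filename is illustrative)
images[0].save('output.png')
```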

![output image](./assets/cute-dog-typing-at-a-laptop-4k-details.png)

#### Scripts

The model was generated with the following code:

```python
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

base_model_id = "Lykon/dreamshaper-8"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"
save_torch_folder = './dreamshaper-8-lcm'
save_ov_folder = './dreamshaper-8-lcm-openvino'

# Load the base model and switch it to the LCM scheduler
torch_pipeline = AutoPipelineForText2Image.from_pretrained(
    base_model_id, torch_dtype=torch.float16, variant="fp16")
torch_pipeline.scheduler = LCMScheduler.from_config(
    torch_pipeline.scheduler.config)
# Load and fuse the LCM LoRA, then save the merged PyTorch pipeline
torch_pipeline.load_lora_weights(adapter_id)
torch_pipeline.fuse_lora()
torch_pipeline.save_pretrained(save_torch_folder)

# Export the merged pipeline to OpenVINO IR, cast to FP16 and save
ov_pipeline = OVStableDiffusionPipeline.from_pretrained(
    save_torch_folder,
    device='CPU',
    export=True,
)
ov_pipeline.half()
ov_pipeline.save_pretrained(save_ov_folder)
```
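
After export, a quick smoke test against the saved folder can confirm that the FP16 OpenVINO pipeline still generates images in a few LCM steps (a minimal sketch; the prompt, step count, and output path are illustrative):

```python
# Reload the exported FP16 OpenVINO pipeline and run a short generation
check_pipeline = OVStableDiffusionPipeline.from_pretrained(save_ov_folder, device='CPU')
image = check_pipeline(
    prompt='cute dog typing at a laptop, 4k, details',
    num_inference_steps=8,
    guidance_scale=1.0,
).images[0]
image.save('smoke-test.png')
```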