patrickvonplaten and echarlaix (HF staff) committed
Commit 4621659
1 Parent(s): 76d28af

fix-readme (#109)


- update readme (e3de54693ba9846e7ba3194ed987ce114291189d)


Co-authored-by: Ella Charlaix <[email protected]>

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -146,12 +146,12 @@ pip install optimum[openvino]
  To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
 
  ```diff
- - from diffusers import StableDiffusionPipeline
- + from optimum.intel import OVStableDiffusionPipeline
+ - from diffusers import StableDiffusionXLPipeline
+ + from optimum.intel import OVStableDiffusionXLPipeline
 
  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- - pipeline = StableDiffusionPipeline.from_pretrained(model_id)
- + pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)
+ - pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ + pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
  prompt = "A majestic lion jumping from a big stone at night"
  image = pipeline(prompt).images[0]
  ```
@@ -170,12 +170,12 @@ pip install optimum[onnxruntime]
  To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
 
  ```diff
- - from diffusers import StableDiffusionPipeline
- + from optimum.onnxruntime import ORTStableDiffusionPipeline
+ - from diffusers import StableDiffusionXLPipeline
+ + from optimum.onnxruntime import ORTStableDiffusionXLPipeline
 
  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- - pipeline = StableDiffusionPipeline.from_pretrained(model_id)
- + pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
+ - pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ + pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
  prompt = "A majestic lion jumping from a big stone at night"
  image = pipeline(prompt).images[0]
  ```