Higher resolution and speed?
Hi.
The demo works great when adapted to Google Colab.
Is there a way to increase the resolution of the output images? It returns 512x512 even when the input is 1024x1024.
Also, the generation is pretty slow. Is there a way to increase the speed?
Currently there's an auto-conversion from any size to 512x512. That is by design, because the base model used (Stable Diffusion 2.1) does not work well at higher resolutions.
We will soon release XLEDITS, a version for Stable Diffusion XL, but of course there will be an even steeper trade-off with speed. Regarding the Colab adaptation, it's awesome that you've made it! Would you be willing to share it with the community?
Great, let's wait for the SDXL version then.
My Colab version just runs your Gradio app in Colab.
I only need to add some automatic editing of app.py from a Colab cell to activate the public link of the Gradio app, and then I will share it; see the sketch below.
I have already pasted the simple code in an answer to the request for a Colab demo here:
https://huggingface.co/spaces/editing-images/leditsplusplus/discussions/6#656bb81072c19de7235379c4
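A minimal sketch of that Colab cell, assuming app.py calls demo.launch() with no arguments (the exact launch line in the real app.py is an assumption; adjust the sed pattern to match the actual file):

```python
# Colab cell: rewrite the Gradio launch call to create a public share link,
# then start the app. Assumes app.py contains the literal text `demo.launch()`.
!sed -i 's/demo.launch()/demo.launch(share=True)/' app.py
!python app.py
```

Gradio's share=True flag is what generates the public *.gradio.live URL, so no other changes to the Space code should be needed.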
We are currently working on an integration into diffusers.
But if you are eager to use the SDXL version right now, feel free to install diffusers from here: pip install git+https://github.com/ml-research/diffusers@ledits_pp
Then you can use the LEditsPPPipelineStableDiffusionXL pipeline; a sketch follows below.
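A minimal usage sketch, assuming that branch exposes the same interface as the later diffusers integration (the model ID, prompts, and parameter values below are illustrative, not prescriptive):

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL
from diffusers.utils import load_image

# Load all pipeline components in a single dtype on the GPU.
pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = load_image("input.png").convert("RGB").resize((1024, 1024))

# LEDITS++ first inverts the input image, then applies the semantic edits.
_ = pipe.invert(image=image, num_inversion_steps=50, skip=0.2)
edited_image = pipe(
    editing_prompt=["glasses", "sunglasses"],
    reverse_editing_direction=[True, False],  # remove glasses, add sunglasses
    edit_guidance_scale=[5.0, 7.5],
    edit_threshold=[0.9, 0.9],
).images[0]
edited_image.save("edited.png")
```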
How about using SDXL Turbo or an LCM model to speed it up?
Yes, I will try the SDXL LEDITS pipeline; maybe SDXL Turbo will work.
I get this error when trying to run it in Colab; it occurs once the inference steps are completed:
RuntimeError                              Traceback (most recent call last)
in <cell line: 26>()
     24 )
     25
---> 26 edited_image = pipe(
     27     editing_prompt=["glasses","sunglasses"],
     28     reverse_editing_direction=[True,False],

8 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    454                             weight, bias, self.stride,
    455                             _pair(0), self.dilation, self.groups)
--> 456         return F.conv2d(input, weight, bias, self.stride,
    457                         self.padding, self.dilation, self.groups)
    458
RuntimeError: Input type (c10::Half) and bias type (float) should be the same
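That error means the input tensors are half precision (c10::Half) while at least one module's bias is still float32. A minimal sketch of one possible fix, assuming the pipeline was loaded without a consistent dtype (the model ID is illustrative): load or cast everything to a single precision.

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL

# Passing an explicit torch_dtype keeps every component in one precision,
# so conv inputs, weights, and biases all match. (Model ID is illustrative.)
pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Or cast an already-loaded pipeline to a single dtype:
# pipe = pipe.to("cuda", torch.float16)
```

Running everything in torch.float32 instead also avoids the mismatch, if the GPU has enough memory.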