
How can we train it, or load it with other LoRAs?

by AeroDEmi

I'm going to train some LoRAs on Vega and want to use VegaRT to speed up the process. I did something like this:

import torch
from diffusers import AutoencoderKL, DiffusionPipeline, LCMScheduler

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained("segmind/Segmind-Vega", vae=vae, torch_dtype=torch.float16)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load VegaRT and my custom LoRA as named adapters, then activate both
pipe.load_lora_weights("segmind/Segmind-VegaRT", adapter_name="rt")
pipe.load_lora_weights(model_path, adapter_name="lora_1")  # model_path: my custom LoRA
pipe.set_adapters(["lora_1", "rt"], adapter_weights=[1.0, 1.0])

The results from this are not good. Is there another way to use VegaRT together with our own LoRAs?
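For completeness, one variation that could be tried (a sketch only; the 0.8 weight is an arbitrary starting point, not a tested value) is to down-weight the custom adapter instead of running both at full strength:

# experiment with relative adapter strengths instead of 1.0 / 1.0
pipe.set_adapters(["lora_1", "rt"], adapter_weights=[0.8, 1.0])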

Thank you

Segmind org

You don't need to take this roundabout route; you can load it like this:

import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "segmind/Segmind-Vega"
adapter_id = "segmind/Segmind-VegaRT"

pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# load the LCM LoRA (VegaRT) and fuse it into the base weights
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()
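
From there, generation is a single call. A minimal sketch, assuming the usual LCM-style settings of a very low step count with guidance disabled (the prompt is just an example):

prompt = "A cinematic photo of a lighthouse at sunset"  # example prompt
# LCM-style inference: few steps, guidance disabled
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]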

My question is: how can we load another LoRA on top of it?
As far as my experiments go, my custom LoRA does not perform well when I load it together with VegaRT.
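One pattern I haven't verified (a sketch, assuming the fused VegaRT weights can take a second LoRA on top; my_lora_path is a placeholder for the custom LoRA) would be to fuse VegaRT first and only then load the custom LoRA:

# fuse the VegaRT LoRA into the base weights first
pipe.load_lora_weights("segmind/Segmind-VegaRT")
pipe.fuse_lora()

# then load the custom LoRA on top of the fused pipeline
pipe.load_lora_weights(my_lora_path)  # my_lora_path: placeholder, not a real path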

Is there a way to train VegaRT?
