Fine-tuning code
#3
by alfredplpl - opened
The implementation is open source on GitHub; it's just a matter of fine-tuning tool authors adapting it and allowing the model to be unfrozen. The OneTrainer devs are already looking at it.
Thank you.
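The "unfreezing" mentioned above can be sketched in plain PyTorch. This is a minimal illustration, not FLUX-specific code: a generic `torch.nn.Sequential` stands in for the transformer, and which layers to unfreeze is an assumption for the example.

```python
import torch

# Stand-in for a large pretrained model (assumption: any nn.Module works the same way).
model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.Linear(8, 4),
)

# Freeze everything first, as a tool would when loading a checkpoint for inference...
for p in model.parameters():
    p.requires_grad_(False)

# ...then unfreeze only the part you want to fine-tune (here, the second layer).
for p in model[1].parameters():
    p.requires_grad_(True)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the second layer's weight and bias remain trainable
```

An optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` would then update only the unfrozen layers.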
alfredplpl changed discussion status to closed
Any comment on the discussion here? https://github.com/black-forest-labs/flux/issues/9
It appears that since the publicly released models were distilled, they are basically untrainable.
I have made it trainable: https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md
LoRA training requires about 40 GB of GPU memory. It works great, though I haven't tested it for 100k steps yet.
Legend, testing it
Did it work?
I asked the same question yesterday in another thread 🧵, but I can't find it anymore.
Sorry for spamming 🙏🏽