Fp32 version?
+1
why?
salt was going to but idk when
why?
Like, if you want to create your own finetunes or build on it, it's kind of better to have the fp32
I'll upload fp32 of the base model today. There really isn't that much benefit to the fp32 version though, tbh; training / inference precision is what will make a difference. You aren't losing very many training steps' worth of information by upcasting fp16 to fp32 during training instead of using the fp32 weights.
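For anyone who wants to try that, here's a minimal sketch of the upcast in diffusers/PyTorch. The repo id is a placeholder, not the actual release, and this isn't the author's pipeline, just the general idea of loading fp16 weights and promoting them to fp32 before fine-tuning:

```python
import torch
from diffusers import UNet2DConditionModel

# Hypothetical example: load released fp16 unet weights, then upcast to fp32
# so the optimizer and weight updates run in full precision during fine-tuning.
# "someuser/some-sd-model" is a placeholder repo id.
unet = UNet2DConditionModel.from_pretrained(
    "someuser/some-sd-model", subfolder="unet", torch_dtype=torch.float16
)
unet = unet.to(torch.float32)             # all parameters are now fp32
print(next(unet.parameters()).dtype)      # torch.float32
```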
I'll upload fp32 of the base model today.
Thank you. Can you please say what learning rate and batch size were used when training this model? I want to do additional training of the unet for wd1.5.
so no fp32?
The final stretch of training was done in mixed precision, fp16 with fp32 accumulate. There's no meaningful difference between the fp16 and fp32 weights, and this particular project didn't work out as well as we would've liked, so no fp32, no.
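If it helps, that kind of setup looks roughly like a standard PyTorch AMP loop: fp16 compute for the forward/backward pass while the master weights and updates stay in fp32. This is a generic toy sketch with a dummy model, not the actual training script:

```python
import torch
import torch.nn as nn

# Toy mixed-precision loop: fp16 compute, fp32 master weights / accumulation.
# `model`, the data, and the hyperparameters are placeholders.
model = nn.Linear(64, 64).cuda()                     # parameters stay in fp32
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(8, 64, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast(dtype=torch.float16):
        loss = model(x).pow(2).mean()                # forward math runs in fp16
    scaler.scale(loss).backward()                    # loss scaling avoids fp16 gradient underflow
    scaler.step(optimizer)                           # updates applied to the fp32 weights
    scaler.update()
```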