City (city96)
AI & ML interests: LLMs, Diffuser models, Anime
Organizations: None yet

city96's activity
InvokeAI support? · 2 · #7 opened 5 days ago by HuggingAny
Q4_0, Q4_1, Q5_0, Q5_1 can be dropped? · 1 · #1 opened 4 days ago by CHNtentes
SD3.5 medium pls · 1 · #1 opened 11 days ago by CassyPrivate
it'd be awesome if nf4 was provided · 1 · #2 opened 13 days ago by MayensGuds
how many steps needed? · 1 · #3 opened 12 days ago by pikkaa
Not able to download it · 1 · #38 opened 16 days ago by iffishells
need fp8 for speed · 3 · #1 opened 17 days ago by Ai11Ali
Does it work with gguf of t5xxl? · 5 · #2 opened 17 days ago by razvanab
Create an equivalent to GGUF for Diffusers models? · 1 · #37 opened 16 days ago by julien-c
which vae file should be used with this gguf? · 6 · #2 opened 17 days ago by slacktahr
The load dual clip GGUF can use this encoder, but what about clip_i? · 2 · #9 opened 23 days ago by witchercher
Forge support and updated convert script. · 3 · #1 opened 26 days ago by city96
llama cpp · 5 · #31 opened 2 months ago by goodasdgood
Is it using ggml to compute? · 1 · #30 opened 2 months ago by CHNtentes
How to use the model? · 1 · #8 opened 2 months ago by AIer0107
FLUX GGUF conversion · 1 · #29 opened 2 months ago by bsingh1324
code for using this quantized model · 3 · #18 opened 3 months ago by Mahdimohseni0333
can't see the Q5_K_M gguf quant · 2 · #28 opened 3 months ago by Ai11Ali
Method for quantizing and converting FluxDev to GGUF? · 6 · #27 opened 3 months ago by Melyn
how do I use this? can't load t5 gguf with clip l safetensor · 10 · #6 opened 3 months ago by MANOFAi94