This model is awesome

#1
by iafun - opened

Impressive model, very good at understanding prompts, and the results are awesome.

Where can I follow upcoming versions of this model?

Owner

This model is a .safetensors that was posted on Civitai and converted to Hugging Face Diffusers format. I have no idea where it was first published, but if you keep an eye on the following, you won't miss it.
iNiverse Mix XL(SFW & NSFW)
https://civitai.com/models/226533/iniverse-mix-xlsfw-and-nsfw?modelVersionId=608842

Thank you!

Can you upload FullyReal XL v10?

https://civitai.com/models/227059/fullyrealxl

This model has been removed, but if you find it elsewhere, can you upload it?

Owner

Sorry, I don't have one.😅

Hey @John6666, can you upload cookie-run-character-style?

https://civitai.com/models/16068/cookie-run-character-style

I like this model, by the way :p

Owner

I did it; since it's a LoRA, I just put the base model repo name in README.md.
https://huggingface.co/John6666/cookie-run-character-style-v1-sd15-lora
Any LoRA that is fine with a weight of 1.0 should work this way (as long as there is a working base model on Hugging Face).
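
For reference, this is roughly what happens behind the scenes when such a repo is used: the base model named in README.md is loaded first, then the LoRA is applied on top at full strength. A minimal Diffusers sketch (the base repo and prompt here are placeholders):

from diffusers import StableDiffusionPipeline
import torch

# Load the SD1.5 base model referenced in the LoRA repo's README.md...
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder base repo
    torch_dtype=torch.float16,
).to("cuda")
# ...then apply the LoRA on top of it and generate at weight 1.0.
pipe.load_lora_weights("John6666/cookie-run-character-style-v1-sd15-lora")
image = pipe("cookie run style, 1girl, smiling", cross_attention_kwargs={"scale": 1.0}).images[0]
image.save("sample.png")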

Alright, thank you!

Hello, why can't we compute prompts anymore? The Inference API seems to be turned off...

(In our time zone) since this morning?
When I tried to run serverless inference, at least on models recently uploaded by myself and others, I got the following message and was unable to do so.
I haven't changed any of my settings, and so far there has been no announcement from HF, so I don't know whether this will stay this way forever.😅

By the way, from Spaces, I can use it just the same as yesterday...
It's inconvenient without serverless.

error:
Text-to-Image
This model does not have enough activity to be deployed to Inference API (serverless) yet. Increase its social visibility and check back later, or deploy to Inference Endpoints (dedicated) instead.

For now, here is a temporary collection of Spaces where images can be generated using HF-hosted models.
Hopefully they'll go back to the original specs...

https://huggingface.co/collections/John6666/spaces-for-text-to-images-sdxl-pony-sd15-sd30-666129d4aa688f92ce5cd563

Yeah, thank you for the Spaces.

We can't compute as easily with a Space as with the Inference API prompt interface. We are forced to change the prompt a little bit in the Space in order to get a new picture; otherwise, the same picture keeps popping up.

I wrote a topic about this issue on the forum.

If the Inference API comes back up, can you upload https://civitai.com/models/502468/bigasp-v1 ?

I'm downloading from Civitai, but they say I have to wait over an hour.
I'll do it tomorrow.

Maybe for now you can use gr.load("user/yourmodel").launch()

Maybe for now you can use gr.load("user/yourmodel").launch()

Can you explain step by step, please?

I'm so upset, the iNiverse model is so good... I read on Reddit that bigASP is awesome, and Anteros XXXL too.

Maybe for now you can use gr.load("user/yourmodel").launch()

Can you explain step by step, please?

I'm so upset, the iNiverse model is so good... I read on Reddit that bigASP is awesome, and Anteros XXXL too.

You create a new Space, create an app.py file, and put in the code.
Example:

import gradio as gr

demo = gr.load("Blane187/miyako-saitou-s1-ponyxl-lora-nochekaiser", src="models").launch()

You can use any model.

Thank you, I'm gonna test it.

I am also very disappointed that FullyRealXL v10 was deleted from Civitai... it was 512x512 but an awesome model.

no problem

If you're a free user, you don't have a GPU, so it's surprisingly slow. Even if you pay, there are some limitations.
https://huggingface.co/spaces/John6666/demo

I can't help with the erased models, but it looks like there are a few communities on HF that are archiving some of them. Hopefully they have them somewhere.
Well, I'm fairly new on HF, so I don't know much about it.

Yeah, you can find FullyRealXL v9.

Yeah, I used it, but there is a quality difference compared to v10. It seems people are complaining about the author deleting his model on Civitai; I don't know why...

I managed to create a Space by following Blane's step-by-step guide and using your model, John; it works fine. Thank you! I can use it while waiting for the Inference API to come back.

I found the bigASP model on Hugging Face and tried to run it in a private Space, but I keep getting code issues... maybe if you upload it cleanly I could load it into my Space, since other models work when I load them there.

HF was quicker. lol
https://huggingface.co/John6666/big-asp-v1-sdxl/

BTW, my conversion method is almost the same as the Space below in terms of code, slightly different from the official HF code, and I don't think my method is theoretically correct, but for some reason this way works better as a result.
https://huggingface.co/spaces/John6666/sdxl-to-diffusers-fp16

In fact, I just convert with the following code in my local environment, adding loops and branches as needed.

from diffusers import StableDiffusionXLPipeline, AutoencoderKL
import torch

load_path = "downloaded_model.safetensors"  # single-file checkpoint downloaded from Civitai
save_path = "converted_model"               # output folder in Diffusers format

pipe = StableDiffusionXLPipeline.from_single_file(load_path, use_safetensors=True, torch_dtype=torch.float16)
#pipe.vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) # if the model has no baked-in VAE
pipe.save_pretrained(save_path, safe_serialization=True, use_safetensors=True)

Thank you, it works fine!!

I tried many other models from other Spaces and always got errors in the code...

I would like to try amireal44 and realdream14, which you can find among the HF models.

I tried to create a 2nd Space, but when I edit the app.py file on the new Space, it also modifies the app.py file of the 1st Space, and then my first Space crashes... do you know why?

I didn't know such a phenomenon could happen...
I've heard that Duplicated spaces have the ability to sync with the original space.
Why don't you try creating a new Space from scratch and uploading to it, without using Duplicate?

I don't use Duplicate, I am creating a new Space; I don't know what's happening...

Anyway, thank you for your help and your efforts to put new models on HF, John!

Thanks. Good night.
Well, HF spaces often break down. Many times I go to bed and wake up and it's fixed.

The author explicitly forbids it, so I'll avoid reprinting it 😩, but there's a new version out there.
Also, I personally downloaded it.
I'll upload it once the original is gone.

FULLY_REAL_XL
https://civitai.com/models/656670/fullyrealxl?modelVersionId=734713

FULLY_REAL_XL - F.U. Edition

F.U. Edition = FLUXed Up edition. I did not want to use FLUX in the title because this is an SDXL model, not FLUX. So how is it related? This checkpoint took my most popular model and trained it on images from the incredible new FLUX model. This has pushed the model to all new levels of detail.

We are seeing the next generation of models and they are amazing. Much like SD1.5 the past year, SDXL still has a role to play while the next generation of models becomes optimized. It is a great option for the thousands of existing resources and for those who lack the computing power the newer models currently demand.

Disclaimer: This model is provided for educational purposes and for the advancement of AI art in general. The creator of this checkpoint assumes no responsibility or liability for the end user's outputs. Any illegal, unethical, or deep fake use of this model is the sole responsibility of the user engaged in those activities. The end user assumes all liabilities from use of the checkpoint.

No permissions are given for anyone to distribute this model on other sites. I do not give permission for it to be used on any image generation service other than CIVITAI.

Wowowwowoowoww!!!! Awesome news!!! Pity that we can't use it here :( My PC is low-end so I can't download it to try; I'm upset.

When do you think you'll upload it? Or can I download the model, upload it here, and use it in a private Space? That would also mean I could set the model to private.

I'm really sorry, but as long as the author strictly forbids it, I can't make it public (except in emergency situations, like it disappearing).😓
The author losing motivation would be worse than any legal issue or whatever.
I've already uploaded it to my private repo just in case, so it's already backed up on HF...

In general, it's possible to put it up on a private repo and use it only from your own space, if you use HF tokens explicitly.

But implicitly (i.e., with no token needed when reading your own repo from your own Space?), I haven't tried to see whether that's possible.
While I'm at it, I'll experiment with a randomly selected model.

OK, thank you! I understand the author's restrictions and hope he will maintain his FullyReal series. He decided to stop, then came back with another update, so his motivation may be fragile, especially when people misuse the model publicly and can't keep their use private or control the harmful publishing of what they create with it. We are lucky he still updates the model, and we can only hope he will pursue his efforts without any issues.

Anyway, the author's motivation is important. We are the beneficiaries.😊

So I tried, but implicitly it can't be accessed, even from my own Space.
The only thing to do is to add hf_token="hf_********" in the HF call, but if you write that in the code, everyone who can see the code can abuse your private repo.
So it is better to write code that references a token you set as a SECRET. It's not hard to do.

I can put it on a regular super-simple Gradio demo or any other demo, so if you have a base generation space you'd like, just specify it. I will modify it.

I sent a PM to the creator to explain that we'd like to use it in a private Space here, because low-end PC performance prevents us from using ComfyUI... let's try, but I don't expect a miracle. I also sent him a PM when he stopped producing FullyReal, and it seems he is sensitive to those who praise his model and want him to pursue his series. And he resumed his models with an update.

Can I make a public Space that you modify, and then switch it to private when you finish editing it?

No problem. Or maybe you can do it yourself. The point is this.

It's a PRIVATE repo, so you shouldn't be able to see it, but it worked.
https://huggingface.co/spaces/John6666/privatetestspace

Before:

import gradio as gr
demo = gr.load("John6666/privatetestrepo", src="models").launch()
demo.launch()

After:

import gradio as gr
import os
demo = gr.load("John6666/privatetestrepo", src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
demo.launch()

And add a read token as HF_TOKEN in the Secrets section of the Space settings.

Sorry, I apparently misled you.😱
The only pattern that's allowed is you accessing a private model that you uploaded yourself...

Or the gated ones that require permission, like from a corporation or something, but I don't know how to do that. I'll look into it later.
But if you have a Civitai key, it would probably be faster to do it yourself.

So we can upload a model privately? So I just have to download the model from Civitai, upload it to my model repo, set it to private, and call it from my new private Space? But I don't know whether we have to modify the safetensors files we upload from Civitai to the model repo to make it work?

I made this for those situations.
You'll have to “duplicate it for your own use” before you can use this tool I just created.
https://huggingface.co/spaces/John6666/sdxl-to-diffusers-v2p

By the way, the modification was done just by writing private=True.
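
For context, pushing to a private repo from code is just one flag on the upload; a rough sketch with huggingface_hub (the repo name and folder are placeholders):

import os
from huggingface_hub import HfApi

api = HfApi(token=os.environ.get("HF_TOKEN"))
# Create the destination repo as private, then upload the converted Diffusers folder into it.
api.create_repo("youruser/yourmodel", repo_type="model", private=True, exist_ok=True)
api.upload_folder(repo_id="youruser/yourmodel", folder_path="converted_model")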

I am going to try.

I have an error occurring after I see the "start converting" bar.

Wait, it has started converting successfully; before, I had pasted the model's web page address from Civitai instead of the link from the download button.

The Civitai link is confusing.
I haven't tried much Civitai related stuff, so there may be bugs.

I added HF_TOKEN in the Secrets settings, then I put my HF token in the value field, but I still have a runtime error:

Fetching model from: https://huggingface.co/iafun/privatetest
Traceback (most recent call last):
File "/home/user/app/app.py", line 3, in
demo = gr.load("iafun/privatetest", src="models").launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 115, in from_model
raise ModelNotFoundError(
gradio.exceptions.ModelNotFoundError: Could not find model: iafun/privatetest. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the hf_token parameter.

Is the model name correct?

iafun/privatetest

But this error is not about a wrong token; it's the error for when the model itself is not found.
Let's try restarting the space.

Still the same error; I don't know what's happening. Thank you again for your help.

#demo = gr.load("iafun/privatetest", src="models").launch() # wrong

import gradio as gr
import os
demo = gr.load("iafun/privatetest", src="models", hf_token=os.environ.get("HF_TOKEN")).launch() # correct

I'm going to go have dinner.

I deleted the first line of code, which was wrong, and the model works! Thank you John, without you I wouldn't be able to test models! Enjoy your meal!

I'm glad it worked. I've already had dinner.😀

You managed to get the model working for me while respecting the privacy the author asked for, and I didn't even know how it worked, wow. Your tools are so useful. You're a really great contributor.

Development progresses better when there are some requests. So I have made some of my modified CPU spaces HF_TOKEN compliant.
If you are not satisfied with the standard Gradio UI, you can make a space exclusive to your favorite models by just modifying model.py or all_models.py (see the sketch below the links).
https://huggingface.co/spaces/John6666/Diffusion80XX4sg
https://huggingface.co/spaces/John6666/t2i-multi-demo
https://huggingface.co/spaces/John6666/t2i-multi-heavy-demo/tree/main
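
The model list in those files is essentially just a Python list of repo IDs, so the edit is roughly this (the exact variable name may differ, and the second repo is a placeholder):

# all_models.py (sketch): swap in the repos you actually want to expose.
models = [
    "John6666/big-asp-v1-sdxl",
    "youruser/your-favorite-model",  # placeholder
]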

Now there are more and more Flux models popping up on Civitai, like the GGUF ones, which look amazing. Can we hope Flux models will be available here like the SDXL ones?

Flux models are increasing in both HF and Civitai.
BluePencil, famous for its anime illustrations, has just released a Flux version.

However, in order to run it on HF, it has to be converted to the Diffusers BF16 format (which is huge), and that is very hard to do at the moment.
To convert to BF16 format with the current software, I would have to use a PC with far more RAM than I have now, or use HF's paid Spaces.
So I am in the process of improving the conversion software manually.

I managed to convert to fp8 format, but BF16 apparently times out due to a limitation of Gradio or HF's server settings or something rather than Spaces' RAM capacity, so I have to work around this or devise something else.
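
Conceptually the conversion itself is the same one-liner as for SDXL, just with the Flux pipeline and BF16; it's the RAM usage and the timeouts that hurt. A rough sketch, assuming a diffusers build with Flux single-file support (paths are placeholders):

from diffusers import FluxPipeline
import torch

# Load a single-file Flux checkpoint and re-save it in the Diffusers BF16 layout.
# Loading the transformer plus both text encoders is what eats all the RAM.
pipe = FluxPipeline.from_single_file("flux_checkpoint.safetensors", torch_dtype=torch.bfloat16)
pipe.save_pretrained("flux_model_diffusers", safe_serialization=True)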

Hopefully the HF staff can provide a conversion space...
(they did in SD and SDXL)
https://huggingface.co/bluepen5805/blue_pencil-flux1

Yeah, that's what I mean: that a conversion space or a special Flux Diffusers space will be implemented for straightforward use, like for the SDXL models.

Or if the staff would upgrade the HF virtual machine to a version of pytorch and Diffusers that runs the fp8 model, I'd be fine with the fp8 model.
If I'm going to try to manage on my own, I'll just have to make a BF16 converter somehow.
I don't know why I'm doing it.😅

Because the Flux models are amazing and will replace the SDXL ones in the long run :)

Oh, by the way, there's one more problem left.
Many of the privately produced models that are out there now seem to be in the ComfyUI format, both the last photorealistic model and this one, BluePencil.
They cannot be converted by the official current converter.
There are at least three formats in existence: Official, ComfyUI, and Diffusers.
The program to solve this problem is very easy, in fact, even I could make it, but it is difficult to know who will implement it officially and where.

SD3 was half dead the moment it was born... Flux will be the standard if there are no other candidates.
It seems that developers who are creating training environments for models are also directing their resources to Flux for the time being.

But as it is now, putting models in HF is just mirroring. That's not bad, and there's nothing wrong with doing it, but frankly, it's boring.
I'd like to put something that works.

I think it can be good to have many AI creators, as the competition between them will lead to faster and further improvement of the current models.

Yes, it is. Good architectures are more likely to be created and offered to users if there is more competition.
Model authors have a hard time, but they have accumulated know-how in training models, and both WebUI and ComfyUI and model authors are now adapting to SD3 and Flux at a very fast pace.

Diffusers (github version) was also quick to support SD3 and Flux, and I think Civitai's generation service also supported it.
HF's server-side services are a bit slow this time around, and the inference API is still half dead. (Well, I wasn't here when SDXL was released, so I don't know exactly...)

Yeah, I was complaining on the forum because the Inference API is down most of the time... I wonder what I will do... I must save money for a new PC.

(Well, a 4090 is way too expensive.) It's not that I don't have the money...
Anyway, migrating a PC environment is very time-consuming. I'd like to avoid it.

Above all, I would like to use cloud services as much as possible rather than buying a new PC every time to keep up with new releases of not only image generation AI architecture but also LLM, VLM, voice-related AI, video-related AI, and so on.
FLUX.1 is unlikely to be the last model for the human species.

In short, I'll struggle with it myself, but if HF can do something about it, that's all that matters.😭

Yeah, it's a better solution. Since models evolve fast, we will probably be outdated soon even with a good PC setup: you buy the latest graphics card generation with lots of VRAM, then in 3 or 6 months there will be a new AI architecture that consumes double that. So it's better to stay on cloud services.

Look: https://huggingface.co/city96/FLUX.1-dev-gguf

Quantization is a smart answer to keep up with the fast evolution.
GGUF quantization is said to be more accurate but slightly slower (higher latency) for image generation.
I haven't done statistics, but it seems that most people using FLUX.1 locally are using the NF4 quantized model. (lllyasviel and sayakpaul started it, and lllyasviel also updated the Forge version of WebUI super fast.)
https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4
https://huggingface.co/sayakpaul/flux.1-dev-nf4
https://github.com/lllyasviel/stable-diffusion-webui-forge
It's somehow possible to use FLUX.1 at 4-bit on a PC that was using SDXL at 16-bit.

It would be a quicker story if Diffusers supported both internally, but if they add it naively, the maintainers won't like it because of the added dependencies: the gguf library for GGUF and bitsandbytes for NF4.

But I think supporting quantization in Diffusers is inevitable, since quantized versions will be the norm for future model distribution (and for LoRA training and distribution), considering users' PC environments.
I understand the maintainers' reluctance, but since HF is not so short on server resources, the servers can handle the burden as long as the entire ecosystem is completed within HF.
But a community isolated from the outside world is inefficient and, above all, boring.🙄
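
For what it's worth, once the bitsandbytes path is wired in, loading the Flux transformer in NF4 should look roughly like this (a sketch; the exact class and argument names are my assumption of how the support gets exposed, and FLUX.1-dev itself is gated, so an account with access is needed):

import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel, FluxPipeline

# Quantize only the huge transformer to NF4; the text encoders and VAE stay as they are.
nf4 = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer",
    quantization_config=nf4, torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # keeps VRAM usage closer to what an SDXL-class GPU can handle
image = pipe("a lighthouse at dusk", num_inference_steps=28, guidance_scale=3.5).images[0]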

I am surprised that a new AI architecture has come along before the potential of the SDXL architecture has been fully exploited.

I have heard that even with SD 1.5, the variety of artistic styles became established much later. (This is hearsay, but...)
The SDXL architecture is still waiting for the release of Pony 6.9.
Training models costs an inordinate amount of money anyway, and it takes a lot of personal effort and time for the means of training to become available.

I'm not a Chinese speaker, so I haven't followed it closely, but Kolors seems to be another great architecture, realized in a generation of technology close to SDXL.
I guess that means there is still room for ingenuity in tokenizers and text encoders even in the SDXL generation.
https://huggingface.co/Kwai-Kolors/Kolors

It seemed too difficult for me to deal with the HF FLUX.1 problem on my own (for very trivial reasons, but great technical difficulties).
I found a post for requests to HF, so I asked HF for a FLUX.1 converter.
https://huggingface.co/posts/victor/964839563451127

The compatibility issue with ComfyUI is OK, as sayakpaul will fix Diffusers.
Then we can download the models that are about to be erased and back them up.

We have to wait for a dedicated HF space for this. But authors are very quick to adapt to Flux, as they are used to training their SDXL models. It's impressive how fast they are. I hope the HF folks will react quickly to support the Flux architecture.

A must would be Inference API compute pages for Flux models.

I agree. It would be easier and more reliable if the staff did it, and it would set an example for me to do something.
The FLUX.1 model is too heavy, and I'm experimenting in ZeroGPU Spaces, but the quota is too tight to do anything complex. To make a stand-alone generation Space, I don't have enough ZeroGPU Space slots right now (up to 10 per account).
I'm just doing it in between other things, which is fine, but the time I'm stuck waiting for the quota limits is longer than the time I spend developing.
There is a nice space about LoRA though.
https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer

He has already updated his model: https://civitai.com/models/656670?modelVersionId=761055

Things are going too fast for me.

I have a backup on HF. If my hard drive blows up, I can recover it as long as I remember the password; it's only impossible if my HF account blows up!
It would be easier if he'd give me permission to reprint it...
I'm sure it's traumatic for him.😓

I asked him in a PM but got no answer; he has had issues with his past models. There is one post where he explains what happened.

I'm too lazy to redo the conversion and configure a private Space for this one.

Hello John, do you have this model, or can you upload it? https://civitai.com/models/578500/yaprm-yet-another-pony-realistic-merge

thank you John.

With all your efforts to help people, I send you all the thumbs up and love, John!

Owner

It looks like the HF people are working hard on debugging, so I'm just taking it easy while I wait.
If the HF people had given up on the debugging, even I would run away.😇

By the way, I don't know what other people's demands are, so it's actually easier if I get a moderate number of requests.
If there are too many requests, it would be difficult.
Maybe this is the same for HF people and developers in general.

Thank you for uploading models, John!

When you have time, could you upload the V2.0 of the same model: https://civitai.com/models/578500?modelVersionId=667338

Comments say it's the best version of this model.

Owner

I've heard that sometimes an author creates an old version so well done that they can't surpass it.
Anyway, it is very easy to copy and paste more than 90% of the README.md.
https://huggingface.co/John6666/yaprm-yet-another-pony-realistic-merge-v20-sdxl

I always go for the latest version, and then I can miss a good old version. It has already happened to me with the Juggernaut model. Thank you!

Hello John,

Do you know when AI architectures will offer native 2048x2048 pictures? 1024x1024 was a huge shift from 512x512, but 2048x2048 would be so great!

Owner

No, I have no clue about the roadmap.
That said, I'm scared of how many times more VRAM would be consumed by the UNet alone if images were twice as tall and twice as wide.
The UNet (transformer) file for Flux is also roughly 4 times the size of the SDXL one...

Personally, I think the text encoder needs to be improved first. That is especially the starting point for image generation AI development in non-English speaking countries.

So let's cheat with up-sampling for now.
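
For example, a crude "hires fix" style pass with SDXL: generate at the native 1024x1024, then upscale and refine with a low-strength img2img pass. A sketch (the model repo and prompt are placeholders):

from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

prompt = "a photo of a lighthouse at dusk"  # placeholder prompt

# First pass at the resolution the model was trained for.
pipe = StableDiffusionXLPipeline.from_pretrained("youruser/your-sdxl-model", torch_dtype=torch.float16).to("cuda")
base_image = pipe(prompt, width=1024, height=1024).images[0]

# Upscale to 2048x2048, then let img2img re-add detail at low strength.
refiner = StableDiffusionXLImg2ImgPipeline.from_pipe(pipe)
upscaled = base_image.resize((2048, 2048))
final = refiner(prompt, image=upscaled, strength=0.3).images[0]
final.save("upscaled_2048.png")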

Hello John,

Is there any way to upload a Flux LoRA model from Civitai and use it on an Inference API page, like this author's Flux models on HF: https://huggingface.co/punzel ? He managed to put up LoRA models we can compute with on the Inference API, but I don't know how long these LoRAs will be up.

There are a lot of authors on Civitai specialized in Flux LoRAs who do amazing work; we can check there: https://civitai.com/tag/celebrity?sort=Newest or

I know because it's part of my daily routine to collect LoRA in Civitai too.
However, I do feel a bit sorry toward the authors when reprinting a LoRA directly.
Because they get Buzz for likes and ❤ and make more LoRAs with it...

A LoRA is small, so it's easy to get it working by downloading it directly, uploading it, and writing a README.md, but the Spaces below will take care of that step for you, so why not use them?
If you want to write a README.md yourself, I'll show you how. It's not complicated enough to need much teaching.
https://huggingface.co/spaces/multimodalart/civitai-to-hf
https://huggingface.co/spaces/ChenoAi/civitai-to-huggingface-uploader
https://huggingface.co/spaces/Hev832/civitai-model-downloader

Yeah, let's take an example like this one: https://civitai.com/models/773274/joacquin-phoenix

  1. Create a new model repo
  2. Upload that LoRA file
  3. Write README.md as follows
  4. Done!
---
base_model: black-forest-labs/FLUX.1-dev
---
test

There is one thing to note here: the Flux dev is a gated repo, so you can't use it in your own space without permission. I haven't got permission either.
https://huggingface.co/black-forest-labs/FLUX.1-dev
I'll tell you how to get around that now.
By the way, LoRA for SD1.5 and SDXL will also work if you follow the same procedure and set the base model appropriately.

But how did the author I mentioned above manage to get his LoRAs working as models with the Inference API available?

I uploaded the LoRA safetensors to a model repo and then created the README.md, but no Inference API is available. I didn't create a Space after I read your message above.

https://huggingface.co/spaces/huggingface-projects/repo_duplicator
Use this space; it's HF official, don't worry.

  1. Sign in with Hugging Face
  2. set camenduru/FLUX.1-dev-diffusers to source_repo
  3. set iafun/FLUX.1-dev to dst_repo
  4. Submit
  5. Place README.md in the finished repo, with the content shown below. The content can be anything, but I'll include a license just in case.
  6. Use iafun/FLUX.1-dev as base_model
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---

Or you can upload LoRA to your own repo and then use the space below, well, it doesn't make much difference.
https://huggingface.co/spaces/John6666/flux-lora-the-explorer
It can also download from Civitai directly, but for all intents and purposes it is more comfortable to put the file on HF once. This is HF, after all.

But how did the author I mentioned above manage to get his LoRAs working as models with the Inference API available?

If the author gets permission, they can use it. It's per account.
You don't need to pay anything, just provide BFL with your email address.
If you don't want to do it on your main account, you can create a throwaway account.

I am stuck here.

  1. Use iafun/FLUX.1-dev as base_model

I have the https://huggingface.co/iafun/FLUX.1-dev model page showing the model card with "Inference API (serverless) is not available, repository is disabled". At the top right I can see a "Use this model" button; when I click on the button I have 3 options: Diffusers, Draw Things, and DiffusionBee.

Nowadays, except for very famous models, that is normal.
Also, README.md is often thought of as just an explanatory file, but it actually acts as configuration. Without it, it will not work.

As long as you have a README.md, the rest should work in the usual way. If it doesn't work, you have to call it from Spaces. It would be the same if I were doing it.

So did I make a mistake at some point? What do I do now?

But if I call it from a Space, I cannot use the LoRA with Gradio.

I've been having conversations on the forum about the current state of Serverless Inference, so this thread is just the thing for explaining the current state of affairs.
https://discuss.huggingface.co/t/exceeded-gpu-quota/107022

But if I call it from a Space, I cannot use the LoRA with Gradio.

No, a LoRA repo with a base model set up is treated as one model, meaning it will work if you specify the LoRA repo instead of a model repo.
It's a bit of a hassle though, because you need a Space for each LoRA...
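
In other words, app.py just points at the LoRA repo itself (the repo name below is a placeholder):

import gradio as gr
import os

# The base model is picked up from the base_model field in the LoRA repo's README.md.
demo = gr.load("iafun/your-flux-lora", src="models", hf_token=os.environ.get("HF_TOKEN"))
demo.launch()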

But I don't know how to make the base model + LoRA work together. Right now I have the Flux model repo and a LoRA uploaded to a model repo. I have only 4 options for creation: new model / new dataset / new collection / new space.

"new model" is fine.

  1. Create a new model repo
  2. Upload that LoRA file
  3. Write README.md as follows
  4. Done!
---
base_model: iafun/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
---
test

So is it about the Space? Hang on a second.
You can use this joke software to make them.
https://huggingface.co/spaces/John6666/t2i-demo-helper

I have this error:

gradio.exceptions.ModelNotFoundError: Could not find model: iafun/[loraname]. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the hf_token parameter.

I put my model in private mode in the settings.

I guess I have to set things in Variables and Secrets in the Space settings, like the other time when I managed to set up the FullyReal model?

I guess I have to set things in Variables and Secrets in the Space settings, like the other time when I managed to set up the FullyReal model?

Yes.
Incidentally, I have never tried to see if the base model works while private. If it can't, I thought it would be a good idea to just make the base model public, or find a base model that someone else has made usable. Maybe one already exists...

Or rather, it would be easiest if we meekly provided BFL with our email addresses. Something about that bothers me, though.

I will try this tomorrow, I'm going to sleep! Thank you John for your help! I'll update tomorrow if I manage to get it working.

find a base model that someone else has made usable. Maybe one already exists...

It was no good. Not found.

Good night!

Hello John, I made the model and the LoRA public, and I have this runtime error: https://huggingface.co/spaces/iafun/rox

Good morning. Over here it's evening.
Maybe it is not the LoRA that needs to be made public, but the base model.

Even from models of the same user, private repos are basically invisible... I just found out.
That's a relief, but also an inconvenience...

By the way, I'll tell you in advance that the unusual heaviness is by design, because Flux models take more than 30 GB of VRAM...
About five times more than SDXL.
That's why it's better to provide BFL with an email address so LoRAs can run more crisply, and that's what a lot of people do. There's no real harm in it; it's just kind of weird.
The famous models that everyone uses at the same time are faster because they are kept in the cache.

I have the same error as above when I set the LoRA to private and the base model to public:

gradio.exceptions.ModelNotFoundError: Could not find model: iafun/[loraname]. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the hf_token parameter.

Do I have to do the Variables and Secrets thing? I don't remember how I did it for the FullyReal model.

Or maybe it would be easier to use your LoRA Space? But I don't know how it works; there are many parameters, and I guess I will be quota-limited after a while, with timeouts after many prompts.

Do I have to do the Variables and Secrets thing? I don't remember how I did it for the FullyReal model.

  1. Create and note a read token. https://huggingface.co/settings/tokens
  2. Set the read token to HF_TOKEN in the space's secret.
  3. Rewrite the following code and put it in app.py
  4. Done!
import gradio as gr
import os
repo = "John6666/privatetestrepo" # modify this
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN"))
demo.launch()

It's easier if you keep README.md and app.py templates in a folder somewhere and copy, paste, and adapt them as needed.
That's more or less what I do too.

Hey @John6666, can you make a Space for downloading LoRA models from Civitai?

I'd appreciate it.

can you make a Space for downloading LoRA models from Civitai?

I've put that feature in this space.
https://huggingface.co/spaces/John6666/flux-lora-the-explorer
Imports from Civitai to HF can be handled by this Space created by HF staff.
https://huggingface.co/spaces/multimodalart/civitai-to-hf

I get this error now:

Fetching model from:
Traceback (most recent call last):
File "/home/user/app/app.py", line 4, in
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 370, in from_model
raise ValueError(f"Unsupported pipeline type: {p}")
ValueError: Unsupported pipeline type: None

I get this error now:

Fetching model from: https://huggingface.co/iafun/roxycook
Traceback (most recent call last):
File "/home/user/app/app.py", line 4, in
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 370, in from_model
raise ValueError(f"Unsupported pipeline type: {p}")
ValueError: Unsupported pipeline type: None

Maybe you can delete .launch() after HF_TOKEN, like this:

import gradio as gr
import os
repo = "nevreal/privatemodel" # modify this
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN"))
demo.launch()

Yup. Seems I made a fancy bug when I copied and pasted. I secretly fixed it.😅
Also, maybe you should specify the following in the model card (README.md). It often works without it, though.

---
base_model: iafun/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
---
test

Yes, I have just the base model referenced; I'm going to try it.

I got the same error, "unsupported pipeline", after removing demo.launch() and modifying the Flux model's README.md with the code you posted above.

Maybe I have to modify the README.md file of the LoRA repo?

Maybe I have to modify the README.md file of the LoRA repo?

Exactly.

I have the error about token access, even though I set it in the Space secrets; very weird.

When that happens, there's usually one letter wrong or something (HF__TOKEN or HF_TOKE or so). Also, the token may be invalid. (Tokens have no fixed lifespan, but you may have deleted it yourself at some point.)
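
One quick sanity check (just a sketch): whoami raises an error if the token the Space actually sees is bad or missing, and prints your account info otherwise.

import os
from huggingface_hub import whoami

print(whoami(token=os.environ.get("HF_TOKEN")))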

Is there something wrong in this app.py code?

import gradio as gr
import os
demo = gr.load("iafun/rox", src="models", hf_token=os.environ.get("HF_TOKEN")).launch()

Is there something wrong in this app.py code?

If we read the error message dispassionately, we can see that for grammatical (syntax) errors, the computer points out that the syntax is wrong.
If we get a wrong-token error, it means the syntax itself is probably correct and only a name is wrong somewhere. Often the spelling of HF_TOKEN on the Space secrets side is slightly off, or the token is missing for some other reason.

import gradio as gr
import os
try:
    demo = gr.load("iafun/rox", src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
except Exception as e:
    print(e)

This way, you can sometimes get a bit more detail about the error. Often it doesn't show up at all, though.

runtime error
Exit code: 0. Reason: application does not seem to be initialized

Container logs:

===== Application Startup at 2024-09-20 10:22:38 =====

Fetching model from: https://huggingface.co/iafun/rox
Could not find model: iafun/rox. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the hf_token parameter.
Fetching model from: https://huggingface.co/iafun/rox
Could not find model: iafun/rox. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the hf_token parameter.

Clearly the repo name or the secret is wrong. I don't know if the reason is a spelling mistake or if the token content is buggy...
You can have as many tokens issued as you want, so you can swap them around.

OK, thank you John, I will check this and retry.

The repo name was wrong, but now I have this error message:

runtime error
Exit code: 0. Reason: application does not seem to be initialized

Container logs:

===== Application Startup at 2024-09-20 10:34:44 =====

Fetching model from: https://huggingface.co/iafun/********
Unsupported pipeline type: None
Fetching model from: https://huggingface.co/iafun/************
Unsupported pipeline type: None

If the following statement is written and it doesn't work, I don't know why... unless the LoRA file is corrupted, I would think that the HF server would recognise the Flux pipeline on its own...
I added a few lines.

https://huggingface.co/iafun/roxycook 's README.md

---
base_model: iafun/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
  - text-to-image
  - lora
  - diffusers
  - template:sd-lora
---
test

I rewrote the README.md of the LoRA repo with the code above, and now I have this error:

===== Application Startup at 2024-09-20 10:43:28 =====

Fetching model from: https://huggingface.co/iafun/*******
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/******
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Cannot process this value as an Image, it is of type: <class 'tuple'>

You're making steady progress. It's probably a bug in Gradio; make sure you have the latest version of Gradio in your README.md.
The latest version might be buggy, but that's the way it is!

README.md of the space

sdk_version: 4.44.0

Do you think it could be a bad upload of the LoRA? I don't remember well, but I think I used the Civitai-to-HF Space you suggested before; I'm unsure.

Here is the README.md of the Space:

title: Rox
emoji: 💻
colorFrom: gray
colorTo: indigo
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false

You really only need to upload one safetensors file...
There's no way to make a mistake.
If you bring someone else's working LoRA file and put it there, you might be able to isolate the problem.
The only caveat is that if you put multiple LoRAs in one place, the HF will choose one of them on its own.

So I need to test with another LoRA.

I will try later; I'm tired for the moment.

Thank you for your help, John!

see you.

I uploaded a 2nd LoRA safetensors file, and here is the error after some time:

runtime error
Exit code: 0. Reason: application does not seem to be initialized

Container logs:

===== Application Startup at 2024-09-20 17:29:36 =====

Fetching model from: https://huggingface.co/iafun/*****
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/*****
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1

Perhaps we now know the cause. I'm so sorry. The base model repo I specified was wrong!
The correct one is one of these:

ChuckMcSneed/FLUX.1-dev or camenduru/FLUX.1-dev-diffusers

So it should work if the base model is rebuilt.
https://huggingface.co/spaces/huggingface-projects/repo_duplicator
Use this space; it's HF official, don't worry.

  1. Sign in with Hugging Face
  2. set camenduru/FLUX.1-dev-diffusers to source_repo
  3. set iafun/FLUX.1-dev to dst_repo
  4. Submit
  5. Place README.md in the finished repo. The content should be written below. The content can be anything, but I'll include a license just in case.
  6. Use iafun/FLUX.1-dev as base_model
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---

OK, I am going to try this!

I have an error when I try to set up the repo with the repo duplicator:

oops, you forgot to login. Please use the loggin button on the top left to migrate your repo 409 Client Error: Conflict for url: https://huggingface.co/api/models/camenduru/FLUX.1-dev-diffusers/duplicate (Request ID: Root=1-66edef2b-5e391e41382f73f35b292e17)\n\nYou already created this model repo

You already created this model repo

If you don't delete the previous repo, it won't be overwritten, because that would be dangerous.

OK, so I delete the old one and then follow your step-by-step guide above.

Still the same error when I restart the Space:

Fetching model from: https://huggingface.co/iafun/******
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/*****
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Cannot process this value as an Image, it is of type: <class 'tuple'>

What is weird is that the Inference API field is open for compute on the LoRA model page, but when I try it, it says "no model found".

I've seen the repo and the settings are correct, the server recognizes it as Flux. But for some reason Inference is turned off. Don't tell me it's Flux that forces it to be off...?

Inference Examples
Text-to-Image
Inference API (serverless) is not available, repository is disabled.

It's disabled on the Flux model repo but active on the private LoRA repo; but it doesn't work, as it says "no model found" when I try to compute something.

it doesn't work, as it says "no model found" when I try to compute something

That's because LoRA alone doesn't work...
But I'm sure you've done all the settings a user can do.
Even BFL officials don't have complicated settings.
I wonder if the impact of Serverless being degraded has even made its way here...
You might seriously want to register with BFL.
I mean, why does dev ask for an email address when schnell is fine...🤢
https://huggingface.co/black-forest-labs/FLUX.1-schnell

Well, I thought it would work on the LoRA repo, as I saw the tree where it says the LoRA is linked with the base model, which is a very smart move.

Well, I don't know how it works with BFL...

Let me clear one thing up. If it worked, does it mean that we managed to link a LoRA to its base model? So I guess that if I want to use an SDXL LoRA with an SDXL base model, or a Pony LoRA with a Pony base model, it is possible here on HF?

if I want to use an SDXL LoRA with an SDXL base model, or a Pony LoRA with a Pony base model, is it possible?

At least I used to be able to. I guess I'll have to try it to see if it works now...

Well, almost all the LoRAs now are Flux ones... but there are Pony and SDXL LoRAs too. And the Flux LoRAs are being produced at a very impressive speed; I just checked today and saw at least 10 LoRAs in one day...

Well, I don't know how it works with BFL...

I've never used a gated repo either, so I'm not sure how it actually works, but at least others seem to be able to use LoRAs without special settings, as long as they have per-account permission...?
Maybe it's not as complicated to set up as a private repo.

I've been looking around a bit and it looks like AuraFlow and others don't work with Serverless. I'm not sure what the conditions are, but it seems like a lot of work to turn it on.

I've looked at my HF account settings and it looks like I can change this email address. Why don't you take some unwanted email address and set it up and authenticate it?

Do I need to create a new account with an email to make it work?

No; if you're fine providing your current e-mail address to BFL, just authenticate on the dev page. The only time you need to change it is if you're using an important e-mail address.
I use an unimportant e-mail address too, but not everyone does.

https://huggingface.co/black-forest-labs/FLUX.1-dev

OK, so I clicked on granting access and it's done.

So, if you set the base model to this, you're theoretically done.

black-forest-labs/FLUX.1-dev

I get an error:

"Oops, you forgot to login. Please use the loggin button on the top left to migrate your repo (Request ID: Root=1-66edff4e-59299ab272cd12903f9f3696)\n\n403 Forbidden: You can't duplicate gated models.\nCannot access content at: https://huggingface.co/api/models/black-forest-labs/FLUX.1-dev/duplicate.\nIf you are trying to create or update content,make sure you have a token with the write role."

No, I thought you didn't need to duplicate. It's good to have a backup.
It's definitely faster to use it directly. You can set up the BFL ones as the base_model instead of your repo.

I set the base model to black-forest-labs/FLUX.1-dev in the LoRA repo's README, and I still have the same error:

Fetching model from: https://huggingface.co/iafun/******
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/****
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1

Eh?😰
Sorry, indeed, I don't know what's going on.
It's quite common for Gradio and HF to not work well together, so maybe that's why?
I'll look at other people's Spaces and try to bring something that works here.

We tried, and you helped me a lot; I don't want to force things. Maybe later we will have new repo functions that are easier to set up.

Right. They probably don't think it's finished either, so it's definitely a work in progress.

This one should work. Just copy it for yourself and set HF_TOKEN.
https://huggingface.co/spaces/Nymbo/flux-lab-light

Thank you, I will try it later.

hello john,

I tested the Space above, and it works with LoRAs on HF that are warm and have a trigger word. I tried to use the Flux LoRA I uploaded yesterday when we tested the base model, but it doesn't work.

Is it possible to upload a Flux LoRA and make it work with the Nymbo Space?

Is there a similar Space where you can use Pony LoRAs? Or maybe what we tried yesterday, when we created the base model repo with the LoRA repo, can also work with a Pony base model? It may be easier to set up?

Is it possible to upload a Flux LoRA and make it work with the Nymbo Space?

It seems to work if the repo is in a warm state; HF_TOKEN is set first, so maybe it's OK for a private repo too.

Or maybe what we tried yesterday, when we created the base model repo with the LoRA repo, can also work with a Pony base model? It may be easier to set up?

That may or may not be easier, but it's more reliable. It's not a big hassle and I can modify Nymbo's to make one for SDXL, but the drawback of the Nymbo Space is that the model has to be in a warm state to use it.
With SDXL, the only one that's always warm is Animagine.

BTW, I already have my own ZeroGPU Spaces, although they're not suitable for continuous generation.
https://huggingface.co/spaces/John6666/votepurchase-multiple-model
https://huggingface.co/spaces/John6666/DiffuseCraftMod
https://huggingface.co/spaces/John6666/flux-lora-the-explorer

I tried to use the Flux LoRA I uploaded yesterday when we tested the base model, but it doesn't work.

Mysterious...
https://huggingface.co/CultriX/flux-nsfw-highress
I wonder if the contents of README.md are still missing something? Or is it that a private repo doesn't go to warm status?
Why don't you try duplicating this one with the repo duplicator, for example, and see if it works?

Yeah, I have seen this page, and I wonder how he did it.

But I would just like to be able to have base model repos (Pony and Flux) that I can link to whichever LoRA I want.

Hmmm... what else is suspicious in README.md... how about the following?

instance_prompt: nsfw

Put the LoRA trigger word in this field. It should save you a lot of typing. Or, in the case of Flux, it might even be essential.
SDXL and SD1.5 LoRAs worked normally without it, but Flux seems to differ in more ways than expected.

---
base_model: black-forest-labs/FLUX.1-dev
tags:
  - text-to-image
  - lora
  - diffusers
  - template:sd-lora
instance_prompt: nsfw
license: apache-2.0
---
test

Hello John,

I tried to set up a Pony base model with one Pony LoRA, so I did all the steps above, and now I have this error:

===== Application Startup at 2024-09-22 10:28:31 =====

Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Traceback (most recent call last):
File "/home/user/app/app.py", line 4, in
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 395, in from_model
interface = gradio.Interface(**kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 532, in init
self.render_examples()
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 880, in render_examples
self.examples_handler = Examples(
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 81, in create_examples
examples_obj.create()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 340, in create
self._start_caching()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 391, in _start_caching
client_utils.synchronize_async(self.cache)
File "/usr/local/lib/python3.10/site-packages/gradio_client/utils.py", line 855, in synchronize_async
return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 517, in cache
prediction = await Context.root_block.process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1945, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1768, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 226, in postprocess
saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format)
File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 72, in save_image
raise ValueError(
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1

ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>

What is it?
I know it's returning tuples that Gradio can't handle, but maybe the behaviour changes depending on the contents of README.md, and it may or may not be an error.
What about removing the pipeline_tag and library_name lines from the LoRA's README.md?
At least the behaviour is likely to change.

I'm going to try.

I have this error now. I am not going to insist too much if it doesn't work, but I was encouraged that the Space took a long time building before it crashed, so I don't know whether you feel I'm close to succeeding with the install or whether it's wiser to give up.

===== Application Startup at 2024-09-22 10:53:30 =====

Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Traceback (most recent call last):
File "/home/user/app/app.py", line 4, in
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 395, in from_model
interface = gradio.Interface(**kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 532, in init
self.render_examples()
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 880, in render_examples
self.examples_handler = Examples(
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 81, in create_examples
examples_obj.create()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 340, in create
self._start_caching()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 391, in _start_caching
client_utils.synchronize_async(self.cache)
File "/usr/local/lib/python3.10/site-packages/gradio_client/utils.py", line 855, in synchronize_async
return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 517, in cache
prediction = await Context.root_block.process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1945, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1768, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 226, in postprocess
saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format)
File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 72, in save_image
raise ValueError(
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1

A long SDXL model load can take approximately three minutes; a Flux model would take more.😭
So, a possible sign of success...
It might be more stable to use Animagine, which is always in use and cached, or a suitable Pony model, as the base model. It is only meant for testing purposes anyway.
If it's warm, the load time is practically zero.
