TypeError: convert_url_to_diffusers_repo() takes from 4 to 22 positional arguments but 24 were given.

#9
by xi0v - opened

@John6666
Hello, I get this error now when trying to convert models:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
TypeError: convert_url_to_diffusers_repo() takes from 4 to 22 positional arguments but 24 were given
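
For reference, this kind of TypeError usually means the Gradio event wiring passes more input components than the handler's signature accepts. A minimal, purely hypothetical illustration (none of these names are from the actual app):

import gradio as gr

# Hypothetical 4-argument handler standing in for convert_url_to_diffusers_repo().
def convert(url, hf_token, civitai_key, repo_name):
    return f"would convert {url} into {repo_name}"

with gr.Blocks() as demo:
    url = gr.Textbox(label="Model URL")
    token = gr.Textbox(label="HF write token")
    key = gr.Textbox(label="Civitai API key")
    repo = gr.Textbox(label="New repo name")
    extra = gr.Checkbox(label="Newly added option")  # wired into inputs but never added to convert()
    out = gr.Markdown()
    # Gradio passes one positional argument per input component, so five inputs
    # against a four-parameter handler raises exactly this TypeError at click time.
    gr.Button("Convert").click(fn=convert, inputs=[url, token, key, repo, extra], outputs=out)

# demo.launch()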

Still in development, but this one might work better.
https://huggingface.co/spaces/John6666/sdxl-to-diffusers-v2-test

Still in development, but this one might work better.
https://huggingface.co/spaces/John6666/sdxl-to-diffusers-v2-test

(screenshot attached)

Sorry, I missed the URL. I mean, the build is stuck forever and won't start...
I can't provide a working URL.
I think this is HF changing the default settings for Spaces again.
It might not be fixed until tomorrow. In the meantime, maybe the merger could be improved instead, for example...

Edit:
No it doesn't work at all...good night.
https://huggingface.co/spaces/John6666/sdxl-to-diffusers-v2-cliptest
https://huggingface.co/spaces/John6666/sdxl-to-diffusers-v2p
This one is not rebuilt and may barely work.

Looks like someone at HF fucked up big time 💀

(screenshot attached)

Something is genuinely not right; I cloned the cliptest space and it also has the same problem

The 1970 problem (zero in Unix time) is fairly common. The real problem is probably a mistake in changing the Python startup or optimization options.
I've had this happen before for a moment, and it was back to normal within half a day.

I cloned the cliptest space and it also has the same problem

All right! Let's go to bed!
That's what we do when it seems like all our thinking and handiwork is futile.

It seems like everything is back to normal!

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 341, in convert_url_to_diffusers_repo
    new_path = convert_url_to_diffusers_sdxl(dl_url, civitai_key, hf_token, is_upload_sf, half, vae, scheduler, lora_dict, False, clip)
  File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 269, in convert_url_to_diffusers_sdxl
    kwargs["vae"] = my_vae
UnboundLocalError: local variable 'my_vae' referenced before assignment
 

I fixed it, but the puzzling part is that this bug is supposed to be triggered by a failed VAE download. I wonder why it would fail even though the URL is a fixed one...?
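
The shape of the fix was roughly this (a minimal sketch, not the actual code in the space; the function and variable names are only illustrative): my_vae was only bound inside the successful-download branch, so a failed or skipped VAE download left it undefined by the time kwargs["vae"] was set.

from diffusers import AutoencoderKL

def apply_vae(vae_url, kwargs):
    my_vae = None  # bind the name up front so a failed download can't leave it undefined
    if vae_url:
        try:
            # hypothetical load step; the real space downloads the file first
            my_vae = AutoencoderKL.from_single_file(vae_url)
        except Exception as e:
            print(f"VAE load failed: {e}")
    if my_vae is not None:
        kwargs["vae"] = my_vae  # previously reached even when the assignment above never ran
    return kwargs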
Maybe the HF error is still ongoing. Take care.

Edit:
Maybe the HF outage was caused by them trying to fix this. Maybe there was a misconfiguration.
https://discuss.huggingface.co/t/attn-hf-staff-space-stuck-building-indefinitely/111415/21

fixed

Great! The problem was that when I specified a VAE download link, it would download the VAE and convert fine, but if I kept the same settings and just replaced the checkpoint download link and repo name, it would fail.

I see. So that's the branch that triggers it!
This got a bit complicated because the space was originally being refactored into a community model converter that would support multiple types of models.
Since SD3.5 has just been released, I have changed my original plan and am now manually implementing the fp8 conversion for large models.
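
Roughly, the manual route looks like this (a minimal sketch under my assumptions, not the exact code in the space; the model ID and the casting helper are only illustrative):

import torch
from diffusers import StableDiffusionXLPipeline

# Load in a dtype that from_pretrained() accepts...
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
)

def cast_module_to_fp8(module: torch.nn.Module) -> None:
    # ...then cast the floating-point weights module by module, since
    # torch.set_default_dtype() (which transformers calls internally) rejects float8.
    for param in module.parameters():
        if param.dtype in (torch.float16, torch.bfloat16, torch.float32):
            param.data = param.data.to(torch.float8_e4m3fn)

cast_module_to_fp8(pipe.unet)  # the UNet is where most of the size is
pipe.save_pretrained("sdxl-fp8", safe_serialization=True)  # safetensors can store float8_e4m3fn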
I wish from_pretrained() in the transformers library supported fp8 so I wouldn't have to do it manually, but I have no choice.

I wish from_pretrained() in the transformers library supported fp8 so I wouldn't have to do it manually, but I have no choice.

Sounds like we have a new GitHub issue to open!

It seems like an easy mistake, but I wonder if it's enough to warrant opening an issue?
I don't have an account...

Code

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float8_e4m3fn)

Error

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 240, in convert_url_to_diffusers_repo
    new_path = convert_url_to_diffusers(dl_url, civitai_key, is_upload_sf, dtype, vae, clip, scheduler, ema, base_repo, lora_dict, is_local)
  File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 199, in convert_url_to_diffusers
    pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float8_e4m3fn)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3772, in from_pretrained
    dtype_orig = cls._set_default_torch_dtype(torch_dtype)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1586, in _set_default_torch_dtype
    torch.set_default_dtype(dtype)
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 1198, in set_default_dtype
    _C._set_default_dtype(d)
TypeError: couldn't find storage object Float8_e4m3fnStorage

Dependencies

safetensors
transformers==4.44.0
diffusers==0.30.3
pytorch_lightning
peft
sentencepiece
torch

Edit:
@sayakpaul @dn6 Thank you for your work on the on-memory NF4 quantization support for Diffusers.

I'm not sure whether this is caused by Diffusers, Transformers, or PyTorch, but it is a bug (or a spec change) that did not occur in the versions from before the summer, around when Flux was introduced. At that time, I was able to save_pretrained() SDXL in torch.float8_e4m3fn.

Another bug with an equally ambiguous cause is the following; it has been around for at least about six months. Personally, I think it may be related to the FrozenDict in the Pipeline classes, where changes to the state_dict contents are not reflected the way they are in a normal torch module.
https://huggingface.co/spaces/diffusers/sd-to-diffusers/discussions/17

but I wonder if it's enough to warrant opening an issue?

It actually is; it's a "quality of life" kind of update that makes it easier to load models in fp8 (which is great for Flux).

Maybe we can get a PR opened and merged. If you know anyone who is a contributor to diffusers, please ping them here!

I sent a mention.

A mysterious error that I've been seeing from time to time in all spaces lately. I didn't see it before. I happened to be able to reproduce it, so I'm posting it here.

runtime error
Exit code: 6. Reason:  0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (6) Could not resolve host: huggingface.co
Warning: Problem : timeout. Will retry in 1 seconds. 5 retries left.

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (6) Could not resolve host: huggingface.co
Warning: Problem : timeout. Will retry in 1 seconds. 4 retries left.

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (6) Could not resolve host: huggingface.co
Warning: Problem : timeout. Will retry in 1 seconds. 3 retries left.

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (6) Could not resolve host: huggingface.co
Warning: Problem : timeout. Will retry in 1 seconds. 2 retries left.

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (6) Could not resolve host: huggingface.co
Warning: Problem : timeout. Will retry in 1 seconds. 1 retries left.

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (6) Could not resolve host: huggingface.co
Container logs:

===== Application Startup at 2024-10-23 23:31:40 =====

A mysterious error that I've been seeing from time to time in all spaces lately.

That's weird. It could be that the Space sends requests to the HF cluster during the startup phase (which is normal), but maybe something was down for a few moments.

maybe something was down for a few moments.

It occurs in both CPU Space and Zero GPU Space, so it must be a server issue on HF's side.

I tried to make an SD 3.5 converter, but it seems that Diffusers' pipeline for SD 3.5 still doesn't work as smoothly as it does for Flux and other models...
I can do the conversion on a per-torch-module basis.
I was able to get it to a working state with many features omitted.
Well, it's a test space for a while.
https://huggingface.co/spaces/John6666/safetensors_to_diffusers
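
By per-torch-module I mean roughly the following (a sketch that assumes single-file support for SD3Transformer2DModel in the installed diffusers version; file paths and repo names are illustrative):

import torch
from diffusers import SD3Transformer2DModel, StableDiffusion3Pipeline

# Convert just the transformer from the single-file checkpoint, bypassing the
# full-pipeline single-file path that was still shaky for SD 3.5 at the time.
transformer = SD3Transformer2DModel.from_single_file(
    "sd3.5_large.safetensors", torch_dtype=torch.bfloat16
)

# Pull the remaining components (text encoders, VAE, scheduler) from the base repo
# and save everything together in diffusers format.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.save_pretrained("sd35-large-diffusers", safe_serialization=True)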

It occurs in both CPU Space and Zero GPU Space, so it must be a server issue on HF's side.

Most definitely.

I tried to make an SD 3.5 converter, but it seems that Diffusers' pipeline for SD 3.5 still doesn't work as smoothly as it does for Flux and other models...
I can do the conversion on a per-torch-module basis.

I'd wait a few days for diffusers to optimize the pipeline and for people to even trust stabilityai with making good T2I models and use SD3.5

I was able to get it to a working state with many features omitted.

Amazing!

Well, it's a test space for a while.

What features work and what doesn't?

I'd wait a few days

I'd like that. There's definitely a bug somewhere in the pattern that no one, including me, has noticed.

What features work and what doesn't?

The NF4 quantization-related functions were omitted. Otherwise, it works. However, on the free CPU Space it will only handle up to fp8 Flux, and only if you are lucky (specifically, it depends on the size of the file you download).
If you duplicate it on a Zero GPU Space, which doesn't actually use the GPU here, you can do most things.
With 16GB of RAM and 50GB of storage, inevitably... 😅

The native NF4 support in diffusers was only merged into main three days ago. Quantization support has been implemented for the model classes, but not yet for the pipeline-level parts.
So it is not yet as simple as the load-then-save flow in transformers.
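
At the model-class level it already works along these lines (a sketch based on the newly merged support; if you are on an older diffusers, treat the import of BitsAndBytesConfig from diffusers and the quantization_config argument as assumptions):

import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# NF4-quantize just the transformer; there is no pipeline-level equivalent yet,
# so the remaining components still have to be handled separately.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("flux-schnell-nf4/transformer")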

Amazing work!
We should wait a few days for diffusers to sort out quantization support in pipelines.

Also, does Votepurchase-multiple-model support V/Epsilon-Prediction?

Also, does Votepurchase-multiple-model support V/Epsilon-Prediction?

I can't do it alone. I will get a GitHub account soon, but I don't have one right now.
By the way, there are very few changes needed. Specifically, I think it can be done by modifying the following dict.
https://github.com/R3gm/stablepy/blob/main/stablepy/diffusers_vanilla/constants.py#L142
https://huggingface.co/docs/diffusers/api/schedulers/euler_ancestral
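
The change I have in mind is roughly of this shape (illustrative only; the real keys and structure live in the constants.py linked above):

from diffusers import EulerAncestralDiscreteScheduler

# A v-prediction variant is mostly the same scheduler class with
# prediction_type overridden in its config.
SCHEDULER_OPTIONS = {
    "Euler a": (EulerAncestralDiscreteScheduler, {}),
    "Euler a V-Pred": (EulerAncestralDiscreteScheduler, {"prediction_type": "v_prediction"}),
}

scheduler_class, extra_config = SCHEDULER_OPTIONS["Euler a V-Pred"]
scheduler = scheduler_class.from_config(
    {"num_train_timesteps": 1000, "beta_start": 0.00085, "beta_end": 0.012,
     "beta_schedule": "scaled_linear", **extra_config}
)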

I will get a GitHub account

GitHub is generally easy to sign up for; you don't have to learn the "git" stuff, because you probably already know it from HF.

I think it can be done by modifying the following dict.

If you are able to implement it, I can test it out.

I consulted r3gm about the scheduler. I'll be refactoring DiffuseCraftMod today and tomorrow.
GitHub will come after that.

Great!

Any update on eps/V-Pred?
Even if it's a separate GUI?

r3gm is a busy man. Nothing special at the moment, but when he says he'll do it, he'll do it.
Euler AYS will be hard to support in the library if it doesn't make it into the pip version of Diffusers, but v-pred has been around for a long time.
BTW, I got a GitHub account, so I'm debugging stuff.

r3gm added support for v-pred and AYS.
It is still in the testing phase, but it seems to basically work. Note that it may not work in some combinations, though.

Also, HF is super buggy right now.
https://discuss.huggingface.co/t/space-runs-ok-for-several-hours-then-runtime-error/115638

r3gm added support for v-pred and AYS.

I just tested out the update and it's truly amazing!

Note that it may not work in some combinations, though.

Yeah that's expected, hopefully it'll work with more combinations soon.

Also, HF is super buggy right now.

That's weird, I haven't had such problems in a while.
