How to get a transparent background?

#6
by molo322 - opened

I only get a black background.

Thanks for the suggestion. I just updated the code, and the results you obtain from all tabs are now RGBA PNG images.
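For reference, a transparent result is typically produced by attaching the predicted foreground mask as the alpha channel and saving as PNG (formats like WebP previews may not preserve transparency the same way). A minimal sketch with Pillow, using stand-in images in place of the real photo and BiRefNet's mask:

```python
from io import BytesIO
from PIL import Image

# Stand-ins: a solid image for the input photo, and a grayscale mask
# where 255 = foreground (kept) and 0 = background (made transparent).
image = Image.new("RGB", (64, 64), "red")
mask = Image.new("L", (64, 64), 0)
mask.paste(255, (16, 16, 48, 48))  # pretend the center is the subject

# Attach the mask as the alpha channel, then save as PNG to keep it.
rgba = image.copy()
rgba.putalpha(mask)
buf = BytesIO()
rgba.save(buf, format="PNG")

reopened = Image.open(BytesIO(buf.getvalue()))
print(reopened.mode)  # RGBA
```

Opening the saved file again shows the alpha channel survived the round trip: corner pixels are fully transparent, center pixels fully opaque.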

Hi, I just ran app_local.py locally and found that only the third tab (batch processing) downloads results in PNG format; the other two tabs download WebP files, which don't appear to have transparent backgrounds. Please take a look when you have time. Everything else works great, and the batch processing is really handy. Thumbs up!

I tried modifying your code to force the image output format of the first two tabs to PNG in the outputs. It works for me, so I'm sharing it with you.

tab_image = gr.Interface(
    fn=predict,
    inputs=[
        gr.Image(label='Upload an image'),
        gr.Textbox(lines=1, placeholder="Type the resolution (WxH) you want, e.g., 1024x1024. Higher resolutions can be much slower for inference.", label="Resolution"),
        gr.Radio(list(usage_to_weights_file.keys()), value='General', label="Weights", info="Choose the weights you want.")
    ],
    outputs=gr.Image(label="BiRefNet's prediction", type="pil", format="png"),  # add format="png"
    examples=examples,
    api_name="image",
    description=descriptions,
)

tab_text = gr.Interface(
    fn=predict,
    inputs=[
        gr.Textbox(label="Paste an image URL"),
        gr.Textbox(lines=1, placeholder="Type the resolution (WxH) you want, e.g., 1024x1024. Higher resolutions can be much slower for inference.", label="Resolution"),
        gr.Radio(list(usage_to_weights_file.keys()), value='General', label="Weights", info="Choose the weights you want.")
    ],
    outputs=gr.Image(label="BiRefNet's prediction", type="pil", format="png"),  # add format="png"
    examples=examples_url,
    api_name="text",
    description=descriptions+'\nTab-URL is partially modified from https://huggingface.co/spaces/not-lain/background-removal, thanks to this great work!',
)

Hello, thanks a lot! I'll make the change right now. So it's just the two gr.Image(..., format='png') spots, right?

One more thing: the online demo uses ImageSlider, and it seems it has no option to specify the format. Someone has submitted a PR for this, but it hasn't been merged yet.

Yes.

OK, then I can wait for that. I usually run these projects locally so I can use them heavily; the online version on Huggingface seems to have a daily usage limit, sigh.

I see, I didn't know that. For local inference, you could actually use tutorials/BiRefNet_inference.ipynb in my GitHub repo; that may be quite convenient too.

I make tutorials for AI projects, and average users prefer a WebUI; they aren't comfortable with notebooks, haha. Also, I just saw that you've already updated the Huggingface code. Thumbs up!!

By the way, is there a big difference between the specialized weights and the General weights? When I use the Portrait weights, I get an error saying HOME is not set; the General weights run fine.

D:\Miniconda\envs\BiRefNet\lib\site-packages\huggingface_hub\file_download.py:157: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in G:\AI_Models\hub\models--zhengpeng7--BiRefNet-portrait. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
BiRefNet_config.py: 100%|█████████████████████████████████████████████████████████████████████| 298/298 [00:00<?, ?B/s]
A new version of the following files was downloaded from https://huggingface.co/zhengpeng7/BiRefNet-portrait:
• BiRefNet_config.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
birefnet.py: 91.3kB [00:00, 687kB/s]
A new version of the following files was downloaded from https://huggingface.co/zhengpeng7/BiRefNet-portrait:
• birefnet.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Traceback (most recent call last):
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\gradio\queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\gradio\blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\gradio\blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\anyio\_backends\_asyncio.py", line 2357, in run_sync_in_worker_thread
    return await future
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\anyio\_backends\_asyncio.py", line 864, in run
    result = context.run(func, *args)
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\gradio\utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "G:\AI_TS\BiRefNet_demo\app_local.py", line 98, in predict
    birefnet = AutoModelForImageSegmentation.from_pretrained(_weights_file, trust_remote_code=True)
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\transformers\models\auto\auto_factory.py", line 551, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\transformers\dynamic_module_utils.py", line 514, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module)
  File "D:\Miniconda\envs\BiRefNet\lib\site-packages\transformers\dynamic_module_utils.py", line 212, in get_class_in_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "G:\AI_Models\modules\transformers_modules\zhengpeng7\BiRefNet-portrait\55d494eda797e149e7656e7f20a1c7e817ee0934\birefnet.py", line 1392, in <module>
    config = Config()
  File "G:\AI_Models\modules\transformers_modules\zhengpeng7\BiRefNet-portrait\55d494eda797e149e7656e7f20a1c7e817ee0934\birefnet.py", line 10, in __init__
    self.sys_home_dir = os.environ['HOME']  # Make up your file system as: SYS_HOME_DIR/codes/dis/BiRefNet, SYS_HOME_DIR/datasets/dis/xx, SYS_HOME_DIR/weights/xx
  File "D:\Miniconda\envs\BiRefNet\lib\os.py", line 679, in __getitem__
    raise KeyError(key) from None
KeyError: 'HOME'
Using weights: zhengpeng7/BiRefNet.

Are you on Windows? Then the environment indeed has no HOME variable. I'll adapt the other weights too, or you can just set HOME to any path.
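A minimal sketch of that workaround, set before the weights are loaded (the fallback path here is just a placeholder; any existing directory works):

```python
import os

# Windows typically defines USERPROFILE but not HOME, so the remote config's
# os.environ['HOME'] lookup raises KeyError. Point HOME at any existing
# directory before calling from_pretrained(); setdefault leaves it alone
# on systems where HOME is already set.
os.environ.setdefault("HOME", os.environ.get("USERPROFILE", os.getcwd()))
print("HOME =", os.environ["HOME"])
```

After this, loading the Portrait weights should no longer hit `KeyError: 'HOME'`.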

Hi, @walkingwithGod . I've updated that line in the config of all the models. It should be okay now.

Hello @ZhengPeng7
I don't know Chinese so I didn't understand how to get a transparent background.
I'm only getting white backgrounds; how can I get a transparent background? Could you please explain in English?

We talked about using app_local.py to get results locally, where the gradio ImageSlider is not used, and all tabs now work without problems.
If you want to use the online demo here directly, you can use the tab_batch to get results. The first two tabs can only show images in WebP format, which is a limitation of the gradio ImageSlider, as their issue tracker shows.
