Local install?
I have installed the app on Windows in a venv, but I get this error on startup:
Traceback (most recent call last):
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\app.py", line 10, in <module>
    import spaces
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\spaces\__init__.py", line 10, in <module>
    from .gpu.decorator import GPU
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\spaces\gpu\decorator.py", line 18, in <module>
    from .wrappers import regular_function_wrapper
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\spaces\gpu\wrappers.py", line 39, in <module>
    Process = multiprocessing.get_context('fork').Process
  File "C:\Users\Mykee\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 243, in get_context
    return super().get_context(method)
  File "C:\Users\Mykee\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 193, in get_context
    raise ValueError('cannot find context for %r' % method) from None
ValueError: cannot find context for 'fork'
How can I use your app on my PC?
Hi @Mykee, thanks for reporting this. It might be a bug in the spaces library (it tries to use multiprocessing's 'fork' start method, which isn't available on Windows). The library isn't needed to run the Space locally, so I think it will work if you comment out these three lines:
https://huggingface.co/spaces/hysts/BLIP2-with-transformers/blob/1cfb0d6180f3f52f1edc346b3fa90679436270c4/app.py#L10
https://huggingface.co/spaces/hysts/BLIP2-with-transformers/blob/1cfb0d6180f3f52f1edc346b3fa90679436270c4/app.py#L33
https://huggingface.co/spaces/hysts/BLIP2-with-transformers/blob/1cfb0d6180f3f52f1edc346b3fa90679436270c4/app.py#L61
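For reference, here is a rough sketch of how the patched top of app.py could look when running locally. Only the import spaces line is confirmed by your traceback; I'm assuming lines 33 and 61 are the @spaces.GPU decorators, and the second function name and both signatures are just placeholders:

# import spaces                          # line 10: only needed on ZeroGPU hardware

# @spaces.GPU                            # line 33 (assumed)
def generate_caption(image):
    ...

# @spaces.GPU                            # line 61 (assumed)
def answer_question(image, question):    # placeholder name/signature
    ...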
Perfect! It's working now on my PC, thank you for the quick help!
Ok, the app has started, but when I push the "Caption it!" button, I get this:
To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\gradio\queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\gradio\blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\gradio\blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\venv\lib\site-packages\gradio\utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
  File "I:\Stable-Diffusion-Automatic\BLIP2-with-transformers\app.py", line 45, in generate_caption
    inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
NameError: name 'processor' is not defined
It seems that lines 28-30 also had to be modified for this, because 8-bit loading caused problems too, but I'm still testing:
if torch.cuda.is_available():
    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = Blip2ForConditionalGeneration.from_pretrained(MODEL_ID, device_map="auto")
else:
    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = Blip2ForConditionalGeneration.from_pretrained(MODEL_ID)
@Mykee In this Space, the model and processor are loaded only when CUDA is available, as you can see here.
Is CUDA installed in your environment? If not, maybe you can run the model by replacing those lines with the following:
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Blip2ForConditionalGeneration.from_pretrained(MODEL_ID)
I haven't tried to run this Space on CPU, so I'm not sure, but you'll probably need to make a few more changes. For example, I think you need to remove torch.float16 from these lines:
https://huggingface.co/spaces/hysts/BLIP2-with-transformers/blob/1cfb0d6180f3f52f1edc346b3fa90679436270c4/app.py#L45
https://huggingface.co/spaces/hysts/BLIP2-with-transformers/blob/1cfb0d6180f3f52f1edc346b3fa90679436270c4/app.py#L74
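Concretely, line 45 from your traceback would become something like this on CPU (only the .to() call changes; everything else stays the same):

# GPU version (current code): half-precision tensors on the CUDA device
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)

# CPU version: drop torch.float16 so the inputs stay in float32
inputs = processor(images=image, return_tensors="pt").to(device)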
Also, the MODEL_ID is set to T5-XXL by default in this Space, which is very large, so you might want to try a smaller model first, like Salesforce/blip2-opt-2.7b, to see if the code works in your environment.
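If MODEL_ID is a module-level constant in app.py, that should just be a one-line change, something like:

MODEL_ID = "Salesforce/blip2-opt-2.7b"  # smaller checkpoint to test with before switching back to T5-XXL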
Yes, I have a CUDA environment, an RTX 3090 card. It's already loading the XXL model, but I will replace it, thanks!
It's working great with the blip2-opt-2.7b model! Thank you for all the help!