[Tutorial] How to duplicate this space

#5 opened by shenxq
  1. Download the Vicuna weights following our instructions.
  2. Upload them to a private Hugging Face model repository (see the sketch below).
  3. Change the code at change1, change2, and change3 from 'Vision-CAIR/vicuna' to your private Hugging Face model name.
  4. Set a Space secret with NAME: API_TOKEN and VALUE: your user access token.
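
For step 2, a minimal sketch of the upload using huggingface_hub; the repo name your-username/vicuna-weights and the local folder ./vicuna_weights are placeholders, and the token is your user access token, read here from an API_TOKEN environment variable to match the secret name in step 4:

    import os
    from huggingface_hub import HfApi

    # Create a private model repo and upload the merged Vicuna weights into it.
    # "your-username/vicuna-weights" and "./vicuna_weights" are placeholders.
    api = HfApi(token=os.environ["API_TOKEN"])
    api.create_repo("your-username/vicuna-weights", private=True, exist_ok=True)
    api.upload_folder(
        folder_path="./vicuna_weights",
        repo_id="your-username/vicuna-weights",
    )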

Where do I get the token?

You can generate your own user access tokens at https://huggingface.co/settings/tokens
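
Once generated, you can check that the token works with a standard huggingface_hub call (the token string below is a placeholder):

    from huggingface_hub import HfApi

    # Prints your account details if the token is valid, raises an error otherwise.
    print(HfApi().whoami(token="hf_xxxxxxxxxxxxxxxx"))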

Hi,

So I applied the change in all three places:

    self.llama_tokenizer = LlamaTokenizer.from_pretrained('/henkvaness/minigpt4', use_fast=False, use_auth_token=os.environ["(KEY)"])

but I got

    --> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.12.1" "protobuf<4" "click<8.1"
    Defaulting to user installation because normal site-packages is not writeable
    ERROR: Could not find a version that satisfies the requirement pip==22.3.1 (from versions: none)
    ERROR: No matching distribution found for pip==22.3.1

    --> ERROR: process "/bin/sh -c pip install --no-cache-dir pip==${PIP_VERSION} && pip install --no-cache-dir datasets "huggingface-hub>=0.12.1" "protobuf<4" "click<8.1"" did not complete successfully: exit code: 1

What is in your requirements.txt file? Did you follow ours? https://huggingface.co/spaces/Vision-CAIR/minigpt4/blob/main/requirements.txt

The instructions don't let us download the model ready to use. How can we do this?

Hi,

I have uploaded my model, but I am confused about

"Change the code at change1, change2, and change3 from 'Vision-CAIR/vicuna' to your private Hugging Face model name."

Where do I make the changes?

If you click change1, change2, or change3, it will jump to the corresponding line of code. Then change 'Vision-CAIR/vicuna' in that line to your private model name, where you have already uploaded the Vicuna weights.
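
For illustration, a hedged sketch of one such edited line; 'your-username/vicuna-weights' is a placeholder for your private repo, and API_TOKEN is the Space secret from step 4 (in the MiniGPT-4 code the call is assigned to self.llama_tokenizer; shown standalone here):

    import os
    from transformers import LlamaTokenizer

    # Originally this line loads 'Vision-CAIR/vicuna'; point it at your private
    # repo instead and authenticate with the token stored as a Space secret.
    llama_tokenizer = LlamaTokenizer.from_pretrained(
        "your-username/vicuna-weights",
        use_fast=False,
        use_auth_token=os.environ["API_TOKEN"],
    )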


Getting this error now.

RuntimeError:
CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
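
Not a fix, but a quick sanity check worth running before the bitsandbytes diagnostic, just to confirm PyTorch itself can see the GPU (assumes only that torch is installed):

    import torch

    # If this prints False or None, the problem is the CUDA runtime / GPU
    # assignment of the Space rather than bitsandbytes itself.
    print("CUDA available:", torch.cuda.is_available())
    print("torch CUDA version:", torch.version.cuda)
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))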

Also experiencing this error. Any updates?

It seems like there is a 15,000-call-per-month limit on the API. Will duplicating this Space to use with my paid private GPU increase (or possibly remove) this limit?

This is my first time using Hugging Face Spaces, so I'm unclear whether the limit is specific to the MiniGPT-4 demo or a general limitation across all Spaces.
