
Cannot load the quantized version of the model onto a 15 GB VRAM GPU?

#3
by perceptron-743 - opened

I just had a little query about this model. Is it not possible to load this model into the 15 GB of VRAM on Google Colab? I have been trying to load it with the following quantization config:

import torch
from transformers import BitsAndBytesConfig

# defining the 4-bit NF4 quantization config
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

But it gives a CUDA out-of-memory error every single time.

If it can't be loaded, then I'm confused as to why not, since Mistral-7B loads just fine and its safetensors take up a lot more memory than this model's. By that comparison I feel this model should load too, but maybe it doesn't because of something I have done wrong. I would really appreciate it if you could resolve my query.
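For reference, here is the rough arithmetic I had in mind (a back-of-the-envelope sketch only; the ~7B parameter count is approximate, and real usage also adds activations, the state cache and bitsandbytes overhead):

# rough estimate of weight memory only (approximate figures, not measured values)
params = 7e9                  # roughly 7B parameters
gb_4bit = params * 0.5 / 1e9  # 4-bit weights ~= 0.5 bytes per parameter -> ~3.5 GB
gb_fp16 = params * 2 / 1e9    # fp16 weights  =  2 bytes per parameter  -> ~14 GB
print(gb_4bit, gb_fp16)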

Here is the code I have used to load and quantize it.


import time
import pprint
from typing import Dict, List

import tqdm
from transformers import pipeline, BitsAndBytesConfig


def generate_result(summary_model: str, config: BitsAndBytesConfig,
                    document_ids: Dict[str, str], device="cpu") -> List[Dict[str, str]]:
    # summarization model
    summarizer = pipeline("summarization", model=summary_model,
                          device=device, quantization_config=config)

    # summarize each document and collect the results
    docs = []
    for document_name, document_id in tqdm.tqdm(document_ids.items()):
        print("-" * 100)
        print("Document Name: %s" % document_name)

        # timing the duration
        begin = time.time()
        texts = get_pdf_by_code(document_id)  # my helper that fetches the PDF text
        summary = summarizer(texts, max_length=300, truncation=True, do_sample=False)

        summary = " ".join(item["summary_text"] for item in summary)
        pprint.pprint("-" * 100)
        duration = time.time() - begin

        docs.append({
            "document_name": document_name,
            "summary": summary,
            "seconds": duration,
            "model_name": summary_model,
        })

    return docs

import pandas as pd

# hashcodes: my dict mapping document names to document ids
model_checkpoint = "tiiuae/falcon-mamba-7b-instruct-4bit"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
output = generate_result(summary_model=model_checkpoint,
                         config=nf4_config, document_ids=hashcodes, device=device)
df = pd.DataFrame(output)
Technology Innovation Institute org
edited Aug 22

Hi @perceptron-743
Thanks for raising this. To load this model you don't need to pass a quantization config; you can load it directly, without any extra argument to from_pretrained. There may be a weird interaction when you pass a quantization config to an already-quantized model.
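Something like this minimal sketch is what I mean (device_map="auto" is just a suggestion for placing the weights; the checkpoint name is the one from your snippet):

from transformers import AutoModelForCausalLM, AutoTokenizer

# the -4bit checkpoint already carries its bitsandbytes quantization config,
# so from_pretrained needs no extra quantization arguments
model_id = "tiiuae/falcon-mamba-7b-instruct-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")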

Technology Innovation Institute org

Also, you will need to pass the quantization config through model_kwargs, as passing it via quantization_config in the pipeline init will be ignored by the pipeline.

    summarizer = pipeline("summarization", model=summary_model, model_kwargs={"quantization_config": config})
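(In other words, with the already-quantized tiiuae/falcon-mamba-7b-instruct-4bit checkpoint you can simply drop the config argument from your pipeline call; model_kwargs only matters when you actually want to pass a quantization config, e.g. for a non-quantized checkpoint.)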
