

This is a 2-bit quantization of @pankajmathur's orca_mini_v3_70b using QuIP# (https://cornell-relaxml.github.io/quip-sharp/) with a Hessian context length of 4k.

Inference with the model is somewhat slow, but this should be the best 70B model you can run on a single RTX 3090 without offloading.
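As a rough back-of-the-envelope estimate (mine, not from the QuIP# authors): 70B parameters at 2 bits each come to about 17.5 GB of weights, which leaves headroom for activations and KV cache within the 24 GB of a 3090.

```python
# Rough VRAM estimate for the 2-bit weights. This ignores activations,
# KV cache, and codebook overhead, so treat it as a lower bound.
params = 70e9          # parameter count of the base 70B model
bits_per_param = 2     # QuIP# 2-bit quantization
weight_gb = params * bits_per_param / 8 / 1e9
print(f"~{weight_gb:.1f} GB of weights")  # ~17.5 GB
```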

Prompt Format:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me about Orcas.

### Assistant:
```
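For scripted use, the prompt can be assembled like this. A minimal sketch: the template string follows the format above, while the function name and defaults are just illustrative.

```python
# Build a prompt in the orca_mini_v3 format shown above.
SYSTEM = ("You are an AI assistant that follows instruction extremely well. "
          "Help as much as you can.")

def build_prompt(user_message: str, system: str = SYSTEM) -> str:
    """Wrap a user message in the ### System / ### User / ### Assistant template."""
    return (f"### System:\n{system}\n\n"
            f"### User:\n{user_message}\n\n"
            f"### Assistant:\n")

print(build_prompt("Tell me about Orcas."))
```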

I have included the QuIP# library I used in this repo. The model works in the widely used text-generation-webui; for installation, I suggest the following steps:

  1. Download the quip folder from this repo and place it inside the `repositories` folder of text-generation-webui.
  2. Install the requirements of QuIP#.
  3. Compile and install the quiptools CUDA library:
     ```sh
     pip install fast-hadamard-transform glog==0.3.1 primefac==2.0.12
     cd repositories/quip-sharp/quiptools
     python setup.py install --force
     ```
  4. Reinstall the requirements of text-generation-webui.
  5. Load the model with the QuIP# integration of text-generation-webui (a quick import sanity check follows this list).
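Before loading the model, an import check can confirm the compiled pieces are in place. This is a minimal sketch: the module names (`fast_hadamard_transform`, `quiptools_cuda`) are my assumptions based on the pip package and the quiptools build, not names confirmed by this repo.

```python
# Sanity check after installation. Module names are assumptions:
# fast_hadamard_transform should come from the pip package above, and
# quiptools_cuda from the extension built by setup.py install.
import torch
import fast_hadamard_transform  # noqa: F401
import quiptools_cuda           # noqa: F401

assert torch.cuda.is_available(), "QuIP# inference needs a CUDA GPU"
print("QuIP# dependencies look importable")
```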

You can also use the library from this repo in your own scripts. After installing the library, run the following from within the QuIP# folder:

```sh
python interactive_gen.py --hf_path path_to_the_2bitmodel --max_length 500 text
```
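For non-interactive use, one option is to drive that CLI from Python. A hedged sketch: it assumes `interactive_gen.py` reads prompts from standard input, which matches its interactive design but is not guaranteed, and `path_to_the_2bitmodel` remains a placeholder.

```python
# Hedged sketch: drive the interactive CLI from a script.
# Assumes interactive_gen.py reads prompts on stdin; adjust if it
# uses a different input mechanism.
import subprocess

prompt = "Tell me about Orcas."
result = subprocess.run(
    ["python", "interactive_gen.py",
     "--hf_path", "path_to_the_2bitmodel",  # placeholder, as above
     "--max_length", "500"],
    input=prompt, capture_output=True, text=True,
)
print(result.stdout)
```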