This is a 2-bit quantization of @migtissera's Tess-M-34b-v1.4 using QuIP# (https://cornell-relaxml.github.io/quip-sharp/) with a Hessian context length of 8k.
"Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-M-v1.4 was trained on the Yi-34B-200K base."
Perplexity on the dev set as reported by QuIP# was slightly below 7, compared to slightly below 6 for the original model. Inference with the model is somewhat slow, but thanks to the long context length it should be one of the best-performing few-shot models for consumer and data-science GPUs, especially when the inputs are long and the answers relatively short.
Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
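Assembling a prompt in this format can be sketched with a small helper (the function name and the default system context are illustrative, not part of the model card):

```python
def build_prompt(user_message: str,
                 system_context: str = "You are a helpful assistant.") -> str:
    """Assemble a prompt in the SYSTEM/USER/ASSISTANT format expected by Tess."""
    return (
        f"SYSTEM: {system_context}\n"
        f"USER: {user_message}\n"
        f"ASSISTANT:"
    )

prompt = build_prompt("Summarize the QuIP# method in one sentence.")
print(prompt)
```

Note that the prompt ends directly after `ASSISTANT:`, leaving the model to produce the completion.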
I was able to use this model in the widely known textgen-webui. For installation I suggest the following steps:
- Put the current quip# library folder into the `repositories` folder of the textgen-webui installation.
- Install the requirements of quip#.
- Compile and install the quiptools CUDA library:

```
pip install fast-hadamard-transform glog==0.3.1 primefac==2.0.12
cd repositories/quip-sharp/quiptools
python setup.py install --force
```

- Reinstall the requirements of textgen-webui.
- Load the model with the quip# integration of textgen-webui.
You can also use the library of this repo in your own scripts. Within the quip# folder, after installing the library, run:

```
python interactive_gen.py --hf_path path_to_the_2bitmodel --max_length 500
```
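If you prefer launching the generation script from Python rather than the shell, a thin wrapper can be sketched as follows. The script name and flags are taken from the command above; the helper function itself is illustrative:

```python
import shlex
import subprocess  # used when actually launching the script

def interactive_gen_cmd(hf_path: str, max_length: int = 500) -> list:
    """Build the command line for quip#'s interactive_gen.py script."""
    return [
        "python", "interactive_gen.py",
        "--hf_path", hf_path,
        "--max_length", str(max_length),
    ]

cmd = interactive_gen_cmd("path_to_the_2bitmodel")
print(shlex.join(cmd))  # inspect the command before running it
# To launch for real (from inside the quip-sharp folder):
# subprocess.run(cmd, check=True)
```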
License:

The Yi series models are fully open for academic research and free for commercial use, with permission obtained via application. All usage must adhere to the Model License Agreement 2.0. To apply for the official commercial license, please contact us ([email protected]).