---
license: llama2
language:
- en
---
This is a 3bpw ExLlamaV2 quantization of [jebcarter/Psyfighter-13B](https://huggingface.co/jebcarter/Psyfighter-13B), using [PIPPA](https://huggingface.co/datasets/jasonkstevens/pippa-llama2-chat/blob/refs%2Fconvert%2Fparquet/default/train/0000.parquet) as the calibration dataset. (I might reupload this, or add a new branch, with a version that uses the default calibration dataset instead.)
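
To run the quant locally, loading it with the exllamav2 Python API looks roughly like the sketch below. This is a minimal example, not an official loader script; the model directory name and the sampling values are placeholders.

```python
# Minimal sketch: load a 3bpw EXL2 quant with exllamav2 and generate a completion.
# "Psyfighter-13B-3bpw-exl2" is a placeholder for the downloaded model directory.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Psyfighter-13B-3bpw-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache lazily for autosplit loading
model.load_autosplit(cache)                # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                 # example sampling values, not a recommendation
settings.top_p = 0.9

prompt = "### Instruction:\nWrite a short scene set on a night train.\n\n### Response:\n"
print(generator.generate_simple(prompt, settings, 200))
```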
Alpaca and ChatML prompt templates seem to work fine with this model.
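
For reference, the two layouts look like this, written here as Python format strings; the wrapper text is the standard wording for each template, and the filled-in instruction is just an example.

```python
# Standard Alpaca and ChatML prompt layouts, expressed as Python format strings.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

CHATML_TEMPLATE = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{message}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Summarize the plot of Moby-Dick in two sentences.")
```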
# Original Model card
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 1.0
  - model: chaoyi-wu/MedLLaMA_13B
    parameters:
      weight: 0.01
  - model: Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
    parameters:
      weight: 0.02
dtype: float16
```
This model was made possible thanks to the compute provided by the KoboldAI community.
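
For anyone who wants to reproduce the merge above, a config like that is normally passed to mergekit's `mergekit-yaml` entry point. A minimal sketch, with hypothetical names for the config file and the output directory:

```sh
pip install mergekit
# psyfighter.yml holds the YAML config shown above; ./Psyfighter-13B is the output path.
mergekit-yaml psyfighter.yml ./Psyfighter-13B --cuda
```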