---
license: apache-2.0
tags:
  - openchat
  - mistral
  - C-RLFT
datasets:
  - openchat/openchat_sharegpt4_dataset
  - imone/OpenOrca_FLAN
  - LDJnr/LessWrong-Amplify-Instruct
  - LDJnr/Pure-Dove
  - LDJnr/Verified-Camel
  - tiedong/goat
  - glaiveai/glaive-code-assistant
  - meta-math/MetaMathQA
  - OpenAssistant/oasst_top1_2023-08-25
  - TIGER-Lab/MathInstruct
library_name: transformers
pipeline_tag: text-generation
quantized_by: bartowski
---

## Exllama v2 Quantizations of openchat-3.5-1210 at 4.0 bits per weight

Using turboderp's ExLlamaV2 v0.0.10 for quantization.

Conversion was done using VMWareOpenInstruct.parquet as the calibration dataset.
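For reference, a quant like this is produced with ExLlamaV2's `convert.py`. A minimal sketch, assuming the core flags of the v0.0.x script (`-i`, `-o`, `-c`, `-b`); all paths below are placeholders:

```shell
# Sketch only: -i fp16 source model, -o working/output directory,
# -c calibration parquet, -b target bits per weight.
python convert.py \
    -i ./openchat-3.5-1210 \
    -o ./openchat-3.5-1210-exl2-4_0 \
    -c ./VMWareOpenInstruct.parquet \
    -b 4.0
```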

Original model: https://huggingface.co/openchat/openchat-3.5-1210
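Once a branch is downloaded (see the instructions below), the quant can be loaded directly with ExLlamaV2's Python API. A minimal sketch, assuming the examples-style API of ExLlamaV2 v0.0.x and OpenChat's "GPT4 Correct" prompt format; the local path is a placeholder:

```python
# Minimal sketch, assuming ExLlamaV2 v0.0.x's example API; the model path is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "openchat-3.5-1210-exl2"   # directory with the downloaded 4.0 bpw branch
config.prepare()

model = ExLlamaV2(config)
model.load()

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# OpenChat 3.5 uses the "GPT4 Correct" turn format.
prompt = "GPT4 Correct User: What is C-RLFT?<|end_of_turn|>GPT4 Correct Assistant:"
print(generator.generate_simple(prompt, settings, 256))
```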

## Download instructions

With git:

```shell
git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/openchat-3.5-1210-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download from a different branch, add the --revision parameter:

```shell
mkdir openchat-3.5-1210-exl2
huggingface-cli download bartowski/openchat-3.5-1210-exl2 --revision 4_0 --local-dir openchat-3.5-1210-exl2 --local-dir-use-symlinks False
```
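The same download can also be scripted from Python via `huggingface_hub.snapshot_download`; the local directory name here is just an example:

```python
# Equivalent download from Python; the local directory name is just an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/openchat-3.5-1210-exl2",
    revision="4_0",                       # branch to pull, matching the CLI example above
    local_dir="openchat-3.5-1210-exl2",
    local_dir_use_symlinks=False,
)
```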