---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# Midnight-Miqu-70B-v1.5 - EXL2 5.0bpw
This is a 5.0bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5).
Details about the model and the merge can be found on the model page linked above.
I have not extensively tested this quant beyond confirming that it loads and that I could chat with it.
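As a rough guide to hardware requirements, the weight footprint of a quant can be estimated from parameter count and bits per weight. The sketch below is illustrative only (it assumes ~70B weights and ignores the KV cache, activations, and per-layer overhead, so the actual file size and VRAM use will be somewhat higher):

```python
def quant_size_gib(n_params: float, bpw: float) -> float:
    """Approximate weight size in GiB for n_params weights stored at bpw bits per weight."""
    return n_params * bpw / 8 / 1024**3

# ~70B parameters at 5.0 bits per weight
print(f"{quant_size_gib(70e9, 5.0):.1f} GiB")  # roughly 40.7 GiB for the weights alone
```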
## Quant Details
This is the script used to produce the quant.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Define variables
MODEL_DIR="models/Midnight-Miqu-70B-v1.5"
OUTPUT_DIR="exl2_midnightv15-70b"
MEASUREMENT_FILE="measurements/midnight70b-v15.json"
BIT_PRECISION=5.0
CONVERTED_FOLDER="models/Midnight-Miqu-70B-v1.5_exl2_5.0bpw"

# Create directories (-p: no error if they already exist)
mkdir -p "$OUTPUT_DIR" "$CONVERTED_FOLDER"

# Pass 1: measure per-layer quantization error and save it to the measurement file
python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"
# Pass 2: quantize to the target bitrate using the saved measurements
python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
```