---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

# Midnight-Miqu-70B-v1.0 - EXL2 5.0bpw

This is a 5.0bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).

Details about the model and the merge can be found on the model page linked above.

I have not tested this quant extensively beyond ensuring that I could load it and chat with it.

## Model Loading

Below are the settings I used to run this model on a dual RTX 3090 Linux server.

![image/jpg](Midnight-Miqu-70B-exl2-5-textgen.jpg)


I have not tested inference beyond a couple thousand tokens. If the output degrades past 8k tokens, consider an 8192 context without cache_8bit set.
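
For a command-line launch roughly equivalent to the settings in the screenshot, something like the sketch below should work. This is an illustration, not a verified command: the flag names (`--loader`, `--max_seq_len`, `--gpu-split`, `--cache_8bit`) are text-generation-webui options that may differ across versions, and the `--gpu-split` values are a guess for two 24 GB cards.

```bash
# Illustrative text-generation-webui launch for a dual-3090 box; verify
# the exact flags with `python server.py --help` for your install.
python server.py \
  --model Midnight-Miqu-70B_exl2_5.0bpw \
  --loader exllamav2 \
  --max_seq_len 8192 \
  --gpu-split 20,24 \
  --cache_8bit   # drop this if output degrades past 8k, per the note above
```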

## Quant Details

This is the script used for quantization.

```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Define variables
MODEL_DIR="models/sophosympatheia_Midnight-Miqu-70B-v1.0"
OUTPUT_DIR="exl2_midnight70b"
MEASUREMENT_FILE="measurements/midnight70b.json"

BIT_PRECISION=5.0
CONVERTED_FOLDER="models/Midnight-Miqu-70B_exl2_5.0bpw"

# Create directories (-p so the script can be re-run, and so the
# measurements directory exists before convert.py writes to it)
mkdir -p "$OUTPUT_DIR"
mkdir -p "$(dirname "$MEASUREMENT_FILE")"
mkdir -p "$CONVERTED_FOLDER"

# Pass 1: measure per-layer quantization error, saving to $MEASUREMENT_FILE
python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"

# Pass 2: quantize to the target bitrate using the saved measurements
python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b $BIT_PRECISION -cf "$CONVERTED_FOLDER"
```
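
The two `convert.py` invocations reflect exllamav2's two-pass workflow: the first pass (with `-om`) measures per-layer quantization error and saves the results to the measurement file, and the second pass (with `-m` and `-b`) uses those measurements to allocate the target 5.0 bits per weight across layers and writes the finished quant to `$CONVERTED_FOLDER`. A nice property of this split is that the saved measurement file can be reused to produce other bitrates of the same model without repeating the measurement pass.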