---
base_model: google/gemma-2-9b-it
language:
- multilingual
datasets:
  - TFMC/imatrix-dataset-for-japanese-llm
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- nlp
- code
quantized_by: ymcki
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

Original model: https://huggingface.co/google/gemma-2-9b-it

## Description

The purpose of this repository is to see whether a Japanese-specific
imatrix can improve the performance of a model that is not optimized for Japanese.

It also provides the Q4_0_8_8, Q4_0_4_8 and Q4_0_4_4 ggufs for edge
devices, which bartowski did not otherwise make. These models should
also run well on edge devices with 16GB RAM.

## Prompt format

```
<start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model

```

Note that this model does not support a system prompt.
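
For example, the prompt format above can be passed directly to llama.cpp's `llama-cli`. This is a minimal sketch only; the model filename is one of the quants listed below and `{prompt}` is whatever you want to ask:

```
# -e makes llama-cli interpret the \n escapes in the prompt string
./llama-cli -m gemma-2-9b-it-imatrix.Q4_0.gguf -e \
  -p "<start_of_turn>user\n{prompt}<end_of_turn>\n<start_of_turn>model\n" \
  -n 256
```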

## Download a file (not the whole branch) from below:

ELYZA-tasks-100 is a fairly standard benchmark for Japanese LLMs.
The perfect score is 5.00. For reference, bartowski's gemma-2-27b-it.Q6_K.gguf scores 4.04.

| Filename | Quant type | File Size | Split | ELYZA-tasks-100 | Nvidia 3090 | Description |
| -------- | ---------- | --------- | ----- | --------------- | ----------- | ----------- |
| [gemma-2-9b-it.f16.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it.f16.gguf) | f16 | 18.49GB | false | 3.75 | 31.9t/s | Full F16 weights. |
| [gemma-2-9b-it.Q8_0.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it.Q8_0.gguf) | Q8_0 | 9.83GB | false | 3.66 | 56.1t/s | Extremely high quality, *recommended for edge devices with 16GB RAM*. |
| [gemma-2-9b-it-imatrix.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it-imatrix.Q4_0.gguf) | Q4_0 | 5.44GB | false | 3.76 | 80.6t/s | Good quality, *recommended for edge devices with 8GB RAM*. |
| [gemma-2-9b-it-imatrix.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it-imatrix.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | false | TBD | TBD | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-9b-it-imatrix.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it-imatrix.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | false | TBD | TBD | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-9b-it-imatrix.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it-imatrix.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | false | TBD | TBD | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-9b-it.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it.Q4_0.gguf) | Q4_0 | 5.44GB | false | 3.64 | 65.1t/s | Good quality, *recommended for edge devices with 8GB RAM*. |
| [gemma-2-9b-it.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | false | TBD | TBD | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-9b-it.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | false | TBD | TBD | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-9b-it.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | false | TBD | TBD | Good quality, *recommended for edge devices with <8GB RAM*. |

## How to check i8mm and sve support for ARM devices

ARM i8mm support is necessary to take advantage of the Q4_0_4_8 gguf. All ARM architectures >= ARMv8.6-A support i8mm.

ARM SVE support is necessary to take advantage of the Q4_0_8_8 gguf. SVE is an optional feature starting from ARMv8.2-A, but the majority of ARM chips do not implement it.

For ARM devices with neither, it is recommended to use Q4_0_4_4.

With the appropriate support, inference speed should be faster in the order Q4_0_8_8 > Q4_0_4_8 > Q4_0_4_4 > Q4_0, without much effect on the quality of the responses.

Here is a [list](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) of ARM SoCs and the instructions they support. Apparently, it is only a partial list, so it is better to check for i8mm and SVE support yourself.

For Apple devices, 

```
sysctl hw
```
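
For example, on Apple Silicon the individual feature flags can be listed directly. This is a minimal sketch; a value of 1 means the feature is supported, while 0 or a missing key means it is not:

```
sysctl hw.optional | grep -i -E 'i8mm|sve'
```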

For other ARM devices (i.e. most Android devices),
```
cat /proc/cpuinfo
```
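
For instance, it is usually enough to look at the `Features` line (a minimal sketch; the feature names appear in lowercase, one line per core):

```
grep Features /proc/cpuinfo
# look for "i8mm" and "sve" among the listed flags
```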

There are also Android apps that can display /proc/cpuinfo.

I was told that for Intel/AMD CPU inference, support for AVX2/AVX512 can also improve the performance of Q4_0_8_8. 

On the other hand, Nvidia 3090 inference speed is significantly faster for Q4_0 than for the other ggufs. That means for GPU inference, you are better off using Q4_0.

## Which Q4_0 model to use for ARM devices
| Brand | Series | Model | i8mm | sve | Quant Type |
| ----- | ------ | ----- | ---- | --- | -----------|
| Apple | A | A4 to A14 | No | No | Q4_0_4_4 |
| Apple | A | A15 to A18 | Yes | No | Q4_0_4_8 |
| Apple | M | M1 | No | No | Q4_0_4_4 |
| Apple | M | M2/M3/M4 | Yes | No | Q4_0_4_8 |
| Google | Tensor | G1,G2 | No | No | Q4_0_4_4 |
| Google | Tensor | G3,G4 | Yes | Yes | Q4_0_8_8 |
| Samsung | Exynos | 2200,2400 | Yes | Yes | Q4_0_8_8 |
| Mediatek | Dimensity | 9000 | Yes | Yes | Q4_0_8_8 |
| Mediatek | Dimensity | 9300 | Yes | No | Q4_0_4_8 |
| Qualcomm | Snapdragon | 8 Gen 1 | Yes | Yes | Q4_0_8_8 |
| Qualcomm | Snapdragon | 8 Gen 2,8 Gen 3,X Elite | Yes | No | Q4_0_4_8 |

## imatrix quantization

According to this [blog](https://sc-bakushu.hatenablog.com/entry/2024/04/20/050213), adding an imatrix to low-bit quants can significantly improve performance. The best dataset for Japanese is [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). Therefore, I also created imatrix versions of the different Q4_0 quants.

Based on my benchmarking results, the Japanese-specific imatrix does seem to improve the performance of this non-Japanese-optimized model (the imatrix Q4_0 scores 3.76 versus 3.64 for the plain Q4_0).

## Convert safetensors to f16 gguf

Make sure you have llama.cpp git cloned:
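
A minimal setup sketch (assuming Python 3 and pip are available; the clone location is up to you):

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt   # dependencies for the conversion scripts
```

Then run the conversion script: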

```
python3 convert_hf_to_gguf.py gemma-2-9b-it/ --outfile gemma-2-9b-it.f16.gguf --outtype f16
```

## Convert f16 gguf to Q8_0 gguf without imatrix
Make sure you have llama.cpp compiled:
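
A typical CMake build sketch (optional flags such as CUDA support are omitted here):

```
cmake -B build
cmake --build build --config Release
# the quantization tools end up under build/bin/
```

Then quantize: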
```
./llama-quantize gemma-2-9b-it.f16.gguf gemma-2-9b-it.Q8_0.gguf q8_0
```

## Convert f16 gguf to other ggufs with imatrix

First, prepare the imatrix from the f16 gguf and c4_en_ja_imatrix.txt:

```
./llama-imatrix -m gemma-2-9b-it.f16.gguf -f c4_en_ja_imatrix.txt -o gemma-2-9b-it.imatrix --chunks 32
```

Then, convert the f16 gguf with the imatrix to create an imatrix gguf:

```
./llama-quantize --imatrix gemma-2-9b-it.imatrix gemma-2-9b-it.f16.gguf gemma-2-9b-it-imatrix.Q4_0_8_8.gguf q4_0_8_8
```

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download ymcki/gemma-2-9b-it-GGUF --include "gemma-2-9b-it.Q8_0.gguf" --local-dir ./
```
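
If you want several quants at once, `--include` also accepts glob patterns (a minimal sketch; adjust the pattern to the files you need):

```
huggingface-cli download ymcki/gemma-2-9b-it-GGUF --include "*Q4_0*.gguf" --local-dir ./
```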

## Credits

Thank you bartowski for providing a README.md to get me started.

Thank you YoutechA320U for the ELYZA-tasks-100 auto evaluation tool.