ymcki committed
Commit 528f049
1 Parent(s): 32a17dc

Upload README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -34,14 +34,14 @@ Run them in [LM Studio](https://lmstudio.ai/)
  | -------- | ---------- | --------- | ----- | --------------- | ----------- |
  | [gemma-2-2b-jpn-it.f16.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.f16.gguf) | f16 | 5.24GB | false | Full F16 weights. |
  | [gemma-2-2b-jpn-it.Q8_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q8_0.gguf) | Q8_0 | 2.78GB | false | Extremely high quality, *recommended*. |
- | [gemma-2-2b-jpn-it-imatrix.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0.gguf) | Q4_0 | 2.78GB | false | Good quality, *recommended for edge device <8GB RAM*. |
- | [gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf) | Q4_0_8_8 | 2.78GB | false | Good quality, *recommended for edge device <8GB RAM*. |
- | [gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf) | Q4_0_4_8 | 2.78GB | false | Good quality, *recommended for edge device <8GB RAM*. |
- | [gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf) | Q4_0_4_4 | 2.78GB | false | Good quality, *recommended for edge device <8GB RAM*. |
- | [gemma-2-2b-jpn-it.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0.gguf) | Q4_0 | 2.78GB | false | Poor quality, *not recommended*. |
- | [gemma-2-2b-jpn-it.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_8_8.gguf) | Q4_0_8_8 | 2.78GB | false | Poor quality, *not recommended*. |
- | [gemma-2-2b-jpn-it.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_8.gguf) | Q4_0_4_8 | 2.78GB | false | Poor quality, *not recommended*. |
- | [gemma-2-2b-jpn-it.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_4.gguf) | Q4_0_4_4 | 2.78GB | false | Poor quality, *not recommended*. |
+ | [gemma-2-2b-jpn-it-imatrix.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0.gguf) | Q4_0 | 1.63GB | false | Good quality, *recommended for edge device <8GB RAM*. |
+ | [gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | false | Good quality, *recommended for edge device <8GB RAM*. |
+ | [gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | false | Good quality, *recommended for edge device <8GB RAM*. |
+ | [gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | false | Good quality, *recommended for edge device <8GB RAM*. |
+ | [gemma-2-2b-jpn-it.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0.gguf) | Q4_0 | 1.63GB | false | Poor quality, *not recommended*. |
+ | [gemma-2-2b-jpn-it.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | false | Poor quality, *not recommended*. |
+ | [gemma-2-2b-jpn-it.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | false | Poor quality, *not recommended*. |
+ | [gemma-2-2b-jpn-it.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | false | Poor quality, *not recommended*. |

  ## How to check i8mm and sve support for ARM devices

@@ -67,8 +67,8 @@ There are also android apps that can display /proc/cpuinfo.
  ## Which Q4_0 model to use for ARM devices
  | Brand | Series | Model | i8mm | sve | Quant Type |
  | ----- | ------ | ----- | ---- | --- | -----------|
- | Qualcomm|Snapdragon | >= 7 Gen 1 | Yes | Yes | Q4_0_8_8 |
- | Qualcomm|Snapdragon | others | No | No | Q4_0_4_4 |
+ | Qualcomm |Snapdragon | >= 7 Gen 1 | Yes | Yes | Q4_0_8_8 |
+ | Qualcomm |Snapdragon | others | No | No | Q4_0_4_4 |
  | Apple | M | M1 | No | No | Q4_0_4_4 |
  | Apple | M | M2/M3/M4 | Yes | No | Q4_0_4_8 |
  | Apple | A | A4 to A14 | No | No | Q4_0_4_4 |
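
The README section changed above checks `/proc/cpuinfo` for the `i8mm` and `sve` feature flags to pick a Q4_0 variant. A minimal sketch of such a check, not part of this commit (the `has_feature` helper name is hypothetical); on ARM Linux/Android the kernel lists supported features in the `Features` lines of `/proc/cpuinfo`:

```shell
#!/bin/sh
# Report whether a CPU feature flag appears in /proc/cpuinfo.
# On non-Linux systems the file is absent, so we treat that as "no".
has_feature() {
  grep -q -w "$1" /proc/cpuinfo 2>/dev/null
}

# Check the two flags the quant table above keys on.
for f in i8mm sve; do
  if has_feature "$f"; then
    echo "$f: yes"
  else
    echo "$f: no"
  fi
done
```

Per the table in the diff, the result would map to a quant roughly as: neither flag → Q4_0_4_4, `i8mm` only → Q4_0_4_8, both `i8mm` and `sve` → Q4_0_8_8.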