## 更新履歴 update history

2024/07/20

llama.cppに不具合[llama : fix pre-tokenization of non-special added tokens #8228](https://github.com/ggerganov/llama.cpp/pull/8228)が見つかり、Gemma2モデルは再変換が必要になり対応しました。HTMLタグの処理などが不正確になっていたとの事です。

A bug was found in llama.cpp ([llama : fix pre-tokenization of non-special added tokens #8228](https://github.com/ggerganov/llama.cpp/pull/8228)), so the Gemma2 models needed to be reconverted, and the files here have been updated accordingly. Reportedly, HTML tags and similar added tokens were not being pre-tokenized correctly.
再変換時に、9b版のみ重要度行列(iMatrix)に日本語データを更に追加しています。

During reconversion, additional Japanese data was added to the importance matrix (iMatrix) for the 9b version only.

gemma-2-9b-itを日本語が多く含まれる重要度行列(iMatrix)を使って量子化したgguf版です。日本語対応能力が多めに保持されている事を期待していますが確かめる事はまだ出来ていません。

This is a quantized GGUF version of gemma-2-9b-it, built with an importance matrix (iMatrix) that contains a large amount of Japanese data. It is expected to retain more of the model's Japanese capability, but this has not yet been verified.
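For reference, iMatrix quantization of this kind follows the standard llama.cpp workflow. Below is a minimal sketch, assuming llama.cpp is built locally; the file names (`calibration_ja.txt`, `imatrix.dat`, the GGUF paths) are placeholders for illustration, not this repository's actual files:

```shell
# 1. Compute an importance matrix from a calibration text
#    (here, one containing plenty of Japanese data).
./llama-imatrix -m gemma-2-9b-it-f16.gguf -f calibration_ja.txt -o imatrix.dat

# 2. Quantize to 4-bit, using the importance matrix to weight
#    which weights are preserved most accurately.
./llama-quantize --imatrix imatrix.dat gemma-2-9b-it-f16.gguf gemma-2-9b-it-Q4_K_M.gguf Q4_K_M
```

The quality of an iMatrix quantization depends heavily on the calibration text, which is why adding more Japanese data during reconversion can help retain Japanese capability.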
## 使い方(How to use.)