JosephusCheung committed
Commit • f557bed • 1 Parent(s): 111768f
Update README.md

README.md CHANGED
tags:
- llama
- llama-2
---

[Probeversion]

This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.

You can use LlamaForCausalLM for model inference, the same as with LLaMA/LLaMA-2 models.

The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.

SPOILER: Further finetuning is in progress; the current version is a work in progress, and some knowledge may be biased and illusory due to the structural changes. It will be updated very, very soon.

PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)

CURRENT MMLU: 53.48

```
stem ACC: 46.40  Humanities ACC: 47.61  other ACC: 61.31  social ACC: 61.78  AVERAGE ACC: 53.48
```

Issue: Compared to the original Qwen-7B-Chat's score of 53.90, the MMLU score dropped slightly (-0.42) due to insufficient realignment.