JosephusCheung committed on
Commit fddc384
1 Parent(s): 69ead0a

Update README.md

Files changed (1)
  1. README.md +2 -12
README.md CHANGED
@@ -9,17 +9,7 @@ tags:
  license: gpl-3.0
  ---
 
- Given the discontinuation of the Qwen model, I will provisionally assign this model the GPL-3.0 license. Note that the weights and tokenizer used in this model diverge from those of the Qwen model. The inference code comes from Meta LLaMA / Hugging Face Transformers. The inclusion of "qwen" in the repository name bears no significance, and any similarity to other entities or concepts is purely coincidental.
-
- Advance notice regarding the deletion of Qwen:
-
- **I remain unaware of the reasons behind Qwen's deletion. Should this repository be found in violation of any terms stipulated by Qwen that necessitate its removal, I earnestly request that you contact me. I pledge to expunge all references to Qwen and to maintain the tokenizer and associated weights as an autonomous model, inherently distinct from Qwen. I will then give this model a new identifier.**
-
- Advance notice regarding the deletion of Tongyi Qianwen (通义千问):
-
- **I do not yet know why Tongyi Qianwen was deleted. If this repository violates any terms set by Tongyi Qianwen that require its removal, I sincerely ask that you contact me. I promise to remove all references to Tongyi Qianwen/Qwen and to maintain the tokenizer and associated weights as an independent model, essentially distinct from Tongyi Qianwen. I will then give this model a new name.**
-
- This is the LLaMAfied replica of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
+ This is the LLaMAfied replica of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (original version before 25.09.2023), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
 
  You can use LlamaForCausalLM for model inference, the same as for LLaMA/LLaMA-2 models (using a GPT2Tokenizer converted from the original tiktoken by [vonjack](https://huggingface.co/vonjack)).
 
@@ -42,7 +32,7 @@ CEval (val) - STEM acc: 45.28 Social Science acc: 66.19 Humanities acc: 58.76 Ot
  Issue: Compared to the original Qwen-7B-Chat, which scores 53.90 in MMLU and 54.18 in CEval (val), our scores dropped slightly [-0.42 in MMLU, -0.05 in CEval (val)] due to insufficient realignment.
 
 
- This is the LLaMAfied version of [通义千问 Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
+ This is the LLaMAfied version of [通义千问 Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (original version before 25.09.2023), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
 
  You can use LlamaForCausalLM for model inference, consistent with LLaMA/LLaMA-2 (using the GPT2Tokenizer converted from the original tiktoken by [vonjack](https://huggingface.co/vonjack)).
 
 
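For reference, here is a minimal inference sketch matching the usage the README describes, using stock Hugging Face Transformers classes. The model id below is an assumption (this page does not state the repository id), so substitute the actual id of this model.

```python
# Minimal sketch: LLaMA-style inference for this LLaMAfied checkpoint.
# MODEL_ID is an assumed placeholder; replace it with this repository's actual id.
from transformers import GPT2Tokenizer, LlamaForCausalLM

MODEL_ID = "JosephusCheung/Qwen-LLaMAfied-7B-Chat"  # assumption, not confirmed by this page

# The tokenizer is a GPT2Tokenizer converted from Qwen's original tiktoken vocabulary.
tokenizer = GPT2Tokenizer.from_pretrained(MODEL_ID)

# The weights load through the stock LLaMA/LLaMA-2 architecture class.
model = LlamaForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer("Hello! Please introduce yourself.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```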