JosephusCheung committed
Commit da2cadc
Parent(s): 5c088fa
Create README.md

README.md ADDED

[WIP]

This is the LLaMAfied version of Qwen/Qwen-7B-Chat, recalibrated to fit the original LLaMA/LLaMA-2 model structure.

You can use LlamaForCausalLM for model inference, the same as with LLaMA/LLaMA-2 models. The tokenizer is unchanged from Qwen, so you still need to allow remote code when loading it, e.g. `AutoTokenizer.from_pretrained(llama_model_path, use_fast=False, trust_remote_code=True)`.
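
For reference, a minimal inference sketch along these lines. The checkpoint path and prompt below are placeholders, not part of this repo; substitute the actual repo ID or a local directory.

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Placeholder path; point this at the actual repo ID or a local checkout.
llama_model_path = "path/to/qwen-llamafied-7b-chat"

# The tokenizer is still Qwen's custom one, so remote code must be allowed.
tokenizer = AutoTokenizer.from_pretrained(
    llama_model_path, use_fast=False, trust_remote_code=True
)

# The weights follow the LLaMA/LLaMA-2 layout, so the stock LLaMA class loads them.
model = LlamaForCausalLM.from_pretrained(llama_model_path)

inputs = tokenizer("Hello, who are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```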

SPOILER: Further finetuning is in progress. The current version is a work in progress, and some of the model's knowledge may be biased or hallucinatory due to the structural changes. It will be updated very, very soon.