codefuse-admin committed
Commit 3b4b391
1 Parent(s): 74b089f

Update README.md

Files changed (1)
  1. README.md +12 -27
README.md CHANGED
@@ -10,11 +10,6 @@ tasks:
 
 [[中文]](#chinese) [[English]](#english)
 
-#### Clone with HTTP
-```bash
-git clone https://www.modelscope.cn/codefuse-ai/CodeFuse-QWen-14B.git
-```
-
 <a id="english"></a>
 
 ## Model Description
@@ -29,9 +24,9 @@ CodeFuse-QWen-14B is a 14B Code-LLM finetuned by QLoRA of multiple code tasks on
 
 🔥🔥 2023-09-27 CodeFuse-StarCoder-15B has been released, achieving a pass@1 (greedy decoding) score of 54.9% on HumanEval, which is a 21% increase compared to StarCoder's 33.6%.
 
-🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B-4bits/summary) of [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary). Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
+🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary). Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
 
-🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary) has achieved 74.4% pass@1 (greedy decoding) on HumanEval, which is the SOTA result among open-sourced LLMs at present.
+🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B) has achieved 74.4% pass@1 (greedy decoding) on HumanEval, which is the SOTA result among open-sourced LLMs at present.
 
 <br>
 
@@ -98,20 +93,17 @@ Bot 2nd round output<|endoftext|>
 ...
 ...
 <s>human
-Human nth round input
+Human n-th round input
 <s>bot
 {Bot output to be generated}<|endoftext|>
 """
 ```
 
-When applying inference, you always make your input string end with "\<s\>bot" to ask the model generating answers.
+When applying inference, always make your input string end with "\<s\>bot" to ask the model to generate answers.
 
 
 ## Quickstart
 
-```bash
-git clone https://www.modelscope.cn/codefuse-ai/CodeFuse-QWen-14B.git
-```
 
 ```bash
 pip install -r requirements.txt
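
For reference, a minimal sketch (not part of this commit) of assembling a prompt in the template above; the helper name and exact newline placement are illustrative assumptions, while the `<s>human` / `<s>bot` / `<|endoftext|>` markers come from the template itself:

```python
# Illustrative sketch: build an inference prompt in the chat format above.
def build_prompt(history, user_input):
    """history: list of (human_text, bot_text) pairs from earlier rounds."""
    prompt = ""
    for human_text, bot_text in history:
        prompt += f"<s>human\n{human_text}\n<s>bot\n{bot_text}<|endoftext|>\n"
    # End the string with "<s>bot" so the model generates the next answer.
    prompt += f"<s>human\n{user_input}\n<s>bot\n"
    return prompt
```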
@@ -119,13 +111,11 @@ pip install -r requirements.txt
 
 ```python
 import torch
-from modelscope import (
+from transformers import (
     AutoTokenizer,
-    AutoModelForCausalLM,
-    snapshot_download
+    AutoModelForCausalLM
 )
-model_dir = snapshot_download('codefuse-ai/CodeFuse-QWen-14B', revision='v1.0.0')
-tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained('codefuse-ai/CodeFuse-QWen-14B', trust_remote_code=True)
 tokenizer.padding_side = "left"
 tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")
 tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")
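
The hunk stops at the tokenizer setup. A hedged sketch of how generation might proceed from here with standard transformers usage; the loading arguments, sample prompt, and decoding settings are assumptions, not part of the commit:

```python
# Illustrative continuation of the quickstart above (assumptions noted inline).
model = AutoModelForCausalLM.from_pretrained(
    'codefuse-ai/CodeFuse-QWen-14B',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to fit a 14B model on GPU
    device_map="auto",           # assumption: let accelerate place the weights
)
prompt = "<s>human\nWrite a quicksort function in Python.\n<s>bot\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```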
@@ -178,9 +168,9 @@ CodeFuse-QWen-14B 是一个通过QLoRA对基座模型QWen-14B进行多代码任
 
 🔥🔥 2023-09-27 The CodeFuse-StarCoder-15B model was open-sourced, reaching 54.9% on HumanEval pass@1 (greedy decoding), a 21% improvement in code capability over StarCoder (HumanEval)
 
-🔥🔥🔥 2023-09-26 The quantized version [CodeFuse-CodeLlama-34B 4bits](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B-4bits/summary) was released; after quantization, the model scores 73.8% on HumanEval pass@1 (greedy decoding).
+🔥🔥🔥 2023-09-26 The quantized version [CodeFuse-CodeLlama-34B 4bits](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) was released; after quantization, the model scores 73.8% on HumanEval pass@1 (greedy decoding).
 
-🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary) was released, reaching 74.4% on HumanEval pass@1 (greedy decoding), the current open-source SOTA.
+🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B) was released, reaching 74.4% on HumanEval pass@1 (greedy decoding), the current open-source SOTA.
 
 <br>
 
@@ -255,9 +245,6 @@ CodeFuse-QWen-14B 是一个通过QLoRA对基座模型QWen-14B进行多代码任
 
 ## Quickstart
 
-```bash
-git clone https://www.modelscope.cn/codefuse-ai/CodeFuse-QWen-14B.git
-```
 
 ```bash
 pip install -r requirements.txt
@@ -265,13 +252,11 @@ pip install -r requirements.txt
 
 ```python
 import torch
-from modelscope import (
+from transformers import (
     AutoTokenizer,
-    AutoModelForCausalLM,
-    snapshot_download
+    AutoModelForCausalLM
 )
-model_dir = snapshot_download('codefuse-ai/CodeFuse-QWen-14B', revision='v1.0.0')
-tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained('codefuse-ai/CodeFuse-QWen-14B', trust_remote_code=True)
 tokenizer.padding_side = "left"
 tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")
 tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")
 