Text Generation
PyTorch
causal-lm
rwkv
BlinkDL committed
Commit a6b6a11
1 Parent(s): 1067d68

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -36,12 +36,12 @@ How to use:
 
 The difference between World & Raven:
 * set pipeline = PIPELINE(model, "rwkv_vocab_v20230424") instead of 20B_tokenizer.json (EXACTLY AS WRITTEN HERE. "rwkv_vocab_v20230424" is included in rwkv 0.7.4+)
-* use Question/Answer or User/AI or Human/Bot prompt for Q&A. **DO NOT USE Bob/Alice or Q/A**
+* use Question/Answer or User/AI or Human/Bot for chat. **DO NOT USE Bob/Alice or Q/A**
 * use **fp32** (will overflow in fp16 at this moment - fixable in future) or bf16 (slight degradation)
 
 NOTE: the new greedy tokenizer (https://github.com/BlinkDL/ChatRWKV/blob/main/tokenizer/rwkv_tokenizer.py) will tokenize '\n\n' as one single token instead of ['\n','\n']
 
-prompt (replace \n\n in xxx to \n):
+QA prompt (replace \n\n in xxx to \n):
 ```
 Instruction: xxx
 Input: xxx
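The greedy-tokenizer note in this diff can be illustrated with a minimal longest-prefix-match sketch. This is NOT the real rwkv_tokenizer.py or the rwkv_vocab_v20230424 vocabulary; the `greedy_tokenize` function and the mini-vocabulary below are hypothetical, chosen only to show why '\n\n' comes out as one token:

```python
# Toy sketch of greedy (longest-match) tokenization.
# The function name and the vocabulary are illustrative assumptions,
# not the actual ChatRWKV implementation or vocab.
def greedy_tokenize(text, vocab):
    """Repeatedly consume the longest vocab entry that prefixes the remaining text."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first, shrinking toward one character.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no vocab entry matches at position {i}")
    return tokens

# Because '\n\n' is itself a vocab entry, the greedy matcher emits it as
# ONE token rather than ['\n', '\n'].
vocab = {"\n", "\n\n", "Instruction", "Input", ":", " ", "x"}
print(greedy_tokenize("Instruction:\n\nInput", vocab))
# -> ['Instruction', ':', '\n\n', 'Input']
```

This is why the diff tells users to replace '\n\n' with '\n' inside the xxx fields: stray double newlines in user text would otherwise collide with the prompt's own segment separator token.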