TheBloke committed
Commit 4788d8a
1 Parent(s): 8d7007e

Upload README.md

Files changed (1):
README.md (+1, -1)
README.md CHANGED
@@ -127,7 +127,7 @@ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6
  For compatibility with latest llama.cpp, please use GGUF files instead.
 
  ```
- ./main -t 10 -ngl 32 -m llongorca-7b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\nYou are a story writing assistant.<|im_end|>\n<|im_start|>user\nWrite a story about llamas<|im_end|>\n<|im_start|>assistant"
+ ./main -t 10 -ngl 32 -m llongorca-7b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
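For illustration, here is a minimal sketch of the updated command with the `{system_message}` and `{prompt}` placeholders filled in. The model filename and flags are taken from the diff above; the system message and prompt values are hypothetical examples, not part of the model card.

```
# Example values substituted for {system_message} and {prompt}.
# Tune -t to your physical core count and -ngl to your GPU, as noted above.
./main -t 10 -ngl 32 -m llongorca-7b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nSummarise the plot of Hamlet in three sentences.<|im_end|>\n<|im_start|>assistant"
```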