### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF and below it, a specific filename to download, such as: WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf.

Then click Download.
### On the command line, with the `huggingface-hub` Python library

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
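The download can also be scripted. A minimal Python sketch using `hf_hub_download`, the `huggingface_hub` helper for fetching a single file from a repo (repo and filename as above):

```python
from huggingface_hub import hf_hub_download

# Fetch one GGUF file from the repo into the current directory.
model_path = hf_hub_download(
    repo_id="TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF",
    filename="WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)  # path to the downloaded file
```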
<details>
<summary>More advanced `huggingface-cli` download usage</summary>

To accelerate downloads on fast connections, install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.

</details>
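From Python, the same acceleration can be enabled by setting the variable before `huggingface_hub` is imported, since the library reads it when it loads. A minimal sketch (`hf_transfer` must already be installed):

```python
import os

# Must be set before huggingface_hub is imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF",
    filename="WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf",
    local_dir=".",
)
```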
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
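The same file can be loaded from Python with the `llama-cpp-python` bindings, whose `Llama` class wraps llama.cpp. A minimal sketch (`n_gpu_layers` plays the role of `-ngl`, and `n_ctx` of `-c`):

```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if you have no GPU acceleration.
llm = Llama(
    model_path="WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

# Fill the model's prompt template with a concrete request.
output = llm(
    "You are a helpful AI assistant.\n\nUSER: Write a short story about a lighthouse keeper.\nASSISTANT:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```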
Simple `ctransformers` example code (install the package first, e.g. `pip install ctransformers>=0.2.24`, or `CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers` for Metal acceleration on macOS):

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF", model_file="WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```
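The call accepts the library's usual generation arguments, and `stream=True` yields text chunks as they are produced. A short usage sketch continuing from the `llm` object above:

```python
# Stream the reply piece by piece instead of waiting for the full string.
for chunk in llm("AI is going to", max_new_tokens=128, temperature=0.7, stream=True):
    print(chunk, end="", flush=True)
```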