Update README.md
README.md CHANGED
@@ -265,7 +265,7 @@ pip install -U "huggingface_hub[cli]"
 Then, you can target the specific file you want:
 
 ```
-huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q4_K_M.gguf"
+huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
 ```
 
 If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
@@ -274,6 +274,8 @@ If the model is bigger than 50GB, it will have been split into multiple files. I
 huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*" --local-dir Meta-Llama-3-70B-Instruct-Q8_0 --local-dir-use-symlinks False
 ```
 
+You can either specify a new local-dir (Meta-Llama-3-70B-Instruct-Q8_0) or download them all in place (./)
+
 ## Which file should I choose?
 
 A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
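For anyone who would rather script these downloads than call the CLI, the `huggingface_hub` package installed above also exposes the same operations in Python. The sketch below is an illustration added here, not part of the original README: `hf_hub_download` mirrors the single-file command, and `snapshot_download` with `allow_patterns` mirrors the split-file command, using the same repo and file names as above.

```
# Python equivalents of the two huggingface-cli commands above,
# using the huggingface_hub library (pip install -U "huggingface_hub[cli]").
from huggingface_hub import hf_hub_download, snapshot_download

REPO_ID = "bartowski/Meta-Llama-3-70B-Instruct-GGUF"

# Single file: like --include "Meta-Llama-3-70B-Instruct-Q4_K_M.gguf" --local-dir ./
path = hf_hub_download(
    repo_id=REPO_ID,
    filename="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",
    local_dir=".",
)
print("Downloaded:", path)

# Split quant (>50GB): like --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*"
# with --local-dir Meta-Llama-3-70B-Instruct-Q8_0
folder = snapshot_download(
    repo_id=REPO_ID,
    allow_patterns=["Meta-Llama-3-70B-Instruct-Q8_0.gguf/*"],
    local_dir="Meta-Llama-3-70B-Instruct-Q8_0",
)
print("Downloaded into:", folder)
```

As with the CLI, passing `local_dir="."` drops the files in place, while a named `local_dir` keeps a split quant together in its own folder.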