Update README.md

README.md CHANGED
@@ -254,6 +254,26 @@ All quants made using imatrix option with dataset provided by Kalomaze [here](ht
| [Meta-Llama-3-70B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
| [Meta-Llama-3-70B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. |

257 |
## Which file should I choose?
|
258 |
|
259 |
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
|
|
|
254 |
| [Meta-Llama-3-70B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
|
255 |
| [Meta-Llama-3-70B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. |
|
256 |
|
257 |
## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```
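
To verify the install, a minimal sketch in Python (assuming only the `huggingface_hub` package installed above) prints the library version:

```
# Minimal sanity check: the package installed above should import cleanly
# and report its version.
import huggingface_hub

print(huggingface_hub.__version__)
```
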
Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q4_K_M.gguf" --local-dir Meta-Llama-3-70B-Instruct-Q4_K_M.gguf --local-dir-use-symlinks False
```
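
If you prefer to script downloads rather than shell out to the CLI, here is a minimal sketch of the same single-file download with the `huggingface_hub` Python API; the `repo_id`, `filename`, and `local_dir` values simply mirror the command above:

```
# Minimal sketch: download one quant file via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3-70B-Instruct-GGUF",
    filename="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",
    local_dir="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",  # mirrors --local-dir above
)
print(path)  # local path of the downloaded .gguf
```
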
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*" --local-dir Meta-Llama-3-70B-Instruct-Q8_0.gguf --local-dir-use-symlinks False
```
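
The split case has a Python equivalent as well; a minimal sketch using `snapshot_download`, whose `allow_patterns` filter plays the role of `--include` above:

```
# Minimal sketch: download every part of a split quant in one call.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Meta-Llama-3-70B-Instruct-GGUF",
    allow_patterns=["Meta-Llama-3-70B-Instruct-Q8_0.gguf/*"],  # same glob as --include
    local_dir="Meta-Llama-3-70B-Instruct-Q8_0.gguf",           # mirrors --local-dir
)
```

llama.cpp should then be able to load the split model by pointing it at the first shard in that folder.
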
## Which file should I choose?

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)