---
tags:
- merge
---
# Miquella 120B GGUF
GGUF quantized weights for [miquella-120b](https://huggingface.co/alpindale/miquella-120b). Contains *all* quants.
I used importance matrices generated from the Q8_0 quant of the model. The calibration dataset for them was random junk, for optimal quality.
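For reference, a minimal sketch of how an imatrix-assisted quant can be produced with llama.cpp's tools. This is an illustration, not the exact commands used for this release: binary names and flags differ between llama.cpp versions (newer builds prefix them with `llama-`), and `calibration.txt` plus the f16 source GGUF below are placeholders.
```sh
# Generate an importance matrix from the Q8_0 quant using a calibration text file.
./imatrix -m miquella-120b.Q8_0.gguf -f calibration.txt -o miquella-120b.imatrix.dat

# Quantize the full-precision GGUF to the target quant, guided by the importance matrix.
./quantize --imatrix miquella-120b.imatrix.dat miquella-120b.f16.gguf miquella-120b.Q3_K_L.gguf Q3_K_L
```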
Due to Hugging Face's file size limit, the larger files were split into multiple chunks. Instructions for joining them are below.
## Linux
Example uses Q3_K_L. Replace the names appropriately for your quant of choice.
```sh
cat miquella-120b.Q3_K_L.gguf_part_* > miquella-120b.Q3_K_L.gguf && rm miquella-120b.Q3_K_L.gguf_part_*
```
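Once joined, the file loads like any other GGUF. As a quick smoke test with llama.cpp (the binary may be called `llama-cli` instead of `main` in newer builds; adjust the prompt and token count as you like):
```sh
# Load the joined quant and generate a few tokens to confirm the file is intact.
./main -m miquella-120b.Q3_K_L.gguf -p "Hello" -n 32
```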
## Windows
Example uses Q3_K_L, which is split into two parts. Replace the filenames for your quant of choice, and append any additional parts with extra `+` operators.
```sh
COPY /B miquella-120b.Q3_K_L.gguf_part_aa + miquella-120b.Q3_K_L.gguf_part_ab miquella-120b.Q3_K_L.gguf
```
Then delete the split parts.
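For example, from the same Command Prompt session (assuming the two-part Q3_K_L split shown above):
```sh
DEL miquella-120b.Q3_K_L.gguf_part_aa miquella-120b.Q3_K_L.gguf_part_ab
```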