---
license: cc-by-nc-4.0
---
# Command R+ GGUF
## Description
This repository contains GGUF weights for Command R+ that can be used with `llama.cpp`.
## Concatenating Weights
For every variant except Q2_K, the weights are split into multiple files because they exceed Hugging Face's 50 GB single-file size limit, so you must concatenate them before use. On Linux, you can do this with the `cat` command (example for the Q3_K_L variant):
```bash
cat command-r-plus-Q3_K_L.gguf.* > command-r-plus-Q3_K_L.gguf
```
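Once concatenated, the resulting file can be loaded by `llama.cpp` like any other GGUF model. A minimal sketch, assuming a recent `llama.cpp` build where the interactive binary is named `llama-cli` (older builds call it `main`); the prompt and flags below are illustrative only:
```bash
# Run the concatenated model with llama.cpp.
# Binary name, path, and flags may differ depending on your build.
./llama-cli \
  -m command-r-plus-Q3_K_L.gguf \
  -p "Hello, how are you?" \
  -n 128
```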