---
license: apache-2.0
---
Unofficial GGUF quantizations of Grok-1. Works with llama.cpp as of [PR #6204: Add grok-1 support](https://github.com/ggerganov/llama.cpp/pull/6204).
The splits now use the multi-shard loading from [PR #6187: llama_model_loader: support multiple split/shard GGUFs](https://github.com/ggerganov/llama.cpp/pull/6187), so merging the files with `gguf-split` is no longer needed.
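Since the shards load automatically, inference can be started by pointing llama.cpp at the first split file only. A minimal sketch of such an invocation follows; the exact split file name is an assumption here, so substitute the actual first shard from this repository:

```shell
# Pass only the first split to llama.cpp; the remaining shards in the
# same directory are discovered and loaded automatically (PR #6187).
# NOTE: the file name below is hypothetical -- use the real first-split
# file name from this repo.
./main -m grok-1-Q2_K-00001-of-00009.gguf -p "Hello" -n 64
```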
Q2_K, Q4_K, and Q6_K are uploaded; more will follow. All current quants are made without an importance matrix.