---
license: apache-2.0
---

Unofficial GGUF quantizations of Grok-1. They work with llama.cpp as of PR #6204 ("Add grok-1 support").

The splits use the multi-shard loading introduced in the PR "llama_model_loader: support multiple split/shard GGUFs", so merging the files with gguf-split is no longer needed.
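For reference, here is a minimal sketch of loading one of these split quants through the llama.cpp C API. The filename and shard count are placeholders, not the actual file names in this repo: pass only the first shard, and the loader resolves the remaining splits on its own.

```c
// Minimal sketch, assuming a current llama.cpp build with multi-split GGUF support.
// The model path below is hypothetical; point it at the first shard of your download
// (the file ending in "-00001-of-000NN.gguf") and the loader finds the other shards.
#include <stdio.h>
#include "llama.h"

int main(void) {
    llama_backend_init();

    struct llama_model_params mparams = llama_model_default_params();
    // mparams.n_gpu_layers = ...; // optionally offload layers to the GPU

    // Only the first split is passed; no gguf-split merge step is required.
    struct llama_model * model = llama_load_model_from_file(
        "grok-1-Q2_K-00001-of-00009.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // ... create a context and run inference here ...

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```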

Q2_K, Q4_K, and Q6_K quantizations are uploaded; more will follow. All current quants were made without an importance matrix.