---
language:
- en
- et
---

# 4-bit Llammas in GGUF

This is a 4-bit quantized version of [TartuNLP/Llammas](https://huggingface.co/tartuNLP/Llammas), a Llama-2-based model, in the GGUF file format.