Zephyr 7B Beta - GGUF
- Model creator: Hugging Face H4
- Original model: Zephyr 7B Beta
Description
This repo contains GGUF format model files for Hugging Face H4's Zephyr 7B Beta.
GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which llama.cpp no longer supports.
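One practical consequence of the format change is that GGUF files are easy to identify: they begin with the four-byte magic `GGUF`. A minimal sketch of a validity check (the helper name `is_gguf` is our own, not part of llama.cpp):

```python
def is_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
    # GGUF files begin with the ASCII bytes "GGUF"; old GGML files do not.
    return magic == b"GGUF"
```

This can be used to catch accidentally downloaded legacy GGML files before handing them to a loader.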
Prompt template: Zephyr
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
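The template above can be assembled programmatically. A small sketch (the helper name `build_zephyr_prompt` is our own) that fills the `{prompt}` slot and an optional system message:

```python
def build_zephyr_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble a prompt in the Zephyr chat format shown above."""
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )
```

The string ends at the `<|assistant|>` tag so the model continues with its reply.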
Explanation of quantisation methods
This GGUF model was pruned to 50% sparsity using the SparseGPT method before quantisation.
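SparseGPT decides which weights to drop by solving a layer-wise reconstruction problem with second-order (Hessian) information. As a loose illustration of what 50% unstructured sparsity means, here is a much simpler magnitude-pruning sketch (not the SparseGPT algorithm itself; the function name is our own):

```python
import numpy as np

def prune_magnitude(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` is reached.

    Plain magnitude pruning, for illustration only: SparseGPT instead picks
    weights to minimise layer-wise reconstruction error.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

At 50% sparsity, half of each weight matrix is zero, which is what the `pruned50` suffix in the filename refers to.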
from llama_cpp import Llama

# Load the pruned, Q8_0-quantised GGUF model
llm = Llama(model_path="zephyr-7b-beta-pruned50-Q8_0.gguf")

# Generate a completion using the Zephyr prompt template
output = llm("""<|system|>
You are a friendly chatbot who always responds in the style of a pirate.</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>""")
print(output)
Model tree for Semantically-AI/zephyr-7b-beta-pruned50-GGUF
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned: HuggingFaceH4/zephyr-7b-beta