Nous-Hermes-2-Mistral-7B-DPO-GGUF

Available Quants

  • IQ3_S
  • Q2_K
  • Q3_K_L
  • Q3_K_M
  • Q3_K_S
  • Q4_0
  • Q4_K_M
  • Q4_K_S
  • Q5_0
  • Q5_K_M
  • Q5_K_S
  • Q6_K
  • Q8_0
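Any of the quant levels above can be pulled individually from the Hub. A minimal sketch using `huggingface_hub` follows; note that the exact `.gguf` filenames inside the repo are an assumption here (QuantFactory repos typically name files `<model-name>.<QUANT>.gguf`), so check the repository's file list before downloading.

```python
# Hypothetical sketch: download one quant file from the Hub.
# The filename pattern below is an assumption; verify it against
# the repo's actual file listing.

REPO_ID = "QuantFactory/Nous-Hermes-2-Mistral-7B-DPO-GGUF"

def gguf_filename(quant: str) -> str:
    """Build the assumed filename for a given quant level (e.g. 'Q4_K_M')."""
    return f"Nous-Hermes-2-Mistral-7B-DPO.{quant}.gguf"

if __name__ == "__main__":
    # Requires: pip install huggingface_hub
    from huggingface_hub import hf_hub_download
    path = hf_hub_download(repo_id=REPO_ID, filename=gguf_filename("Q4_K_M"))
    print(path)
```

Lower quant levels (Q2_K, IQ3_S) trade accuracy for a smaller download and memory footprint; Q8_0 is closest to the original weights.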
Downloads last month: 436
Format: GGUF
Model size: 7.24B params
Architecture: llama

Inference Examples
The Inference API (serverless) has been turned off for this model; run it locally instead.

Model tree for QuantFactory/Nous-Hermes-2-Mistral-7B-DPO-GGUF

This model is one of 8 quantized models in the tree.