
Beepo-22B-mlx-4bit

This is a 4-bit MLX quantization of https://huggingface.co/concedo/Beepo-22B, which was itself finetuned on top of https://huggingface.co/mistralai/Mistral-Small-Instruct-2409.

You can run this model locally with LM Studio.
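If you prefer the command line, MLX quantizations like this one can also be run with the mlx-lm package. A minimal sketch, assuming the repo id from this card's model tree (flags other than `--model` and `--prompt` are optional):

```shell
# Install the MLX language-model runner (Apple Silicon only)
pip install mlx-lm

# Generate from the quantized model with an Alpaca-style prompt
python -m mlx_lm.generate \
  --model NimbleAINinja/Beepo-22B-mlx-4bit \
  --prompt $'### Instruction:\nWrite a haiku about the sea.\n\n### Response:\n' \
  --max-tokens 128
```

The first run downloads the weights from the Hub; subsequent runs use the local cache.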


Key Features:

  • Retains Intelligence - The learning rate was kept low and the dataset heavily pruned to avoid losing too much of the original model's intelligence.
  • Instruct prompt format supports Alpaca - Honestly, I don't know why more models don't use it. If you are an Alpaca format lover like me, this should help. The original Mistral instruct format can still be used, but is not recommended.
  • Instruct Decensoring Applied - You should not need a jailbreak for a model to obey the user. The model should always do what you tell it to. No need for weird "Sure, I will" or kitten-murdering-threat tricks. No abliteration was done, only finetuning. This model is not evil. It does not judge or moralize. Like a good tool, it simply obeys.

Prompt template: Alpaca

```
### Instruction:
{prompt}

### Response:
```

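The template above can also be applied programmatically. A minimal sketch of a helper that wraps a user instruction in the exact format shown (the function name is my own, not part of any API):

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template used by this model."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Example: build a prompt string ready to send to the model
print(alpaca_prompt("List three uses for a paperclip."))
```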
Please report any feedback or issues you run into.
