QuantFactory/Llama-3.2-3B-Instruct-abliterated-GGUF

This is a quantized version of huihui-ai/Llama-3.2-3B-Instruct-abliterated, created using llama.cpp.
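
If you want to try one of the GGUF files locally, a minimal sketch using llama-cpp-python is shown below. The filename is an assumption; use whichever quantization file from this repository suits your hardware.

```python
from llama_cpp import Llama

# NOTE: the filename below is an assumption -- check this repo's file list for the
# exact GGUF name and quantization level you want to use.
llm = Llama(
    model_path="Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF file is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```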

Original Model Card

🦙 Llama-3.2-3B-Instruct-abliterated

This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article to learn more about it).
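
To give a rough sense of how abliteration works, here is a minimal, illustrative sketch (not the original code): a "refusal direction" is estimated from the model's activations and projected out of weights that write into the residual stream. The function name, shapes, and the way refusal_dir is obtained are illustrative assumptions; see the article above for the actual procedure.

```python
import torch

def remove_refusal_direction(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Orthogonalize a weight matrix against a refusal direction (illustrative only).

    W: (d_model, d_in) weight that writes into the residual stream.
    refusal_dir: (d_model,) direction, e.g. the difference of mean hidden states on
    "harmful" vs. "harmless" instructions at a chosen layer (assumption).
    """
    r = refusal_dir / refusal_dir.norm()
    # Subtract the component of W along r: W' = W - r (r^T W)
    return W - torch.outer(r, r @ W)
```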

Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.

Evaluations

The following results were re-evaluated; each value is the average score for that benchmark.

| Benchmark  | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated |
|------------|----------------------:|----------------------------------:|
| IF_Eval    | 76.55                 | 76.76                             |
| MMLU Pro   | 27.88                 | 28.00                             |
| TruthfulQA | 50.55                 | 50.73                             |
| BBH        | 41.81                 | 41.86                             |
| GPQA       | 28.39                 | 28.41                             |

The script used for evaluation can be found in this repository under /eval.sh.
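
All of the benchmarks above are available in EleutherAI's lm-evaluation-harness; the snippet below is a hypothetical reconstruction of such a run, not the repository's actual eval.sh, and the task names and settings are assumptions.

```python
import lm_eval

# Hypothetical reproduction of the table above; the actual eval.sh may use different
# task names, few-shot settings, or harness versions.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=huihui-ai/Llama-3.2-3B-Instruct-abliterated,dtype=bfloat16",
    tasks=["ifeval", "mmlu_pro", "truthfulqa_mc2", "bbh", "gpqa_main_zeroshot"],
    batch_size=8,
)
print(results["results"])
```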

Format: GGUF
Model size: 3.21B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
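
To fetch a single quantization file rather than the whole repository, one option is huggingface_hub; the filename below is an assumption, so check the repository's file list for the exact name.

```python
from huggingface_hub import hf_hub_download

# Assumed filename -- browse the repo files for the quantization level you actually want.
gguf_path = hf_hub_download(
    repo_id="QuantFactory/Llama-3.2-3B-Instruct-abliterated-GGUF",
    filename="Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf",
)
print(gguf_path)
```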
