---
base_model:
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
- grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
- agentlans/Llama3-vodka
- NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: transformers
tags:
- llama
- llama-3
- uncensored
- mergekit
- merge

---

# Llama3.1-vodka

- Input: text only
- Output: text only

This model is like vodka. It aims to be pure, potent, and versatile.

- Pure: shouldn't greatly affect Llama 3.1 Instruct's capabilities and writing style except for uncensoring
- Potent: it's a merge of abliterated models - it should stay uncensored after merging and finetuning
- Versatile: basically Llama 3.1 Instruct except uncensored - drink it straight, mix it, finetune it, and make cocktails

Please enjoy responsibly.

Note that this model may still censor at times. If that's undesirable, tell the AI to be more uncensored and uninhibited.

## Safety and risks

- Excessive consumption is bad for your health
- The model can produce harmful, offensive, or inappropriate content if prompted to do so
- The model has weakened safeguards and lacks moral and ethical judgement
- The user takes responsibility for all outputs produced by the model
- It is recommended to use the model in controlled environments where its risks can be safely managed

## Models used

- [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated)
- `Llama-3.1-8B-Instruct-abliterated_via_adapter2` (Llama 3.1 adapted version of [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter))
- `Llama3.1-vodka-ported2` (Llama 3.1 adapted version of [agentlans/Llama3-vodka](https://huggingface.co/agentlans/Llama3-vodka))

The above models were merged onto [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct) using the "task arithmetic" merge method. The model merges and LoRA extractions were done using [mergekit](https://github.com/arcee-ai/mergekit).
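To illustrate what the task arithmetic merge method does: each fine-tuned model contributes a "task vector" (its delta from the base weights), and the merged model is the base plus a weighted sum of those deltas. Below is a minimal sketch on toy NumPy tensors; the values and weights are hypothetical, and mergekit applies the same operation per parameter tensor with the weights set in its YAML config.

```python
import numpy as np

def task_arithmetic_merge(base, models, weights):
    """Return base + sum_i w_i * (model_i - base)."""
    merged = base.copy()
    for model, w in zip(models, weights):
        merged += w * (model - base)  # add this model's weighted "task vector"
    return merged

# Toy stand-ins for one weight tensor of the base and two abliterated variants
base = np.array([1.0, 2.0, 3.0])
model_a = np.array([1.5, 2.0, 3.0])
model_b = np.array([1.0, 2.5, 3.0])

merged = task_arithmetic_merge(base, [model_a, model_b], [1.0, 1.0])
print(merged)  # [1.5 2.5 3. ]
```

With weights of 1.0, each model's full delta is kept; lowering a weight scales down that model's influence on the merge.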