---
base_model:
- mistralai/Mistral-Small-Instruct-2409
- TheDrummer/Cydonia-22B-v1.1
library_name: transformers
tags:
- mergekit
- merge
license: other
---
![The Drummer turns into a Joshi Youchien](https://huggingface.co/knifeayumu/Lite-Cydonia-22B-v1.1-50-50/resolve/main/Lite-Cydonia-22B-v1.1.png)
"A balancing act between smart do-gooders and creative evil-doers."

# The Drummer turns into a Joshi Youchien

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

GGUF quants: [knifeayumu/Lite-Cydonia-22B-v1.1-50-50-GGUF](https://huggingface.co/knifeayumu/Lite-Cydonia-22B-v1.1-50-50-GGUF)

## Inspiration

I thought both the [BeaverAI/Cydonia-22B-v1f-GGUF](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1) and [BeaverAI/Cydonia-22B-v1e-GGUF](https://huggingface.co/BeaverAI/Cydonia-22B-v1e-GGUF) versions were a bit too evil. Their sense of morality was too skewed, and generation was quite deterministic (swipes didn't offer much variety) compared to the base model. Then an idea popped into my mind: why not merge it back with the base model? That way it could regain a sense of "good", at least a little, and maybe the deterministic generations would improve too.

Quick testing shows... it works? Zero-shot evil Q&A no longer works, but with a bit of persuasion, it still answers. Compared to [knifeayumu/Lite-Cydonia-22B-v1.1-75-25](https://huggingface.co/knifeayumu/Lite-Cydonia-22B-v1.1-75-25), this merge is more censored but also smarter.

Credit to [TheDrummer](https://huggingface.co/TheDrummer) and [BeaverAI](https://huggingface.co/BeaverAI), who make such finetunes. "Lightly decensored" is a heavy understatement in this case.

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) as a base.
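In task arithmetic, each finetune contributes a "task vector" (finetune weights minus base weights), and the merge adds a weighted sum of those vectors back onto the base. Here is a toy numpy sketch of the idea (not mergekit's actual tensor code; all names and values are illustrative):

```python
import numpy as np

def task_arithmetic_merge(base, finetunes, weights):
    """Merge by adding weighted task vectors (finetune - base) onto the base."""
    merged = base.copy()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft - base)  # task vector scaled by its weight
    return merged

# Pretend these are flattened model weights
base = np.array([1.0, 2.0, 3.0])
cydonia = np.array([2.0, 1.0, 4.0])

# weight 0.5 moves halfway along the Cydonia task vector
merged = task_arithmetic_merge(base, [cydonia], [0.5])
print(merged)  # [1.5 1.5 3.5]
```

With a single finetune at weight 0.5 the result is simply the midpoint between base and finetune, which is the intuition behind the "50-50" in this model's name.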

### Models Merged

The following models were included in the merge:
* [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-Small-Instruct-2409
    parameters:
      weight: 0.5
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.5
merge_method: task_arithmetic
base_model: mistralai/Mistral-Small-Instruct-2409
dtype: float16
```
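Assuming mergekit is installed, a config like the one above (saved as `config.yaml`) can be run with mergekit's `mergekit-yaml` entry point; the output path here is illustrative:

```shell
pip install mergekit
mergekit-yaml config.yaml ./Lite-Cydonia-22B-v1.1-50-50 --cuda
```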