# L3-Umbral-Mind-RP-v3-8B
This is a merge of pre-trained language models created using mergekit.
## Merge Details
The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as but not limited to:
- Mental illness
- Self-harm
- Trauma
- Suicide

I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to failspy/Llama-3-8B-Instruct-MopeyMule this problem has been lessened considerably.

If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this bot is for you.
Compared to v1, v3 has better intelligence, fewer GPTisms, and much more human-like responses. Merging MopeyMule with RP LoRAs also seems to have increased its effectiveness in changing the tone of RP LLMs, so feel free to use the MopeyMule merges I made for your own merges.
### Quants
- L3-Umbral-Mind-RP-v3-8B-i1-GGUF by mradermacher
- L3-Umbral-Mind-RP-v3-8B-8bpw-h8-exl2 by riveRiPH
## Merge Method
This model was merged using several Task Arithmetic merges and then tied together with a Model Stock merge.
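For intuition, the Task Arithmetic step can be sketched as below. This is a minimal illustration, not mergekit's actual implementation: model weights are represented here as flat lists of floats, and the merged result is the base model plus a weighted sum of each model's task vector (its delta from the base). With `normalize: False`, as in the configs further down, the weights are applied as given rather than rescaled to sum to 1.

```python
def task_arithmetic(base, models, weights):
    """merged = base + sum_i w_i * (model_i - base).

    With normalize: False the weights are used as given
    rather than rescaled to sum to 1.
    """
    merged = list(base)
    for model, w in zip(models, weights):
        for i, (b, p) in enumerate(zip(base, model)):
            merged[i] += w * (p - b)
    return merged


# Toy example: two "models" with two parameters each.
base = [1.0, 1.0]
models = [[2.0, 1.0], [1.0, 3.0]]
weights = [0.5, 0.25]
print(task_arithmetic(base, models, weights))  # [1.5, 1.5]
```

In the real merge each model is a full set of Llama-3 tensors, but the arithmetic is the same, applied element-wise.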
### Models Merged
The following models were included in the merge:
- Casual-Autopsy/Umbral-v3-1 + ResplendentAI/Theory_of_Mind_Llama3
- Casual-Autopsy/Umbral-v3-2 + ResplendentAI/Smarts_Llama3
- Casual-Autopsy/Umbral-v3-3 + ResplendentAI/RP_Format_QuoteAsterisk_Llama3
### Secret Sauce
The following YAML configurations were used to produce this model:
#### Umbral-v3-1

```yaml
slices:
- sources:
  - model: Sao10K/L3-8B-Stheno-v3.2
    layer_range: [0, 32]
    parameters:
      weight: 0.65
  - model: Casual-Autopsy/SOVL-MopeyMule-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.25
  - model: Casual-Autopsy/MopeyMule-Blackroot-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.1
merge_method: task_arithmetic
base_model: Sao10K/L3-8B-Stheno-v3.2
normalize: False
dtype: bfloat16
```
#### Umbral-v3-2

```yaml
slices:
- sources:
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    layer_range: [0, 32]
    parameters:
      weight: 0.75
  - model: Casual-Autopsy/SOVL-MopeyMule-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.15
  - model: Casual-Autopsy/MopeyMule-Blackroot-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.1
merge_method: task_arithmetic
base_model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
normalize: False
dtype: bfloat16
```
#### Umbral-v3-3

```yaml
slices:
- sources:
  - model: grimjim/Llama-3-Oasis-v1-OAS-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.55
  - model: Casual-Autopsy/SOVL-MopeyMule-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.35
  - model: Casual-Autopsy/MopeyMule-Blackroot-8B
    layer_range: [0, 32]
    parameters:
      weight: 0.1
merge_method: task_arithmetic
base_model: grimjim/Llama-3-Oasis-v1-OAS-8B
normalize: False
dtype: bfloat16
```
#### Umbral-Mind-RP-8B

```yaml
models:
  - model: Casual-Autopsy/Umbral-v3-1+ResplendentAI/Theory_of_Mind_Llama3
  - model: Casual-Autopsy/Umbral-v3-2+ResplendentAI/Smarts_Llama3
  - model: Casual-Autopsy/Umbral-v3-3+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: Casual-Autopsy/Umbral-v3-1
dtype: bfloat16
```
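The final `model_stock` step above can be sketched as follows. This is a simplified reading of the Model Stock method, not mergekit's implementation: it interpolates between the base model and the average of the fine-tuned models, with a ratio derived from how well-aligned their task vectors are (the exact ratio formula here follows my understanding of the Model Stock paper, and real merges operate per-layer on tensors rather than on flat lists).

```python
import math


def model_stock(base, models):
    """Interpolate between base and the average of the fine-tuned
    models; the ratio t grows with how well-aligned the models'
    task vectors (their deltas from base) are.
    """
    k = len(models)
    deltas = [[p - b for p, b in zip(m, base)] for m in models]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Mean pairwise cosine similarity between task vectors.
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos_mean = sum(cosine(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)

    # Interpolation ratio: t = k*cos / (1 + (k-1)*cos)
    # (my reading of the Model Stock paper's formula).
    t = k * cos_mean / (1 + (k - 1) * cos_mean)

    avg = [sum(col) / k for col in zip(*models)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

When the fine-tuned models agree (task vectors nearly parallel), t approaches 1 and the result is close to their plain average; when they disagree, the merge pulls back toward the base model.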