Quantization made by Richard Erkhov.
Xwin-MLewd-13B-V0.2 - GGUF
- Model creator: https://huggingface.co/Undi95/
- Original model: https://huggingface.co/Undi95/Xwin-MLewd-13B-V0.2/
Name | Quant method | Size |
---|---|---|
Xwin-MLewd-13B-V0.2.Q2_K.gguf | Q2_K | 4.52GB |
Xwin-MLewd-13B-V0.2.IQ3_XS.gguf | IQ3_XS | 4.99GB |
Xwin-MLewd-13B-V0.2.IQ3_S.gguf | IQ3_S | 5.27GB |
Xwin-MLewd-13B-V0.2.Q3_K_S.gguf | Q3_K_S | 5.27GB |
Xwin-MLewd-13B-V0.2.IQ3_M.gguf | IQ3_M | 5.57GB |
Xwin-MLewd-13B-V0.2.Q3_K.gguf | Q3_K | 5.9GB |
Xwin-MLewd-13B-V0.2.Q3_K_M.gguf | Q3_K_M | 5.9GB |
Xwin-MLewd-13B-V0.2.Q3_K_L.gguf | Q3_K_L | 6.45GB |
Xwin-MLewd-13B-V0.2.IQ4_XS.gguf | IQ4_XS | 6.54GB |
Xwin-MLewd-13B-V0.2.Q4_0.gguf | Q4_0 | 6.86GB |
Xwin-MLewd-13B-V0.2.IQ4_NL.gguf | IQ4_NL | 6.9GB |
Xwin-MLewd-13B-V0.2.Q4_K_S.gguf | Q4_K_S | 6.91GB |
Xwin-MLewd-13B-V0.2.Q4_K.gguf | Q4_K | 7.33GB |
Xwin-MLewd-13B-V0.2.Q4_K_M.gguf | Q4_K_M | 7.33GB |
Xwin-MLewd-13B-V0.2.Q4_1.gguf | Q4_1 | 7.61GB |
Xwin-MLewd-13B-V0.2.Q5_0.gguf | Q5_0 | 8.36GB |
Xwin-MLewd-13B-V0.2.Q5_K_S.gguf | Q5_K_S | 8.36GB |
Xwin-MLewd-13B-V0.2.Q5_K.gguf | Q5_K | 8.6GB |
Xwin-MLewd-13B-V0.2.Q5_K_M.gguf | Q5_K_M | 8.6GB |
Xwin-MLewd-13B-V0.2.Q5_1.gguf | Q5_1 | 9.1GB |
Xwin-MLewd-13B-V0.2.Q6_K.gguf | Q6_K | 9.95GB |
Xwin-MLewd-13B-V0.2.Q8_0.gguf | Q8_0 | 12.88GB |
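Smaller quants (Q2_K, Q3_K_*) trade quality for memory, while Q5_K_M and Q6_K stay closest to the fp16 weights. As a rough sketch of how one of these GGUF files could be fetched and loaded, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (`REPO_ID` below is a placeholder, not the actual repository id):

```python
# Sketch: download one GGUF quant from the Hub and run it with llama-cpp-python.
# REPO_ID is a placeholder -- substitute the actual repository id of this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

REPO_ID = "user/Xwin-MLewd-13B-V0.2-gguf"                 # placeholder
FILENAME = "Xwin-MLewd-13B-V0.2.Q4_K_M.gguf"              # ~7.33GB, common quality/size balance

model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

llm = Llama(model_path=model_path, n_ctx=4096)            # adjust context to available RAM
out = llm(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSay hello.\n\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```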
Original model description:
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
THIS MODEL IS MADE FOR LEWD
SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED
This is MLewd merged with Xwin-LM/Xwin-LM-13B-V0.2.
Description
This repo contains fp16 files of Xwin-MLewd-13B-V0.2, a very hot and lewd model based on Xwin 0.2 13B.
Models and loras used
- Undi95/ReMM-S-Light (base/private)
- Undi95/CreativeEngine
- Brouz/Slerpeno
- The-Face-Of-Goonery/Huginn-v3-13b
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/StoryTelling
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
Prompt template: Alpaca
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
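A minimal sketch of filling this template in code (the `{prompt}` slot is the only variable part; the helper name is illustrative):

```python
# Sketch: build an Alpaca-style prompt string for this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Insert the user's instruction into the Alpaca template."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a short greeting."))
```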
The secret sauce
slices:
  - sources:
      - model: Xwin-LM/Xwin-LM-13B-V0.2
        layer_range: [0, 40]
      - model: Undi95/MLewd-v2.4-13B
        layer_range: [0, 40]
merge_method: slerp
base_model: Xwin-LM/Xwin-LM-13B-V0.2
parameters:
  t:
    - filter: lm_head
      value: [0.55]
    - filter: embed_tokens
      value: [0.7]
    - filter: self_attn
      value: [0.65, 0.35]
    - filter: mlp
      value: [0.35, 0.65]
    - filter: layernorm
      value: [0.4, 0.6]
    - filter: modelnorm
      value: [0.6]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
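The `t` values control how far each tensor group is interpolated from the base model (Xwin) toward MLewd, with 0.5 used as the fallback for unfiltered tensors. As an illustration only (this is not mergekit's actual implementation), spherical linear interpolation between two weight tensors looks roughly like:

```python
# Sketch: spherical linear interpolation (slerp) between two weight tensors,
# illustrating what a per-tensor t value in the config above controls.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from a (t=0) to b (t=1) along the arc between them."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# t=0.5 corresponds to the fallback value for tensors not matched by any filter.
merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.5)
```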
Special thanks to Sushi and Shena ♥
If you want to support me, you can here.