
#roleplay #sillytavern #llama3

My GGUF-IQ-Imatrix quants for Sao10K/L3-8B-Stheno-v3.2.

Sao10K with Stheno again, another banger! I recommend checking his page for feedback and support.

Quantization process:
For future reference, these quants were made after the fixes from llama.cpp PR #6920 were merged.
Imatrix data was generated from the FP16-GGUF and conversions directly from the BF16-GGUF.
This was a bit more disk and compute intensive but hopefully avoided any losses during conversion.
If you notice any issues, let me know in the discussions.
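A rough sketch of that pipeline with llama.cpp's tools (binary names, flags, and paths below are illustrative and from the llama.cpp of that era; check your checkout's README before running):

```shell
# Convert the HF model to BF16 and FP16 GGUFs (paths are placeholders).
python convert-hf-to-gguf.py ./L3-8B-Stheno-v3.2 --outtype bf16 --outfile model-bf16.gguf
python convert-hf-to-gguf.py ./L3-8B-Stheno-v3.2 --outtype f16  --outfile model-f16.gguf

# Generate imatrix data from the FP16 GGUF.
./imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize directly from the BF16 GGUF using that imatrix.
./quantize --imatrix imatrix.dat model-bf16.gguf model-Q4_K_M-imat.gguf Q4_K_M
```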

General usage:
Use the latest version of KoboldCpp.
For 8GB VRAM GPUs, I recommend the Q4_K_M-imat (4.89 BPW) quant for up to 12288 context sizes.
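As a back-of-envelope check on that recommendation, here is a minimal sketch of the VRAM math. The parameter count (8.03B) and BPW (4.89) come from this card; the KV-cache layout (32 layers, 8 KV heads, head dim 128, f16 cache) is an assumption based on the standard Llama-3-8B config:

```python
# Back-of-envelope VRAM estimate for the Q4_K_M-imat quant at 12288 context.
params = 8.03e9   # from the model card
bpw = 4.89        # bits per weight for Q4_K_M-imat, from the model card

weight_gib = params * bpw / 8 / 2**30

# KV cache per token: 2 (K and V) * n_layers * n_kv_heads * head_dim * 2 bytes (f16).
# These shapes are assumed from the usual Llama-3-8B config; verify against config.json.
n_layers, n_kv_heads, head_dim = 32, 8, 128
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2
ctx = 12288
kv_gib = ctx * kv_bytes_per_token / 2**30

print(f"weights ~{weight_gib:.2f} GiB, KV cache ~{kv_gib:.2f} GiB, "
      f"total ~{weight_gib + kv_gib:.2f} GiB")
```

That lands around 6 GiB before compute buffers, which is why it fits comfortably on 8GB cards at this context size.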

Presets:
Some compatible SillyTavern presets can be found here (Virt's Roleplay Presets).
Check discussions such as this one for other recommendations and samplers.

Personal support:
I apologize for any disruption to your experience.
I'm currently working on moving to a better internet provider.
If you want to and are able, you can spare some change over here (Ko-fi).

Author-support:
You can support the author at their own page.


Original model card information:

Support me here if you're interested:
Ko-fi: https://ko-fi.com/sao10k
Euryale v2? 😉

If not, that's fine too. Feedback would be nice.

Contact Me in Discord:
sao10k

Art by navy_(navy.blue) - Danbooru


Stheno

Stheno-v3.2-Zeta

I did test runs with multiple variations of the model, merged back to its base at various weights, along with different training runs, and this sixth iteration is the one I like most.

Changes compared to v3.1
- Included a mix of SFW and NSFW Storywriting Data, thanks to Gryphe
- Included More Instruct / Assistant-Style Data
- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it.
- Hyperparameter tinkering for training, resulting in lower loss levels.

Testing Notes - Compared to v3.1
- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
- Better at Storywriting / Narration.
- Better at Assistant-type Tasks.
- Better Multi-Turn Coherency -> Reduced Issues?
- Slightly less creative? A worthy tradeoff. Still creative.
- Better prompt / instruction adherence.


Recommended Samplers:

Temperature - 1.12-1.22
Min-P - 0.075
Top-K - 50
Repetition Penalty - 1.1
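The settings above can be sketched as a sampler chain. This is a minimal NumPy illustration, not the actual backend implementation; real backends (KoboldCpp, llama.cpp) may apply these steps in a different order:

```python
# Minimal sketch of the recommended sampler chain:
# repetition penalty -> temperature -> top-k -> min-p -> sample.
import numpy as np

def sample_logits(logits, prev_tokens=(), temperature=1.17, top_k=50,
                  min_p=0.075, rep_penalty=1.1, rng=None):
    logits = np.array(logits, dtype=np.float64)

    # Repetition penalty: push down logits of tokens already generated.
    for t in set(prev_tokens):
        logits[t] = logits[t] / rep_penalty if logits[t] > 0 else logits[t] * rep_penalty

    # Temperature scaling (1.12-1.22 recommended; midpoint used as default here).
    logits = logits / temperature

    # Softmax.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Top-K: keep only the k most likely tokens.
    if top_k and top_k < len(probs):
        cutoff = np.sort(probs)[-top_k]
        probs[probs < cutoff] = 0.0

    # Min-P: drop tokens below min_p * (probability of the most likely token).
    probs[probs < min_p * probs.max()] = 0.0

    probs /= probs.sum()
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(probs), p=probs))
```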

Stopping Strings:

\n\n{{User}} # Or Equivalent, depending on Frontend
<|eot_id|>
<|end_of_text|>
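How a frontend applies these can be sketched as follows: the `{{User}}` macro is resolved to the actual user name first, then generated text is cut at the earliest stop string. The function and names below are illustrative, not SillyTavern's actual code:

```python
# Minimal sketch of stopping-string handling: truncate generated text
# at the earliest occurrence of any stop string.
def truncate_at_stop(text, stop_strings):
    cut = len(text)
    for s in stop_strings:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

# "{{User}}" resolved to a hypothetical user name "Alice".
stops = ["\n\nAlice", "<|eot_id|>", "<|end_of_text|>"]
print(truncate_at_stop("Hello there.<|eot_id|>\n\nAlice: hi", stops))
# prints: Hello there.
```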

Prompting Template - Llama-3-Instruct

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
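The template above can be assembled programmatically. A minimal single-turn sketch (the function name is illustrative; frontends normally handle this for you when you select the Llama-3-Instruct preset):

```python
# Minimal sketch: build a single-turn Llama-3-Instruct prompt
# following the template laid out above.
def llama3_prompt(system_prompt, user_input):
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a helpful assistant.", "Hello!"))
```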

Basic Roleplay System Prompt

You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model.
Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
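The `{{char}}` and `{{user}}` placeholders are SillyTavern-style macros that the frontend substitutes before sending the prompt. A minimal sketch of that substitution (names below are hypothetical examples):

```python
# Minimal sketch of resolving {{char}}/{{user}} macros in a system prompt.
def resolve_macros(template, char_name, user_name):
    return template.replace("{{char}}", char_name).replace("{{user}}", user_name)

sys_prompt = ("Currently your role is {{char}}, which is described in detail below. "
              "As {{char}}, continue the exchange with {{user}}.")
print(resolve_macros(sys_prompt, "Stheno", "Alice"))
```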