---
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- allura-org/shortstories_synthlabels
base_model:
- Qwen/Qwen2.5-14B
---
[This is the EXL2 4bpw version of this model. For the original model, go here.](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1)
[For the 8bpw version, go here](https://huggingface.co/Statuo/EVA-UNIT-01_EVA-Qwen2.5-14B-v0.1-EXL2-8bpw)
[For the 6bpw version, go here](https://huggingface.co/Statuo/EVA-UNIT-01_EVA-Qwen2.5-14B-v0.1-EXL2-6bpw)
**EVA Qwen2.5 14B 0.1**

An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.

Version 0.1 notes:
The dataset was deduplicated and cleaned relative to version 0.0, and the training sequence length was increased. The resulting model seems more stable, and 0.0's problems with handling short inputs and min_p sampling appear to be gone.
This version seems to be more or less optimal for the current data and available compute.

Note: using a quantized KV cache with Qwen2.5 is not recommended and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough that using f16 for it shouldn't be problematic.
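
In practice, if you run this EXL2 quant with the exllamav2 library, this means keeping the default FP16 `ExLlamaV2Cache` rather than a quantized cache class such as `ExLlamaV2Cache_Q4`. A minimal loading sketch, assuming a local copy of the quant (the model directory path is a placeholder):

```python
# Minimal sketch: load this EXL2 quant with an FP16 KV cache via exllamav2.
# The model directory path below is a placeholder; point it at your download.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer

config = ExLlamaV2Config("/path/to/EVA-Qwen2.5-14B-v0.1-EXL2-4bpw")
model = ExLlamaV2(config)

# ExLlamaV2Cache keeps keys/values in FP16, as recommended above.
# Avoid quantized caches (e.g. ExLlamaV2Cache_Q4) with Qwen2.5.
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
```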

Prompt format is ChatML.
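
A typical ChatML conversation looks like this (the system prompt and user message are placeholders):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```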


Recommended sampler values:

Recommended SillyTavern presets (via CalamitousFelicitousness):

- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)


Training data:

Training time and hardware:


The model was trained by Kearm and Auri.

Special thanks: