
I have no idea what I’m doing… if this causes the apocalypse someone please let me know.

EVA-Qwen2.5-14B-v0.0 4.0bpw h8 EXL2

Includes measurement.json file for further quantization

Salesforce/xLAM-8x22b-r is on hold for now, probably early next year, need to save some money…

Original Model: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0

Original Model Card

EVA Qwen2.5 14B

An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.

Prompt format is ChatML.
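ChatML wraps each turn in `<|im_start|>{role}` / `<|im_end|>` markers. A minimal sketch of building such a prompt (the helper function and example messages here are illustrative, not part of the model's tooling):

```python
def chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Open the assistant turn so the model continues from here.
    return out + "<|im_start|>assistant\n"

prompt = chatml([
    {"role": "system", "content": "You are a creative roleplay assistant."},
    {"role": "user", "content": "Describe a rainy harbor town."},
])
print(prompt)
```

Most frontends (e.g. SillyTavern) apply this template automatically when ChatML is selected, so manual formatting is only needed for raw API use.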


Recommended sampler values:

  • Temperature: 0.7
  • Top-P: 0.8
  • Repetition Penalty: 1.03

The model appears to prefer lower temperatures (0.8 and below) and absolutely hates the Min-P sampler.

Recommended SillyTavern presets (via CalamitousFelicitousness):


Training data:

  • Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's card for details.
  • Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
  • A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.
  • A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe.
  • A cleaned subset (~3k rows) of shortstories_synthlabels by Auri.
  • Synthstruct and SynthRP datasets by Epiculous.

Hardware used:

  • 4xA6000 for 14 hours.

Model was trained by Kearm and Auri.

Special thanks:

  • to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data
  • to Alpindale for helping with FFT config for Qwen2.5
  • and to InfermaticAI's community for their continued support for our endeavors
