do your stuff
I really like your models, such as una-cybertron-7b-v3-oma.Q8_0.
I was wondering if you could do your stuff on Eric111/openchat-3.5-0106-128k-DPO-GGUF or https://huggingface.co/Eric111/openchat-3.5-0106-128k-DPO
This model is very verbose and it's at the top of my list, but your models seem to have more clarity and a more structured approach while maintaining creativity.
I'm wondering if we can keep the best of both worlds. I don't care about benchmark evaluations so much as creativity, clarity and verbose answers.
Thanks in advance
I'm not affiliated with that model in any way.
The result would be a slightly more powerful model, roughly 2% or so better.
But they just released 3.6, so it would be better to wait for that (out of respect for the author's momentum) and assume that the dataset you are suggesting is already part of the model's corpus.
BTW, this openchat cannot be trained with axolotl, right? I only accept training requests in axolotl YAML format.
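For reference, an axolotl training request is just a YAML config file. A minimal sketch might look like the following; the base model, dataset path, and all hyperparameter values here are illustrative assumptions, not a tested recipe (OpenChat 3.5 is Mistral-based, hence the Mistral model class):

```yaml
# Hypothetical axolotl config sketch -- values are placeholders, not a tested recipe
base_model: openchat/openchat-3.5-0106        # assumed base model id
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer

datasets:
  - path: your-dataset-here                   # placeholder dataset path
    type: chat_template                       # depends on the dataset's format

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.00002
optimizer: adamw_torch

output_dir: ./out
```

Whether this particular OpenChat checkpoint loads cleanly in axolotl would still need to be verified against its config and tokenizer.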
It's more about the style of the answers than benchmark performance. The 3.6 version, although more knowledgeable, is not more verbose than other models.
Regarding whether it can be trained with axolotl: I don't know. I just like your UNA models, and I like that specific OpenChat model :)