Brief feedback.
Has potential, but right now it's too incoherent and has formatting issues similar to Sao10K/L3.1-8B-Niitama-v1.1, only worse. It seems to insist on the action (asterisks), dialogue (plain text) format even with examples, which isn't the end of the world and I don't mind too much, but even within that format there are other problems, like randomly giving extremely short responses even when previous responses were longer. I used a 5.0 bpw exl2 quant.
In terms of character card following I think it is better: more personality, and it feels less censored and more fun to chat with. It also seems more creative to me. If the intelligence can be improved and the formatting issues solved, it will probably be better than Stheno 3.2 in my opinion.
Still, very happy that you decided to make this model, was checking your page for this one everyday, thanks for making it, it has huge potential.
If it could be upscaled to 12 or 13B and trained again to make it more coherent, that would be good, but there's no need. It's coherent enough, like Nemo 12B.
@Herman555 bro, the whole community wants the opposite of what you're saying. This model gives responses like character.ai does.
You speak for the whole community? lol. I'm just comparing the model to Stheno 3.2 which is the best version currently imo, and seems to be recommended often.
I haven't seen the creator of the model say anywhere that the aim of this model is to have character.ai-like responses. Stheno 3.2 supported multiple formats with no problems, although in my testing, action (asterisks), dialogue (quotation marks) seemed to give the highest quality outputs. This makes it look as if there is a formatting issue with this model, but maybe I'm wrong and it's simply the intended format.
Anyways, I'm happy you are enjoying the model.
I honestly don't think this has future potential. Stheno 3.2 is still my favorite (Niitama is great too), but what's going on in 3.4 has nothing to do with what Sao did. Llama 3.1 8B has apparently immutable training issues introduced by Meta themselves. Every L3.1 RP finetune is lobotomized simply because L3.1 is dumb as bricks, even compared to L3. Right now, if you want capable high-context finetunes, we only have Mistral 12B.
Unless there's some way to beat the dumb out of Llama3.1 this will always be disappointing.
Interesting, thanks for your input. This explains why every Llama 3.1 model has been so disappointing. Don't know what Meta was cooking with this one.
I think this is my favorite version, at least as far as prose goes. It has a more playful tone and feels closer to what human roleplay feels like.