Anybody else having the same problem with the model ending answers prematurely?
I ask it to write a chapter about anything, and it only writes the title and nothing else. I have to then ask it to write the missing content.
@ElvisM I've noticed a series of issues similar to your missing chapter when using Mistral Small 2409. It's a good LLM, but it's almost as if Mistral forgot to use some of their fine-tuning data.
For example, it failed at tasks that earlier Mistral models, such as Mixtral 8x7b, handled reliably, like rewriting a poem so it rhymes. Instead it simply repeated the non-rhyming poem word for word and claimed it now rhymed.
Another example: when instructed to end 8 sentences with a given word, it ended only 1 of them with that word, likely by pure chance.
Again, this is a strong model for its size; it just isn't fully fine-tuned and regularly behaves like a base model on certain tasks.