
The training dataset consists of the 2,000 longest examples from no_robots, reddit_instruct, dolly, and OpenOrca, plus two other personal datasets.
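
The length-based selection can be sketched roughly as below. This is a minimal illustration of ranking by response length and keeping the longest examples; the dataset ID and column name are assumptions for demonstration, since the exact preprocessing pipeline is not published here.

```python
# Illustrative "longest examples" selection sketch; the dataset ID and
# "response" column are assumptions, not the author's exact pipeline.
from datasets import load_dataset

ds = load_dataset("databricks/databricks-dolly-15k", split="train")

# Rank examples by response length, longest first.
ranked = sorted(ds, key=lambda ex: len(ex["response"]), reverse=True)

# Keep the 2k longest examples, as described above.
longest = ranked[:2000]
print(len(longest), longest[0]["response"][:120])
```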

Please use the ChatML format, with either the default system message or one of your own. The model was trained with various system messages; the one in the config is the default.
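
A minimal generation sketch with transformers follows, assuming the repo's tokenizer ships a ChatML chat template; the system message and sampling settings here are placeholders, not the config defaults.

```python
# Minimal usage sketch, assuming the tokenizer provides a ChatML chat
# template; the system message is a placeholder, not the config default.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/Cucumber-7b-10k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a detailed explanation of how tides work."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```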

The model is:

  • Very good at generating long and coherent text.

  • Creative due to data from Reddit ELI5 and a few other sources.

  • Better at handling longer inputs.

  • Not great with short text, both in input and in generation.

The aim is to see how well the "Long is More for Alignment" paper holds up. This is basically LIMA + LMA: a small instruction-tuning set built from the longest examples. There should be no benchmark contamination as far as I am aware. Around 70% of the data comes from the datasets mentioned above, and I am happy with how the model turned out.


Model size: 7.24B params · Tensor type: BF16 (Safetensors)
