Will You Consider Continued SFT?

#1 opened by kevinpro

Thank you for your awesome work! I am impressed by the dataset and by the performance of LLMs tuned on it.

I notice that you prefer to train from the base model and achieve performance comparable to the officially released model.
Would you consider continuing SFT from the officially released chat model, and would that yield further gains?
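For concreteness, here is roughly what I mean by continued SFT: a minimal sketch that resumes supervised fine-tuning from an already-instruction-tuned chat checkpoint on Infinity-Instruct, using Hugging Face TRL. The model name, the "0625" subset, the conversation field names, and the hyperparameters are placeholders/assumptions on my part, not your actual setup.

```python
# Minimal sketch of continued SFT on a chat checkpoint (assumptions noted below).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

# Hypothetical chat checkpoint; swap in the officially released chat model.
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes Infinity-Instruct's "0625" subset with ShareGPT-style
# "conversations" records ({"from": ..., "value": ...}).
dataset = load_dataset("BAAI/Infinity-Instruct", "0625", split="train")

def to_text(example):
    # Render each multi-turn conversation with the checkpoint's own chat
    # template, so continued SFT stays consistent with its prior tuning.
    messages = [
        {"role": "user" if turn["from"] == "human" else "assistant",
         "content": turn["value"]}
        for turn in example["conversations"]
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="continued-sft",
        dataset_text_field="text",
        learning_rate=1e-5,  # lower LR than base-model SFT to limit forgetting
        num_train_epochs=1,
    ),
)
trainer.train()
```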

Beijing Academy of Artificial Intelligence org

Apologies for the late reply. We will of course continue to post new versions of the SFT data, as well as new models fine-tuned on Infinity-Instruct. You can keep an eye on our dataset page (https://huggingface.co/datasets/BAAI/Infinity-Instruct) for new releases. Thanks!
