Trained on ChatGPT Responses?
I told falcon-mamba its own name, but it replied that it was not falcon-mamba; rather, it said it was a model trained by OpenAI. Were ChatGPT responses used to train this model? If so, is that data part of RefinedWeb? If not, what explains this behavior?
Hi @astrologos,
Thanks for your message. I can confirm we did not explicitly train the model on ChatGPT prompts or responses. However, note that the common web datasets (including instruction datasets) used nowadays to train and fine-tune LLMs are quite contaminated with synthetic data generated by LLMs such as ChatGPT (see, for example, the screenshot below, taken from https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
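As a rough illustration of how such contamination surfaces, here is a minimal sketch that scans a streamed corpus for telltale ChatGPT-style phrases. The dataset name, the `content` column, and the marker phrases are assumptions for illustration only, not part of any actual curation pipeline:

```python
# Minimal sketch: flag documents containing telltale ChatGPT-style phrases.
# The dataset name and the "content" column are illustrative assumptions;
# adapt them to whatever corpus you actually want to audit.
from datasets import load_dataset

CHATGPT_MARKERS = [
    "as an ai language model",
    "i am chatgpt",
    "trained by openai",
]

def is_contaminated(text: str) -> bool:
    """Return True if the document contains any known ChatGPT marker phrase."""
    lowered = text.lower()
    return any(marker in lowered for marker in CHATGPT_MARKERS)

# Stream the corpus so it is never fully materialized in memory.
stream = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

flagged = 0
sample_size = 100_000
for i, row in enumerate(stream):
    if is_contaminated(row["content"]):
        flagged += 1
    if i + 1 == sample_size:  # audit only the first 100k documents
        break

print(f"{flagged} of {sample_size} sampled documents contain ChatGPT-style phrases")
```

Even a crude phrase-based check like this tends to turn up a non-trivial number of hits in recent web crawls, which is exactly the contamination effect described above.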
One technique to overcome this is to perform further model alignment through identification tuning, so that the model "knows" it is not ChatGPT but a different LLM. I suggest staying up to date with this organization, as we will release more models in the future that are better in every sense, including identification and alignment.
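For illustration only, here is a minimal sketch of what identification-tuning data could look like: a handful of supervised question/answer pairs about the model's identity, rendered with the tokenizer's chat template so they can be mixed into a regular SFT dataset. The questions, answers, and checkpoint name are assumptions, not the actual recipe used for Falcon-Mamba:

```python
# Sketch of identity-tuning examples. The Q/A pairs and checkpoint name are
# illustrative assumptions, not the actual alignment data used for Falcon-Mamba.
from transformers import AutoTokenizer

IDENTITY_QA = [
    ("What is your name?", "I am Falcon-Mamba, a language model developed by TII."),
    ("Are you ChatGPT?", "No, I am Falcon-Mamba, not ChatGPT. I was trained by TII, not OpenAI."),
    ("Who created you?", "I was created by the Technology Innovation Institute (TII)."),
]

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")

# Render each pair with the model's chat template; the resulting strings can be
# added to a standard supervised fine-tuning set for an extra alignment pass.
sft_texts = [
    tokenizer.apply_chat_template(
        [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ],
        tokenize=False,
    )
    for question, answer in IDENTITY_QA
]

for text in sft_texts:
    print(text)
```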
Hi @ybelkada, many thanks for your detailed response. I have a lot of hope for SSMs. Hopefully someday we'll see constant-time Deep Koopman LLMs ;)