Zephyr hallucinations with conversational memory
When Zephyr is supplied with a short conversational memory buffer (4 entries), it seems to occasionally hallucinate when the answer is not available in the main text. Even adding guidance to the prompt, for example "handle the conversation history with care", seems to be ignored.
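For reference, this is roughly the kind of setup I mean. It's a minimal sketch, not the actual document-qa code: the sliding 4-entry window, the guidance sentence, and the function names are all just illustrative.

```python
from collections import deque

# Sliding-window memory: keeps only the last 4 (question, answer) pairs.
memory = deque(maxlen=4)

def build_prompt(question: str, context: str) -> str:
    # Prepend the guidance sentence and the short conversation history
    # to the retrieved context before asking the question.
    history = "\n".join(f"Q: {q}\nA: {a}" for q, a in memory)
    return (
        "Handle the conversation history with care; answer only from the "
        "provided context, and say you don't know if the answer is not there.\n\n"
        f"Conversation history:\n{history}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def record_turn(question: str, answer: str) -> None:
    # Called after each exchange; old turns fall out of the window automatically.
    memory.append((question, answer))
```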
We've noticed that this behaviour does not seem to happen with GPT-3.5-turbo or Mistral.
More information and examples here: https://github.com/lfoppiano/document-qa/pull/23
You can test the behaviour here: https://huggingface.co/spaces/lfoppiano/document-qa-develop
So, how can this be fixed?
I've modified the prompt in many ways, but none of them worked.
Building a RAG pipeline on top of Zephyr might backfire, so for now I've simply removed the conversational memory when Zephyr is used (see the sketch below).
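Something along these lines is what I mean by removing the memory. Again, this is only a sketch under my own assumptions: matching on the model name and the `assemble_prompt` helper are made up, not the project's actual logic.

```python
def assemble_prompt(model_name: str, question: str, context: str,
                    history: list[tuple[str, str]]) -> str:
    # Workaround sketch: drop the conversation history entirely when the
    # underlying model is Zephyr, keep the 4-entry window for other models.
    # Matching on the model name is an assumption for illustration only.
    use_memory = "zephyr" not in model_name.lower()
    history_block = (
        "\n".join(f"Q: {q}\nA: {a}" for q, a in history) if use_memory else ""
    )
    prompt = "Answer only from the provided context.\n\n"
    if history_block:
        prompt += f"Conversation history:\n{history_block}\n\n"
    prompt += f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return prompt
```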
I think it's better to just use mistral-instruct or, even better, mixtral.