When input tokens < 4096 but total input+output tokens > 4096, the model produces poor output

#85 opened by einsteiner1983

We have found that something is happening at 4096 tokens
[Screenshot attachment: Screenshot 2024-07-02 at 8.59.53 AM.png]

I am having the same problem. I can't get the model to work with more than 4k tokens. Any help would be appreciated.

@einsteiner1983 I figured out that there are 2 parameters when you initialize the model that default to 4096:

max_position_embeddings
original_max_position_embeddings

If you pass them to AutoModelForCausalLM.from_pretrained and set them to a higher number, the model generates properly past 4096 (see the sketch below). However, I am still having issues here:
the model generates the same last paragraph again and again until it decides to stop. There is something else I am missing...
Can someone help?
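For reference, a minimal sketch of that override. Assumptions not in the post: the exact checkpoint (here microsoft/Phi-3-mini-128k-instruct as a placeholder) and the target length of 8192. from_pretrained forwards unrecognized keyword arguments to the model config, which is how these two values reach it.

```python
# Minimal sketch of the workaround described above; the model id and the
# 8192 target are placeholders, not from the original post.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    max_position_embeddings=8192,            # per the post, defaults to 4096
    original_max_position_embeddings=8192,   # per the post, defaults to 4096
    trust_remote_code=True,
)
```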

Yeah, here at NVIDIA we have not figured it out either. I thought it might be a TRT-LLM issue, but it happens on the HF model as well.

I have the same problem.

I just ran into the same problem. Whenever the prompt is slightly below 4096 tokens and the generation crosses that 4096 boundary, the entire generation afterward is complete gibberish.
I also tried the same prompt in llama_cpp, and it does not have this problem (at least in my short test).
In a long issue about the phi3 implementation in llama_cpp, they describe that dynamically switching from short_factor to long_factor is not possible, based on their tests.
My reading of the transformers code, though, is that it does exactly that dynamic switching (roughly as in the sketch below).
Wasn't this exact issue already discussed in this discussion?
Why was that closed / not implemented / not fixed? Am I missing something?
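For context, a paraphrased sketch (not the exact transformers source) of that dynamic switching: the Phi-3 scaled rotary embedding picks long_factor once the sequence length exceeds original_max_position_embeddings and short_factor below it, so a generation that starts under 4096 and grows past it changes the inv_freq rescaling mid-stream. Names mirror the Phi-3 config fields.

```python
# Paraphrased sketch of the short/long factor switch described above; the
# real implementation builds inv_freq from the chosen factors, but the
# selection logic is the point here.
import torch

def rope_rescale_factors(seq_len: int,
                         original_max_position_embeddings: int,
                         short_factor: list[float],
                         long_factor: list[float]) -> torch.Tensor:
    """Pick the per-dimension inv_freq rescaling factors for this seq_len."""
    if seq_len > original_max_position_embeddings:
        return torch.tensor(long_factor, dtype=torch.float32)
    return torch.tensor(short_factor, dtype=torch.float32)

# A ~4000-token prompt starts on short_factor; once generation pushes
# seq_len past 4096, the factors flip to long_factor, while positions
# already in the KV cache were encoded with the old factors -- the
# suspected source of the gibberish.
```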

I simply removed the short_factor that scales inv_freq, and there are no longer incoherent outputs when generation crosses the 4096 threshold. Surprisingly, I don't see incoherent outputs with seq len < 4096 either. Maybe not ideal, but better than before.
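One way to reproduce that workaround without editing the modeling code, assuming a checkpoint whose config carries a rope_scaling dict with a short_factor list (the model id is a placeholder): overwrite short_factor with ones so inv_freq is never rescaled below the boundary.

```python
# Hedged sketch of the "remove short_factor" workaround: neutralize the
# short-range rescaling by replacing every factor with 1.0 before loading.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "microsoft/Phi-3-mini-128k-instruct"  # placeholder checkpoint

config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.rope_scaling["short_factor"] = [1.0] * len(config.rope_scaling["short_factor"])

model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, trust_remote_code=True
)
```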

Thanks @einsteiner1983 for reporting the bug. The recent Phi-3.5 release (https://huggingface.co/microsoft/Phi-3.5-mini-instruct) addresses this issue in the released remote code; it is worth a try. The bug-fix PR for the HF transformers repo is in progress: https://github.com/huggingface/transformers/pull/33129. Thanks for your patience.
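For anyone trying that release, a minimal load sketch using the remote code from the model repo (generation settings left to the reader):

```python
# Sketch: loading the Phi-3.5 release mentioned above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```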
