Removing Stop Token from Decode/Generate
I'm using BLOOMZ to generate text and want it to be "unending", i.e. I want to suppress the stop token. I can easily remove it from the result text, but I'm also storing the output's tensor data. I was hoping there was a way to remove the stop token from the results without re-encoding the text.
For example:
- Encode input text into tensors
- Generate text (results_text)
- Remove input text from results_text
- Save results_text + input tensors (at this step)
In step #4 I want to remove the stop token from the output tensors I've collected in my variable. My instinct says I need to decode, remove it, and re-encode, but I'm sure there's a way to drop it from the tensor in my variable directly?
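For reference, here's a minimal sketch of trimming the stop token out of stored token IDs directly, with no decode/re-encode round trip. The ID values are made up for illustration; in practice the real value comes from tokenizer.eos_token_id, and with a PyTorch tensor the same idea is ids[ids != eos_id].

```python
# Sketch: remove the stop token from stored token IDs without decoding.
# All ID values are hypothetical; use tokenizer.eos_token_id in practice.
def strip_eos(token_ids, eos_token_id):
    """Return token_ids with every occurrence of eos_token_id removed."""
    return [t for t in token_ids if t != eos_token_id]

generated = [5, 17, 42, 2]                  # pretend 2 is the stop token ID
trimmed = strip_eos(generated, eos_token_id=2)
print(trimmed)                              # [5, 17, 42]
```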
Silly me, I used skip_special_tokens=True to achieve what I was after.
If I skip special tokens I get a stop and repeated input text in my output.
Prompt + Additions + Stop Token + Prompt + NEW TEXT
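On the repeated-prompt part: generate returns the prompt tokens followed by the new tokens, so the echoed prompt can be sliced off by its token length rather than by string matching. A toy sketch with made-up IDs (with a batched tensor the equivalent slice is output_ids[:, input_ids.shape[1]:]):

```python
# Sketch: generate echoes the prompt, so slice it off by token count.
# All IDs here are hypothetical.
prompt_ids = [101, 7, 8]                    # encoded prompt
output_ids = [101, 7, 8, 55, 66, 2]         # prompt + new tokens + stop token
new_ids = output_ids[len(prompt_ids):]      # keep only the continuation
print(new_ids)                              # [55, 66, 2]
```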
Do you want the model to not stop? In that case you can set min_new_tokens=X in model.generate so it generates for your desired length. 👍
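Roughly speaking, min_new_tokens works by suppressing the eos token until the minimum count is reached. A toy loop illustrating that behaviour (next_token here is a stand-in for the model's sampling step, not real transformers internals):

```python
# Toy illustration of min_new_tokens semantics: eos is ignored until
# at least min_new_tokens tokens have been emitted. Not real internals.
def generate_ids(next_token, eos_id, min_new_tokens, max_new_tokens):
    out = []
    for step in range(max_new_tokens):
        tok = next_token(step)
        if tok == eos_id:
            if len(out) >= min_new_tokens:
                break           # minimum met: eos is honoured
            continue            # minimum not met: eos is suppressed
        out.append(tok)
    return out

stream = [5, 2, 7, 8, 2, 9]     # hypothetical token stream; 2 = eos
print(generate_ids(stream.__getitem__, eos_id=2,
                   min_new_tokens=3, max_new_tokens=6))   # [5, 7, 8]
```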
Or do you want to remove the stop token from the generation? If so, setting skip_special_tokens=True is the way to go.
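For reference, this is essentially all skip_special_tokens does at decode time: the special-token IDs are filtered out before the text is assembled. A toy decoder with a made-up vocab (an assumption for illustration, not the real BLOOMZ tokenizer):

```python
# Toy decode() showing the effect of skip_special_tokens.
# The vocab and IDs are assumptions for illustration only.
vocab = {5: "Hello", 17: " world", 2: "</s>"}
special_ids = {2}

def decode(ids, skip_special_tokens=False):
    if skip_special_tokens:
        ids = [i for i in ids if i not in special_ids]
    return "".join(vocab[i] for i in ids)

print(decode([5, 17, 2]))                             # Hello world</s>
print(decode([5, 17, 2], skip_special_tokens=True))   # Hello world
```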
I'm really looking for open-ended fragments, but when it decides to stop, it stops. I'm using max_length now; I've not tried max_new_tokens on generate yet. It seems I might still hit the end early if the new-token count/length is less than what the model considers 'done'.
Ah, I meant min_new_tokens, sorry. I.e. you can set min_new_tokens to force a minimum number of generated tokens, during which the eos token is ignored.