EOS doesn't seem to work
#21
by irotem98 · opened
Whatever I try, I can't get the model to end the sequence. Here's the example code I've been trying to make work:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype="auto",
    flash_attn=True,
    flash_rotary=True,
    fused_dense=True,
    device_map="cuda",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model.eval()
# Check EOS token configuration
print(f"EOS Token ID: {model.config.eos_token_id}")
base_prompt = '''def print_prime(n):
    """
    Print all primes between 1 and n
    """
    '''
device = 'cuda'
# Set EOS token
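# 50256 is <|endoftext|>, the tokenizer's end-of-text token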
model.config.eos_token_id = 50256
tokenizer.eos_token_id = model.config.eos_token_id
with torch.no_grad():
    inputs = tokenizer(base_prompt, return_tensors="pt").to(device)
    for i in range(2):
        output = model.generate(**inputs,
                                max_length=200,
                                num_return_sequences=1,
                                eos_token_id=model.config.eos_token_id,
                                early_stopping=True)
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f"Answer {i+1}: {text}")
        print('----------------')
This is somewhat expected, as it's not a fine-tuned model. Does your output display the EOS token? You may also need to play with no_repeat_ngram_size to prevent some repetition. Here's a notebook that demonstrates it; for most prompts, generation stops: https://colab.research.google.com/drive/12QSdpOqZx697YpmHiZ-SrrejFGAtXnOD?usp=sharing
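Something like this (a minimal sketch, reusing the model, tokenizer, base_prompt, and device objects from your snippet above) makes both checks concrete: decode with skip_special_tokens=False so <|endoftext|> stays visible, and pass no_repeat_ngram_size to generate:

with torch.no_grad():
    inputs = tokenizer(base_prompt, return_tensors="pt").to(device)
    output = model.generate(**inputs,
                            max_length=200,
                            eos_token_id=tokenizer.eos_token_id,
                            pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
                            no_repeat_ngram_size=3)
    # Decode WITHOUT skipping special tokens, so <|endoftext|> is visible if generated.
    print(tokenizer.decode(output[0], skip_special_tokens=False))
    # Check whether EOS (50256) appears among the newly generated tokens.
    prompt_len = inputs["input_ids"].shape[1]
    print("EOS generated:", tokenizer.eos_token_id in output[0][prompt_len:].tolist())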
Let me know if it works for you and if you find issues.
gugarosa changed discussion status to closed