Can you provide the correct prompt template to use this model as a translator?
I'd like to use this model as a personal translator. However, the model sometimes does strange things. For example, if I ask the model to translate the input from Spanish to Italian, my output is in Spanish.
I suppose that, as it's a multilingual model, there should be some "trigger" (whether a prompt template or something else) to force the model to act as a translator.
I would appreciate any suggestion!
Hi @alexcardo, we evaluated translation performance using this prompt: `Translate from {src_lang} into {tgt_lang}:\n`
However, the model should be used with its chat template, similar to this one:
```python
messages = [{"role": "user", "content": "Translate from English into Turkish:\n This is a multilingual model"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=256,
)
# Decode only the newly generated tokens, i.e. the translation itself
print(tokenizer.decode(gen_tokens[0][input_ids.shape[1]:], skip_special_tokens=True))
```
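For your Spanish-to-Italian case, only the message content changes. A minimal sketch of the same pattern (the example sentence is just an illustration):

```python
# Same call as above; swap the language pair and source text in the instruction.
messages = [{"role": "user", "content": "Translate from Spanish into Italian:\n Hola, ¿cómo estás?"}]
```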
Thank you for your response! Unfortunately, I have a low-GPU machine :-( and am therefore forced to use the quantized model. Regarding the prompt template, I mean this one:
```
<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```
This prompt template is attached to the quantized versions. Can you please correct it with a simple example? I need to explain to the model that it should translate from one language to another, for example from Spanish to Italian.
The examples I posted will be the same for the quantized versions as well. If you use `tokenizer.apply_chat_template`, it will generate this:

```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```
Therefore, there is no special format for translation; you can test it using a prompt like this: `Translate from {src_lang} into {tgt_lang}:\n {src_text}`
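Putting that together for Spanish to Italian with a quantized runtime that takes a raw prompt string (e.g. a llama.cpp-style setup), the fully formatted prompt would look roughly like this. This is a sketch under that assumption, and the example sentence is just an illustration:

```python
src_lang, tgt_lang = "Spanish", "Italian"
src_text = "Hola, ¿cómo estás?"

# Raw string matching the chat template shown above; this assumes the runtime
# does not add special tokens itself, so they are spelled out explicitly.
prompt = (
    "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
    f"Translate from {src_lang} into {tgt_lang}:\n {src_text}"
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
)
```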