Is the original model allganize/Llama-3-Alpha-Ko-8B-Instruct?
Is the original model
https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct
?
Yes
Thanks. Can I ask you one more thing? The model's answers are strange.
from llama_cpp import Llama

model = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3-Alpha-Ko-8B-Instruct-GGUF",
    filename="Llama-3-Alpha-Ko-8B-Instruct.Q4_0.gguf",
    n_gpu_layers=-1,
    chat_format="llama-3",
)
output = model.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a kind assistant."},
        {"role": "user", "content": "안녕"},
    ]
)
Then the output is:
<?utzerutzer<?<? Destruction<?<?<?utzer_<?<?utzer姫<?<?<?<?<?<?<?<?utzer<?<?utzerutzer<?<?<?<?<?<?utzer<? Rica_<?<?<?<?<?<?<?utzerutzer_<?utzer_<?<?<?utzer_<?utzer_<?<?<?<?<?<? sna<?<?utzer<?<?<?<?<?<?<?<?utzer<?<?utzer<?<?<?<?<?<?<?<?<?<?<?<?<?<?<?<?utzer姫<?<?utzer<?<?<?utzer
I don't understand. The code is the same as when I run other models, but only this model responds strangely.
Am I doing something wrong?
Hey @coconut00, I have updated the files; they should be working now.