---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-2.8B-v1.0
---
"We must sleep, but AI Never Sleeps!"
## Prompt Template

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```
## Simple Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True)

prompt_template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {prompt}\nAssistant:\n"
)
# "Please recommend a diet menu. (A) Salad (B) Chicken (C) Pizza (D) Pasta"
text = '다이어트식 메뉴를 추천해주세요.\n\n(A) 샐러드\n(B) 치킨\n(C) 피자\n(D) 파스타'

model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
## Example Output

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 다이어트식 메뉴를 추천해주세요.

(A) 샐러드
(B) 치킨
(C) 피자
(D) 파스타
Assistant:
(A) 샐러드를 추천드립니다. 샐러드는 저칼로리이면서도 영양소가 풍부해 다이어트식으로 적합합니다. 다양한 채소와 단백질을 추가하여 균형 잡힌 식사를 만드실 수 있습니다.
```

*(English: "Please recommend a diet menu. (A) Salad (B) Chicken (C) Pizza (D) Pasta" → "I recommend (A) the salad. Salad is low in calories yet rich in nutrients, making it well suited to a diet. You can add a variety of vegetables and protein to build a balanced meal.")*
## About the Model

First of all, our overwhelming gratitude goes to the yanolja/EEVE model and team! This model is a fine-tuned version of crimsonjoo/Neversleep-3B-v0.1, which is itself a Korean vocabulary-extended version of microsoft/phi-2. Specifically, we applied Direct Preference Optimization (DPO) using Axolotl.

For more details, please refer to our technical report: *Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models*.
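DPO fine-tunes the policy directly on preference pairs, with no separate reward model. As a rough intuition for what the trainer optimizes, here is a minimal per-example sketch of the DPO objective (illustrative only; this is not the Axolotl implementation, and the `beta` value and log-probability inputs are assumptions):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Illustrative per-example DPO loss (Rafailov et al., 2023).

    Inputs are summed token log-probabilities of the chosen and rejected
    responses under the trained policy and a frozen reference model.
    """
    # Implicit rewards: beta-scaled log-ratio against the reference model
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): shrinks as the policy widens the preference gap
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss falls as the policy assigns relatively more probability to the chosen response than the rejected one, while the reference model anchors both terms so the policy cannot drift arbitrarily far from its starting point.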
## Training Data

- Korean-translated version of Open-Orca/SlimOrca-Dedup
- Korean-translated version of argilla/ultrafeedback-binarized-preferences-cleaned
- No other dataset was used.
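For reference, a preference record from an UltraFeedback-style binarized dataset can be rendered into the chat template above before DPO training. The sketch below is a hypothetical illustration, not the actual training pipeline; the `prompt`/`chosen`/`rejected` field names and the `to_dpo_example` helper are assumptions:

```python
# The chat template used throughout this card
PROMPT_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions.\nHuman: {prompt}\nAssistant:\n"
)

def to_dpo_example(record):
    """Turn one preference record into (prompt, chosen, rejected) strings."""
    prompt = PROMPT_TEMPLATE.format(prompt=record["prompt"])
    return prompt, record["chosen"], record["rejected"]

# Hypothetical record in the shape of a binarized-preferences dataset
record = {
    "prompt": "What is the capital of Korea?",
    "chosen": "The capital of Korea is Seoul.",
    "rejected": "I'm not sure.",
}
formatted, chosen, rejected = to_dpo_example(record)
```

Both responses in a pair share the same formatted prompt; only the completion differs, which is what the DPO objective compares.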