This is a test model built with Sungkyunkwan University industry-academia cooperation data.
It was trained on the existing 107,000 examples plus an additional 2,000 everyday-conversation examples.
The model was fine-tuned from EleutherAI/polyglot-ko-5.8b as the base, with the following training parameters (a sketch of how they map onto a LoRA setup follows the list):
batch_size: 128
micro_batch_size: 8
num_epochs: 3
learning_rate: 3e-4
cutoff_len: 1024
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
weight_decay: 0.1
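The card does not include the training script itself, but the `lora_*` parameters above suggest a LoRA fine-tune with Hugging Face `peft`. Below is a minimal sketch of that mapping, not the actual script: the `target_modules` choice and `output_dir` are assumptions, `gradient_accumulation_steps` is derived as batch_size / micro_batch_size = 128 / 8 = 16, and `cutoff_len: 1024` would correspond to truncating tokenized examples to 1024 tokens.

```python
# A sketch only, assuming a peft-based LoRA fine-tune; the actual
# training script is not part of this card.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,                # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    # Assumption: polyglot-ko is GPT-NeoX-based, whose fused attention
    # projection layer is named "query_key_value".
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="chatskku-lora",            # hypothetical path
    per_device_train_batch_size=8,         # micro_batch_size
    gradient_accumulation_steps=128 // 8,  # batch_size / micro_batch_size
    num_train_epochs=3,                    # num_epochs
    learning_rate=3e-4,                    # learning_rate
    weight_decay=0.1,                      # weight_decay
)
```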
KoBEST 10-shot scores were measured for this model.
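The exact evaluation command is not given in the card. Below is a sketch of how KoBEST 10-shot scores are commonly measured with EleutherAI's lm-evaluation-harness; the harness version, API, and task list are assumptions, not the setup actually used here.

```python
# Sketch, assuming lm-evaluation-harness >= 0.4 (pip install lm-eval);
# not necessarily the evaluation setup used for this card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=jojo0217/ChatSKKU5.8B",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag",
           "kobest_sentineg", "kobest_wic"],
    num_fewshot=10,  # 10-shot, matching the setting above
)
print(results["results"])
```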
The model's prompt template follows KULLM's template.
The test code is as follows:
https://colab.research.google.com/drive/1xEHewqHnG4p3O24AuqqueMoXq1E3AlT0?usp=sharing
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_name = "jojo0217/ChatSKKU5.8B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,  # set to False if you do not want 8-bit quantization
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,  # pass the loaded tokenizer object, not the name string
    device_map="auto",
)
def answer(message):
    # KULLM prompt template; the Korean text reads: "Below is an
    # instruction that describes a task. Write a response that
    # appropriately completes the request."
    prompt = f"์๋๋ ์์
์ ์ค๋ช
ํ๋ ๋ช
๋ น์ด์
๋๋ค. ์์ฒญ์ ์ ์ ํ ์๋ฃํ๋ ์๋ต์ ์์ฑํ์ธ์.\n\n### ๋ช
๋ น์ด:\n{message}"
    ans = pipe(
        prompt + "\n\n### ์๋ต:",  # "### Response:" marker of the template
        do_sample=True,
        max_new_tokens=512,
        temperature=0.7,
        repetition_penalty=1.0,
        return_full_text=False,  # return only the newly generated text
        eos_token_id=2,
    )
    msg = ans[0]["generated_text"]
    return msg
answer('์ฑ๊ท ๊ด๋ํ๊ต์๋ํด ์๋ ค์ค')  # "Tell me about Sungkyunkwan University"
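Note that recent transformers releases deprecate passing `load_in_8bit` directly to `from_pretrained`. A sketch of the equivalent load via `BitsAndBytesConfig` (requires the bitsandbytes package):

```python
# Equivalent 8-bit load on newer transformers versions.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "jojo0217/ChatSKKU5.8B",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
```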