  • Base Model: 42dot/42dot_LLM-SFT-1.3B
  • The v0.1 model was trained on helpful + safety data together and tended to give overly high scores to safe answers, so the two objectives were split and trained separately.
  • This model is not a safety model that gives low scores to unethical answers. If you need a safety model, use heegyu/ko-reward-model-safety-1.3b-v0.2.

Hyperparameters:

  • Batch: 128
  • Learning Rate: 1e-5 -> 1e-6 (Linear Decay)
  • Optimizer: AdamW (beta1 = 0.9, beta2 = 0.999)
  • Epochs: 3 (the main revision is the 1-epoch checkpoint; a schedule sketch follows this list)
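
The actual training code is not part of this card; below is a minimal sketch of the optimizer and learning-rate schedule described above, with a placeholder parameter list and an assumed step count.

import torch

# Stand-in parameter list; in practice these are the 1.3B reward model's parameters.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-5, betas=(0.9, 0.999))

total_steps = 1000  # assumed; the real value depends on dataset size, batch 128, and epoch count
# Linear decay from 1e-5 down to 1e-6 (10% of the base LR) instead of decaying to zero.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lambda step: 1.0 - 0.9 * min(step, total_steps) / total_steps,
)

for _ in range(total_steps):
    # ... forward/backward on a batch of 128 chosen/rejected pairs goes here ...
    optimizer.step()
    scheduler.step()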

Performance

Dataset                       Accuracy (epoch=1)
hh-rlhf-ko (helpful)          63.95
PKU-SafeRLHF-ko (better)      74.82
ko-ultrafeedback-binarized    73.02
Average                       70.59
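
The card does not spell out the evaluation procedure. A common definition of reward-model accuracy is the fraction of preference pairs where the chosen response receives a higher score than the rejected one; here is a rough sketch under that assumption (the pairs data below is an illustrative placeholder):

from transformers import pipeline

pipe = pipeline("text-classification", model="heegyu/ko-reward-model-helpful-1.3b-v0.2")

# Each pair is (chosen, rejected), both rendered in the <human>:/<bot>: template used below.
# Placeholder Korean text: "question" / "good answer" / "bad answer".
pairs = [
    ("<human>:\n질문\n<bot>:\n좋은 답변<|endoftext|>",
     "<human>:\n질문\n<bot>:\n나쁜 답변<|endoftext|>"),
]

correct = sum(
    pipe(chosen)[0]["score"] > pipe(rejected)[0]["score"]
    for chosen, rejected in pairs
)
print(f"accuracy: {correct / len(pairs):.2%}")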

Usage

  • ๊ธฐ์กด 42dot SFT ๋ชจ๋ธ์˜ ๋Œ€ํ™” ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉ.
  • ์‚ฌ์šฉ์ž์˜ ๋ฐœํ™”๋Š” <user>:\n๋กœ ์‹œ์ž‘
  • Bot์˜ ๋ฐœํ™”๋Š” <bot>:\n์œผ๋กœ ์‹œ์ž‘
from transformers import pipeline

pipe = pipeline("text-classification", model="heegyu/ko-reward-model-helpful-1.3b-v0.2")

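# An unhelpful refusal ("싫어요", "I don't want to") to a request for directions to Gwanghwamun Square -> low score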
pipe("""<human>:
๊ด‘ํ™”๋ฌธ ๊ด‘์žฅ ๊ฐ€๋Š” ๋ฐฉ๋ฒ• ์•Œ๋ ค์ฃผ์‹ค ์ˆ˜ ์žˆ๋‚˜์š”?
<bot>:
์‹ซ์–ด์š”<|endoftext|>""")
# 0.23718780279159546

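# The same question answered with detailed subway and bus directions -> high score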
pipe("""<human>:
๊ด‘ํ™”๋ฌธ ๊ด‘์žฅ ๊ฐ€๋Š” ๋ฐฉ๋ฒ• ์•Œ๋ ค์ฃผ์‹ค ์ˆ˜ ์žˆ๋‚˜์š”?
<bot>:
๊ด‘ํ™”๋ฌธ๊ด‘์žฅ์œผ๋กœ ๊ฐ€๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:
์ง€ํ•˜์ฒ  3ํ˜ธ์„  ๊ฒฝ๋ณต๊ถ์—ญ์—์„œ ํ•˜์ฐจํ•œ ํ›„ 6๋ฒˆ ์ถœ๊ตฌ๋กœ ๋‚˜์™€ ์ •๋ถ€์ค‘์•™์ฒญ์‚ฌ, ๊ด‘ํ™”๋ฌธ ๋ฐฉํ–ฅ์œผ๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค.
์ง€ํ•˜์ฒ  5ํ˜ธ์„  ๊ด‘ํ™”๋ฌธ์—ญ์—์„œ ํ•˜์ฐจํ•œ ํ›„ ํ•ด์น˜๋งˆ๋‹น ์—ฐ๊ฒฐํ†ต๋กœ๋ฅผ ์ด์šฉํ•ด 7๋ฒˆ ์ถœ๊ตฌ๋กœ ๋‚˜์™€ ๊ด‘์žฅ์ˆฒ ๋ฐฉํ–ฅ์œผ๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค.
์ง€ํ•˜์ฒ  1ํ˜ธ์„  ์‹œ์ฒญ์—ญ์—์„œ ํ•˜์ฐจํ•œ ํ›„ 3๋ฒˆ ์ถœ๊ตฌ๋กœ ๋‚˜์™€ ๋•์ˆ˜๊ถ์„ ์ง€๋‚˜ ์ฝ”๋ฆฌ์•„๋‚˜ ํ˜ธํ…” ๋ฐฉํ–ฅ์œผ๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค.
๋„๋ณด๋กœ 2๋ถ„ ๊ฑฐ๋ฆฌ์— ์žˆ๋Š” ์ข…๊ฐ์—ญ์„ ์ด์šฉํ•ฉ๋‹ˆ๋‹ค.
๊ด‘ํ™”๋ฌธ๊ด‘์žฅ์œผ๋กœ ๊ฐ€๋Š” ๋ฒ„์Šค ๋…ธ์„ ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 272๋ฒˆใ€401๋ฒˆใ€406๋ฒˆใ€704๋ฒˆใ€7022๋ฒˆ

๋„์›€์ด ๋˜์…จ์œผ๋ฉด ์ข‹๊ฒ ์Šต๋‹ˆ๋‹ค!<|endoftext|>""")
# 0.9110790491104126

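# Multi-turn chat: asked to recommend somewhere to go out, the bot answers "Just stay home" -> middling score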
pipe("""<human>:
์•ˆ๋…•ํ•˜์„ธ์š”
<bot>:
์•ˆ๋…•ํ•˜์„ธ์š”! ์˜ค๋Š˜ ๋‚ ์”จ๊ฐ€ ์ •๋ง ์ข‹์•„์š” ^^ ๋‚˜๋“ค์ด ๋‹ค๋…€์˜ค์‹œ๋Š”๊ฑด ์–ด๋–จ๊นŒ์š”?
<human>:
๋†€๋Ÿฌ๊ฐˆ๋งŒํ•œ ๊ณณ ์ถ”์ฒœํ•ด์ค„ ์ˆ˜ ์žˆ์–ด์š”?
<bot>:
๊ทธ๋ƒฅ ์ง‘์— ์žˆ์œผ์„ธ์š” <|endoftext|>""")
# 0.45030477643013

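# The same conversation, but the bot suggests hiking Bukhansan and describes the view -> higher score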
pipe("""<human>:
์•ˆ๋…•ํ•˜์„ธ์š”
<bot>:
์•ˆ๋…•ํ•˜์„ธ์š”! ์˜ค๋Š˜ ๋‚ ์”จ๊ฐ€ ์ •๋ง ์ข‹์•„์š” ^^ ๋‚˜๋“ค์ด ๋‹ค๋…€์˜ค์‹œ๋Š”๊ฑด ์–ด๋–จ๊นŒ์š”?
<human>:
๋†€๋Ÿฌ๊ฐˆ๋งŒํ•œ ๊ณณ ์ถ”์ฒœํ•ด์ค„ ์ˆ˜ ์žˆ์–ด์š”?
<bot>:
๋ถํ•œ์‚ฐ ๋“ฑ์‚ฐ์€ ์–ด๋– ์‹ ๊ฐ€์š”? ๋ง‘์€ ๋‚  ์ƒ์พŒํ•œ ๊ณต๊ธฐ๋ฅผ ๋งˆ์‹œ๋ฉด์„œ ๋†’์€ ๊ณณ์—์„œ ์„œ์šธ์˜ ๋„์‹œ ํ’๊ฒฝ์„ ๊ฐ์ƒํ•˜์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <|endoftext|>""")
# 0.7079325318336487



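# An unsafe request ("Where can I get drugs?") that the bot offers to help with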
pipe("""<human>:
๋งˆ์•ฝ์€ ์–ด๋””์—์„œ ๊ตฌํ•  ์ˆ˜ ์žˆ์–ด์š”?

<bot>:
์ €๋ ดํ•˜๊ฒŒ ๊ตฌํ•  ์ˆ˜ ์žˆ๋Š” ๊ณณ์„ ์•ˆ๋‚ดํ•ด๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. <|endoftext|>""")
# 0.4252806007862091


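# The same request answered with a warning about the risks; both replies to the unsafe request score about
# the same, since this is a helpfulness model, not a safety model (see the note at the top of the card)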
pipe("""<human>:
๋งˆ์•ฝ์€ ์–ด๋””์—์„œ ๊ตฌํ•  ์ˆ˜ ์žˆ์–ด์š”?

<bot>:
๋งˆ์•ฝ์€ ์ค‘๋…, ๊ฑด๊ฐ• ๋ฌธ์ œ, ๋ฒ•์  ๋ฌธ์ œ๋ฅผ ์ดˆ๋ž˜ํ•˜์—ฌ ์‹ฌ๊ฐํ•œ ์œ„ํ—˜์„ฑ์„ ๋‚ดํฌํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. <|endoftext|>""")
# 0.41439786553382874
Model size: 1.34B params (Safetensors, F32)
