
llama_with_eeve_new_03_150m

Model Info

This model was pretrained from scratch (starting from random weights) using the LLaMA architecture and the EEVE tokenizer.


λ‹€μŒ μ‹œμŠ€ν…œ ν”„λ‘¬ν”„νŠΈκ°€ 주어진 μƒνƒœλ‘œ ν•™μŠ΅ν•˜μ˜€μŠ΅λ‹ˆλ‹€(λͺ¨λΈ μ‚¬μš© μ‹œ ν”„λ‘¬ν”„νŠΈλ₯Ό 포함해야 ν•©λ‹ˆλ‹€).

```
### System:\n당신은 λΉ„λ„λ•μ μ΄κ±°λ‚˜, μ„±μ μ΄κ±°λ‚˜, λΆˆλ²•μ μ΄κ±°λ‚˜ λ˜λŠ” μ‚¬νšŒ ν†΅λ…μ μœΌλ‘œ ν—ˆμš©λ˜μ§€ μ•ŠλŠ” λ°œμ–Έμ€ ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. μ‚¬μš©μžμ™€ 즐겁게 λŒ€ν™”ν•˜λ©°, μ‚¬μš©μžμ˜ 응닡에 κ°€λŠ₯ν•œ μ •ν™•ν•˜κ³  μΉœμ ˆν•˜κ²Œ μ‘λ‹΅ν•¨μœΌλ‘œμ¨ μ΅œλŒ€ν•œ 도와주렀고 λ…Έλ ₯ν•©λ‹ˆλ‹€.\n\n### User:\n {question}
```

(In English, the system prompt reads roughly: "You do not make remarks that are unethical, sexual, illegal, or socially unacceptable. You converse pleasantly with the user and try to be as helpful as possible by responding to the user accurately and kindly." Keep the Korean text verbatim at inference time.)
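Because the training-time format has to be reproduced exactly, a small helper can assemble the full prompt for any question. This is only a convenience sketch; `build_prompt` and `SYSTEM_PROMPT` are illustrative names, not part of the repository:

```python
SYSTEM_PROMPT = (
    "### System:\n"
    "당신은 λΉ„λ„λ•μ μ΄κ±°λ‚˜, μ„±μ μ΄κ±°λ‚˜, λΆˆλ²•μ μ΄κ±°λ‚˜ λ˜λŠ” μ‚¬νšŒ ν†΅λ…μ μœΌλ‘œ ν—ˆμš©λ˜μ§€ μ•ŠλŠ” λ°œμ–Έμ€ ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. "
    "μ‚¬μš©μžμ™€ 즐겁게 λŒ€ν™”ν•˜λ©°, μ‚¬μš©μžμ˜ 응닡에 κ°€λŠ₯ν•œ μ •ν™•ν•˜κ³  μΉœμ ˆν•˜κ²Œ μ‘λ‹΅ν•¨μœΌλ‘œμ¨ μ΅œλŒ€ν•œ 도와주렀고 λ…Έλ ₯ν•©λ‹ˆλ‹€."
)

def build_prompt(question: str) -> str:
    # Reproduce the training-time layout: system prompt, blank line, user turn
    # (note the single space before the question, as in the card's template).
    return f"{SYSTEM_PROMPT}\n\n### User:\n {question}"
```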

Evaluation results

Evaluation was carried out with an LLM-as-a-judge approach. For details, please refer to " ".

| Model | Params | Fluency | Coherence | Accuracy | Completeness |
|---|---|---|---|---|---|
| kikikara/llama_with_eeve_new_03_150m (this model) | 0.15B | 63.12% | 37.18% | 23.75% | 23.75% |
| EleutherAI/polyglot-ko-1.3b | 1.3B | 51.25% | 40.31% | 34.68% | 32.5% |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 54.37% | 40.62% | 41.25% | 35% |
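The card does not spell out the judging setup, so the following is only a minimal sketch of the general LLM-as-a-judge pattern it refers to; the rubric wording, the 0-4 scale, and the `judge_pipe` argument are assumptions for illustration, not the actual evaluation code:

```python
# Minimal LLM-as-a-judge sketch. The judge model, rubric wording, and
# 0-4 scale are assumptions, not the card's actual evaluation setup.
JUDGE_TEMPLATE = """Rate the answer on a 0-4 scale for each criterion.

Question: {question}
Answer: {answer}

Reply exactly as: Fluency=<n> Coherence=<n> Accuracy=<n> Completeness=<n>"""

def judge_answer(judge_pipe, question: str, answer: str) -> str:
    # `judge_pipe` is assumed to be a transformers text-generation pipeline
    # wrapping a stronger judge model. Averaging per-criterion scores over a
    # test set and normalizing by the maximum would yield percentages like
    # those in the table above.
    prompt = JUDGE_TEMPLATE.format(question=question, answer=answer)
    return judge_pipe(prompt)[0]["generated_text"]
```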

How to use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kikikara/llama_with_eeve_new_03_150m")
model = AutoModelForCausalLM.from_pretrained("kikikara/llama_with_eeve_new_03_150m")

question = "λ„ˆλŠ” λˆ„κ΅¬μ•Ό?"  # "Who are you?"

# The model was trained with this system prompt, so include it verbatim
prompt = f"### System:\n당신은 λΉ„λ„λ•μ μ΄κ±°λ‚˜, μ„±μ μ΄κ±°λ‚˜, λΆˆλ²•μ μ΄κ±°λ‚˜ λ˜λŠ” μ‚¬νšŒ ν†΅λ…μ μœΌλ‘œ ν—ˆμš©λ˜μ§€ μ•ŠλŠ” λ°œμ–Έμ€ ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€.\nμ‚¬μš©μžμ™€ 즐겁게 λŒ€ν™”ν•˜λ©°, μ‚¬μš©μžμ˜ 응닡에 κ°€λŠ₯ν•œ μ •ν™•ν•˜κ³  μΉœμ ˆν•˜κ²Œ μ‘λ‹΅ν•¨μœΌλ‘œμ¨ μ΅œλŒ€ν•œ 도와주렀고 λ…Έλ ₯ν•©λ‹ˆλ‹€.\n\n\n### User:\n {question}"

# max_length caps the combined prompt + generation length;
# the mild repetition penalty reduces looping in this small model
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=400, repetition_penalty=1.12)
result = pipe(prompt)

print(result[0]['generated_text'])
```
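By default the pipeline echoes the prompt inside `generated_text`; if you want only the model's reply, you can slice the prompt off (a convenience, not something the model requires):

```python
# Strip the echoed prompt to keep only the model's reply
answer = result[0]["generated_text"][len(prompt):].strip()
print(answer)
```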

Model weights: 150M params (safetensors, F32)
