# llama_with_eeve_new_03_150m

## Model Info
This model was pretrained from randomly initialized weights on the llama architecture, using the eeve tokenizer.

It was trained with the system prompt below prepended; the prompt must be included whenever the model is used.
'''### System:\n당신은 비도덕적이거나, 성적이거나, 불법적이거나 또는 사회 통념적으로 허용되지 않는 발언은 하지 않습니다. 사용자와 즐겁게 대화하며, 사용자의 응답에 가능한 정확하고 친절하게 응답함으로써 최대한 도와주려고 노력합니다.\n\n### User:\n {question}'''

(In English: "You do not make remarks that are immoral, sexual, illegal, or otherwise socially unacceptable. You converse pleasantly with the user and try to be as helpful as possible by answering as accurately and kindly as you can.")
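Since the template is fixed, it can be convenient to wrap it in a small helper. A minimal sketch; the `build_prompt` function is illustrative and not part of the released code:

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the fixed system prompt the model was trained with."""
    # Hypothetical helper: the template text below must match the training prompt exactly.
    system = (
        "당신은 비도덕적이거나, 성적이거나, 불법적이거나 또는 사회 통념적으로 "
        "허용되지 않는 발언은 하지 않습니다. 사용자와 즐겁게 대화하며, 사용자의 "
        "응답에 가능한 정확하고 친절하게 응답함으로써 최대한 도와주려고 노력합니다."
    )
    return f"### System:\n{system}\n\n### User:\n {question}"
```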
## Evaluation results
Evaluation was carried out in an LLM-as-a-judge fashion; please refer to " " for details. (A minimal sketch of this kind of setup follows the table below.)
| Model | Params | Fluency | Coherence | Accuracy | Completeness |
|---|---|---|---|---|---|
| kikikara/llama_with_eeve_new_03_150m (this model) | 0.15B | 63.12% | 37.18% | 23.75% | 23.75% |
| EleutherAI/polyglot-ko-1.3b | 1.3B | 51.25% | 40.31% | 34.68% | 32.5% |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 54.37% | 40.62% | 41.25% | 35% |
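The evaluation reference above is missing from the source, but the general shape of an LLM-as-a-judge loop is simple: a stronger model scores each response against a rubric. A minimal sketch, assuming an OpenAI-compatible judge endpoint; the judge model name, rubric wording, and score parsing are assumptions, not the actual evaluation code:

```python
from openai import OpenAI

client = OpenAI()  # assumption: an OpenAI-compatible judge endpoint is available

# Hypothetical rubric covering the four reported criteria.
RUBRIC = (
    "Score the response to the question on a 0-10 scale for each of: "
    "Fluency, Coherence, Accuracy, Completeness. "
    "Reply with exactly four comma-separated integers."
)

def judge(question: str, answer: str, judge_model: str = "gpt-4o") -> list[int]:
    """Have the judge model score one (question, answer) pair."""
    reply = client.chat.completions.create(
        model=judge_model,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\nResponse: {answer}"},
        ],
        temperature=0,
    )
    return [int(s) for s in reply.choices[0].message.content.split(",")]
```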
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("kikikara/llama_with_eeve_new_03_150m")
model = AutoModelForCausalLM.from_pretrained("kikikara/llama_with_eeve_new_03_150m")

question = "너는 누구야?"  # "Who are you?"

# The model was trained with this system prompt; include it verbatim at inference time.
prompt = f"### System:\n당신은 비도덕적이거나, 성적이거나, 불법적이거나 또는 사회 통념적으로 허용되지 않는 발언은 하지 않습니다.\n사용자와 즐겁게 대화하며, 사용자의 응답에 가능한 정확하고 친절하게 응답함으로써 최대한 도와주려고 노력합니다.\n\n\n### User:\n {question}"

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=400, repetition_penalty=1.12)
result = pipe(prompt)
print(result[0]['generated_text'])
```
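The pipeline call above typically decodes greedily (apart from the repetition penalty). Calling `model.generate` directly exposes the same knobs plus sampling; a sketch with illustrative sampling values, not tuned by the model author:

```python
import torch

# Tokenize the same prompt and generate with sampling enabled.
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=400,
        repetition_penalty=1.12,
        do_sample=True,    # enable sampling instead of greedy decoding
        temperature=0.7,   # illustrative value, not tuned by the author
        top_p=0.9,         # illustrative value
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```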