koOpenChat-sft
Support Me
This is a personal project, developed with a single person's resources. If you like the model, please consider supporting it with a small research contribution.
Want to be a sponsor? (Please) Contact me on Telegram: AlzarTakkarsen
Model Details
Base Model
OpenChat3.5
Trained On
A100 80GB * 1
Instruction format
It follows the ChatML format and the Alpaca (no-input) format.
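For reference, the ChatML layout mentioned above can be built by hand. A minimal sketch, assuming the standard ChatML convention (`<|im_start|>`/`<|im_end|>` markers); in practice the tokenizer's chat_template applies this for you:

```python
# Build a ChatML-style prompt from a list of chat messages.
# This follows the generic ChatML convention (assumption), not a
# template extracted from this model's tokenizer.
def to_chatml(messages):
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Trailing assistant header cues the model to generate a reply.
    prompt += "<|im_start|>assistant\n"
    return prompt

print(to_chatml([{"role": "user", "content": "Hello!"}]))
```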
Model Benchmark
None
Implementation Code
Since the chat_template already contains the instruction format described above, you can use the code below.
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/koOpenChat-sft")
tokenizer = AutoTokenizer.from_pretrained("maywell/koOpenChat-sft")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# The tokenizer's chat_template applies the instruction format for us.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
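The Alpaca (no-input) format mentioned above can also be written out directly. A minimal sketch, assuming the standard Alpaca template wording (the exact phrasing this model was trained with is not stated in the card):

```python
# Standard Alpaca no-input prompt template (assumed wording).
ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Fill in a single instruction; the model continues after "### Response:".
prompt = ALPACA_NO_INPUT.format(instruction="바나나는 원래 하얀색이야?")
print(prompt)
```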
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 51.36 |
| ARC (25-shot) | 59.81 |
| HellaSwag (10-shot) | 78.73 |
| MMLU (5-shot) | 61.32 |
| TruthfulQA (0-shot) | 51.24 |
| Winogrande (5-shot) | 76.4 |
| GSM8K (5-shot) | 24.18 |
| DROP (3-shot) | 7.82 |