---
library_name: transformers
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-3B-Instruct
datasets:
- Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- >-
Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- >-
Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- >-
Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- >-
Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- >-
Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- Saxo/ko-news-corpus-1
- Saxo/ko-news-corpus-2
- Saxo/ko-news-corpus-3
- Saxo/ko-news-corpus-4
- Saxo/ko-news-corpus-5
- Saxo/ko-news-corpus-6
- Saxo/ko-news-corpus-7
- Saxo/ko-news-corpus-8
- Saxo/ko-news-corpus-9
- maywell/ko_Ultrafeedback_binarized
- youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
- lilacai/glaive-function-calling-v2-sharegpt
- kuotient/gsm8k-ko
language:
- ko
- en
- ja
- zh
pipeline_tag: text-generation
---
# Model Card for Model ID
A Korean language model developed by Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics. Starting from the meta-llama/Llama-3.2-3B-Instruct base model, it was continued-pretrained (CPT) on 8x H100-80G GPUs, re-tuning about 35% of the total parameters on a diverse Korean corpus that includes 50 million Korean news articles. It is a Korean base model intended to be customized for specific use cases through SFT and DPO (see the sketches below).

- Tokenizer: the base model's tokenizer, used as-is without vocabulary expansion
- 128k context window
- Supports Korean function calling and tool calling (tool-calling sketch below)
- Trained with DeepSpeed Stage 3, rsLoRA, and BAdam layer mode (fine-tuning sketch below)
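A minimal text-generation sketch using the transformers pipeline. The repository id `Saxo/model-id` is a placeholder, since the card does not state the final model id; generation settings are illustrative:

```python
# Minimal generation sketch. NOTE: "Saxo/model-id" is a placeholder;
# substitute the actual Hugging Face repository id of this model.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Saxo/model-id",  # placeholder repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful Korean assistant."},
    {"role": "user", "content": "한국의 수도는 어디인가요?"},
]
out = pipe(messages, max_new_tokens=128)
# The pipeline appends the assistant turn to the chat; print its content.
print(out[0]["generated_text"][-1]["content"])
```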
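Since the card advertises Korean function calling and tool calling, here is a hedged sketch of passing a tool schema through the chat template. It assumes the model keeps the base Llama 3.2 tool-calling convention; `get_weather` is a hypothetical example function:

```python
# Tool-calling sketch. Assumes the model follows the base Llama 3.2
# chat template's tool-calling convention; get_weather is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/model-id"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city.
    """
    return "sunny"  # stub implementation for illustration

messages = [{"role": "user", "content": "서울 날씨 알려줘"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],  # the tool schema is derived from the docstring
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens (the model's tool-call proposal).
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```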
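The training notes mention rsLoRA, and the card recommends further SFT/DPO tuning. Below is a hedged sketch of a comparable SFT setup using peft's rank-stabilized LoRA; the rank, alpha, and target modules are illustrative assumptions rather than the author's recipe, and BAdam / DeepSpeed Stage 3 would be configured in the training launcher (not shown):

```python
# SFT configuration sketch for further tuning, as the card recommends.
# Hyperparameters here are illustrative assumptions, not the author's recipe.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Saxo/model-id")  # placeholder

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    use_rslora=True,  # rank-stabilized LoRA, as listed in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check the trainable fraction
```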