---
library_name: transformers
license: apache-2.0
base_model:
  - meta-llama/Llama-3.2-3B-Instruct
datasets:
  - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
  - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
  - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
  - Saxo/ko-news-corpus-1
  - Saxo/ko-news-corpus-2
  - Saxo/ko-news-corpus-3
  - Saxo/ko-news-corpus-4
  - Saxo/ko-news-corpus-5
  - Saxo/ko-news-corpus-6
  - Saxo/ko-news-corpus-7
  - Saxo/ko-news-corpus-8
  - Saxo/ko-news-corpus-9
  - maywell/ko_Ultrafeedback_binarized
  - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
  - lilacai/glaive-function-calling-v2-sharegpt
  - kuotient/gsm8k-ko
language:
  - ko
  - en
  - ja
  - zh
pipeline_tag: text-generation
---

# Model Card for Model ID

Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big-data analytics, built this Korean language model by continued pre-training (CPT) of the meta-llama/Llama-3.2-3B-Instruct base model on 8 H100-80G GPUs. About 35% of the total parameters were re-tuned on a variety of Korean corpora, including 50 million Korean news articles. It is a Korean base model intended to be customized for specific applications through SFT and DPO.

- The tokenizer is used as-is from the base model, with no vocabulary expansion
- 128k context window
- Supports Korean function calling and tool calling
- Trained with DeepSpeed Stage 3, rsLoRA, and the BAdam layer mode
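Since the card states that Korean function calling and tool calling are supported, the sketch below shows how a tool schema and a tool-call conversation are commonly structured in the OpenAI/Llama-3-style message format. The tool name, its parameters, and the conversation content are illustrative assumptions, not details from this model card:

```python
# Sketch of a tool-calling exchange for a Korean assistant.
# The get_weather tool and all message content are hypothetical examples;
# the model card only states that Korean function/tool calling is supported.
import json

# Hypothetical tool schema with JSON-Schema parameters, as used by common chat templates
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "지정한 도시의 현재 날씨를 조회한다",  # "Look up the current weather for a city"
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "도시 이름, 예: 서울"},
            },
            "required": ["city"],
        },
    },
}

# Chat history: the user asks in Korean, the assistant emits a tool call,
# and the tool result is appended so the model can produce the final answer.
messages = [
    {"role": "user", "content": "서울 날씨 알려줘"},
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "서울"}, ensure_ascii=False),
                },
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": json.dumps({"temp_c": 3, "sky": "맑음"}, ensure_ascii=False),
    },
]

# Tool-call arguments round-trip as JSON strings, so the caller parses them back:
args = json.loads(messages[1]["tool_calls"][0]["function"]["arguments"])
print(args["city"])  # → 서울
```

With the Transformers library, a messages list like this can be rendered into a prompt via the tokenizer's chat template (e.g. `tokenizer.apply_chat_template(messages, tools=[get_weather_tool])`), provided the base model's template supports tool definitions.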


www.linkbricks.com, www.linkbricks.vc