aashish1904 committed on
Commit 6f70bbd
1 Parent(s): b1b0999

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +73 -0

README.md ADDED
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-Nemo-Instruct-2407
datasets:
- Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- Saxo/ko-news-corpus-1
- Saxo/ko-news-corpus-2
- Saxo/ko-news-corpus-3
- Saxo/ko-news-corpus-4
- Saxo/ko-news-corpus-5
- Saxo/ko-news-corpus-6
- Saxo/ko-news-corpus-7
- Saxo/ko-news-corpus-8
- Saxo/ko-news-corpus-9
- maywell/ko_Ultrafeedback_binarized
- youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
- lilacai/glaive-function-calling-v2-sharegpt
- kuotient/gsm8k-ko
language:
- ko
- en
- ja
- zh
pipeline_tag: text-generation
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Linkbricks-Horizon-AI-Korean-Advanced-12B-GGUF
This is a quantized version of [Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B) created using llama.cpp.
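
The GGUF files in this repo can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using `llama-cpp-python`; the quant filename and the `n_ctx` value are assumptions for illustration (check the repository's file list for the actual GGUF names), not part of the original card.

```python
# Minimal sketch: download one quantized file from this repo and run a chat turn.
# The filename below is an assumption -- replace it with an actual GGUF file name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Linkbricks-Horizon-AI-Korean-Advanced-12B-GGUF",
    filename="Linkbricks-Horizon-AI-Korean-Advanced-12B.Q4_K_M.gguf",  # hypothetical
)

# The model advertises a 128k context window; keep n_ctx modest unless you have the RAM.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "한국어로 자기소개를 해줘."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```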

# Original Model Card

# Model Card for Model ID

<div align="center">
<img src="https://www.linkbricks.com/wp-content/uploads/2022/03/%E1%84%85%E1%85%B5%E1%86%BC%E1%84%8F%E1%85%B3%E1%84%87%E1%85%B3%E1%84%85%E1%85%B5%E1%86%A8%E1%84%89%E1%85%B3%E1%84%85%E1%85%A9%E1%84%80%E1%85%A9-2-1024x804.png" />
</div>

Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics.<br>
A Korean language model built on the Mistral-Nemo-Instruct-2407 base model and trained on 8x H100-80G GPUs via CPT (Continued Pretraining) -> SFT -> DPO.<br>
It was trained on a corpus of roughly ten million Korean news articles together with Korean-Chinese-English-Japanese cross-lingual data for a variety of tasks, plus math and logical-reasoning data, so it can handle cross-lingual augmentation across Korean, Chinese, Japanese, and English as well as complex logic and math problems.<br>
-The tokenizer is the base model's tokenizer, used without vocabulary expansion<br>
-Strengthened for high-level analysis of customer reviews and social posts, as well as coding, writing, math, and logical reasoning<br>
-128k context window<br>
-Supports Korean Function Call and Tool Calling<br>
-Trained with DeepSpeed Stage 3, rsLoRA, and the BAdam layer mode<br>
<br><br>
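
For the original (full-precision) checkpoint, a minimal sketch with the `transformers` library declared in the metadata above is shown below; the dtype, device settings, and generation parameters are illustrative assumptions, and it presumes the checkpoint ships a chat template.

```python
# Minimal sketch (assumptions: a CUDA GPU, bfloat16 weights, illustrative sampling settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "링크브릭스에 대해 한 문장으로 설명해줘."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and strip the prompt tokens before decoding.
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```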

<a href="https://www.linkbricks.com">www.linkbricks.com</a>, <a href="https://www.linkbricks.vc">www.linkbricks.vc</a>