---
license: cc-by-4.0
language:
- ko
---
|
|
|
# komt : korean multi task instruction tuning model
|
![multi task instruction tuning.jpg](https://github.com/davidkim205/komt/assets/16680469/c7f6ade7-247e-4b62-a94f-47e19abea68e)
|
|
|
Recently, following the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with its capabilities.

However, when it comes to Korean, many of these models still struggle to provide accurate answers or to generate fluent Korean text.

This work addresses these challenges by introducing a multi-task instruction tuning technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
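The idea above can be sketched in a few lines: take supervised (input, output) pairs from heterogeneous tasks, wrap each input in a task-specific instruction template, and merge everything into one instruction-tuning corpus. The templates, field names, and records below are illustrative assumptions, not the actual komt training data or pipeline.

```python
# Hypothetical sketch of multi-task instruction data construction.
# Templates and example records are illustrative, not the real comp-341k data.

TEMPLATES = {
    "translation": "다음 문장을 한국어로 번역하세요.\n{input}",
    "summarization": "다음 글을 한 문장으로 요약하세요.\n{input}",
    "qa": "다음 질문에 답하세요.\n{input}",
}

def to_instruction(task: str, example: dict) -> dict:
    """Wrap one supervised (input, output) pair in a task-specific prompt."""
    return {
        "instruction": TEMPLATES[task].format(input=example["input"]),
        "output": example["output"],
    }

def build_multitask_dataset(task_datasets: dict) -> list:
    """Merge several per-task supervised datasets into one instruction corpus."""
    corpus = []
    for task, examples in task_datasets.items():
        corpus.extend(to_instruction(task, ex) for ex in examples)
    return corpus
```

Training then proceeds on the merged corpus as ordinary instruction tuning, so a single model learns all the tasks at once.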
|
|
|
## Model Details
|
|
|
* **Model Developers**: davidkim (changyeon kim)

* **Repository**: https://github.com/davidkim205/komt

* **Base model**: Edentns/DataVortexS-10.7B-dpo-v1.11

* **Dataset**: comp-341k
|
|
|