
Llama-2-ko-instruct-13B

Model Details

Datasets

  • Added English-to-Korean translation data on top of the KOpen-platypus and KoAlpaca datasets. The translation examples are drawn from AWS blog content that I translated myself.
  • Extracted only sentences longer than 100 characters and removed near-duplicate sentences with KoSimCSE (daekeun-ml/KoSimCSE-supervised-kobigbird-roberta-large); see the sketch after this list.
  • Created category-specific prompts that encourage the model to answer even at the risk of hallucination, to support future RLHF (Reinforcement Learning from Human Feedback) or DPO (Direct Preference Optimization) tuning.
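
A minimal sketch of the length filter and similarity-based dedup described above. The [CLS] pooling and the 0.85 cosine-similarity threshold are assumptions; the exact settings used for this model are not documented here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "daekeun-ml/KoSimCSE-supervised-kobigbird-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(sentences):
    # Encode sentences and take the [CLS] embedding, L2-normalized so that
    # a dot product equals cosine similarity.
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return torch.nn.functional.normalize(out.last_hidden_state[:, 0], dim=-1)

def filter_and_dedup(sentences, min_chars=100, threshold=0.85):
    # Keep only sentences longer than min_chars (per the pipeline above),
    # then greedily drop any sentence too similar to one already kept.
    candidates = [s for s in sentences if len(s) > min_chars]
    embeddings = embed(candidates)
    kept, kept_embs = [], []
    for sentence, emb in zip(candidates, embeddings):
        if all(torch.dot(emb, k).item() < threshold for k in kept_embs):
            kept.append(sentence)
            kept_embs.append(emb)
    return kept
```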

License

  • Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under the LLAMA 2 COMMUNITY LICENSE AGREEMENT

This model was created as a personal experiment, unrelated to the organization I work for.

Model size: 13.2B parameters (safetensors, FP16)

Inference Examples
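
A minimal local-inference sketch using the standard transformers API with the FP16 weights noted above. The prompt template and generation settings are illustrative assumptions, not a documented format for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daekeun-ml/Llama-2-ko-instruct-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt, an assumption since the card does not specify a template.
prompt = "### Instruction:\nIntroduce yourself in Korean.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```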
