Open-Assistant CodeLlama 13B SFT v10
This model is an Open-Assistant fine-tuning of Meta's CodeLlama 13B LLM.
Note: Because of the new RoPE theta value (1e6 instead of 1e4), for correct results you must load this model with trust_remote_code=True or use the latest main branch of Hugging Face transformers (until version 4.33 is released).
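A minimal loading sketch with transformers is shown below; it assumes the model is published on the Hugging Face Hub as OpenAssistant/codellama-13b-oasst-sft-v10 and that your hardware can hold the 13B weights in bfloat16.
```python
# Minimal loading sketch (assumed Hub id; adjust dtype/device to your hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/codellama-13b-oasst-sft-v10"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # 13B weights; pick a dtype your GPU supports
    device_map="auto",
    trust_remote_code=True,       # needed for the RoPE theta of 1e6 until transformers 4.33
)
```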
Model Details
- Finetuned from: codellama/CodeLlama-13b-hf via epfLLM/Megatron-LLM
- Model type: Causal decoder-only transformer language model
- Language: English
- Weights & Biases training logs: 6123 steps, BS 64, run56_oa_llamacode
- Demo: Continuations for 250 random prompts (without system message)
- License: LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Contact: Open-Assistant Discord
Prompting / Prompt Template
Due to public demand (see survey), we changed the prompt template for this model from custom prompter/assistant tokens to OpenAI's chatML standard prompt format. We hope this leads to greater compatibility with chat inference/frontend applications.
Prompt dialogue template:
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
The model input can contain multiple conversation turns between user and assistant, e.g.
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
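When assembling such prompts manually, a small helper like the following sketch (illustrative only, not part of the released code) can turn a list of turns into the chatML string:
```python
def build_chatml_prompt(messages, add_generation_prompt=True):
    """Build a chatML prompt string from a list of
    {"role": "system"|"user"|"assistant", "content": str} dicts.
    Illustrative helper only, not part of the released code."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave the prompt open for the assistant's reply.
        prompt += "<|im_start|>assistant\n"
    return prompt
```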
The model was partly trained with orca system messages.
For inference we recommend using the official Llama2 system message:
<|im_start|>system
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|im_end|>
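Putting the pieces together, a hedged end-to-end generation example (continuing the loading sketch and the illustrative build_chatml_prompt helper above; the sampling parameters are arbitrary defaults, not recommended settings) could look like:
```python
# Continues the loading sketch above; assumes `model`, `tokenizer`, and the
# illustrative `build_chatml_prompt` helper are already defined.
system_message = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe. ..."  # full recommended text above
)
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
prompt = build_chatml_prompt(messages)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,       # illustrative sampling settings
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, dropping the prompt.
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(reply)
```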
Credits & Special Thanks
- Thanks to Meta AI for training and releasing the CodeLlama model.
- Distributed training support was provided by EPFL's Machine Learning and Optimization Laboratory and its Natural Language Processing Lab.
- The open-source epfLLM/Megatron-LLM trainer was used for fine-tuning.
- rombodawg curated the LosslessMegaCodeTrainingV2_1m_Evol_Uncensored dataset.
- ehartford generated and published the ehartford/dolphin dataset.
- shahules786 de-duplicated and filtered the Dolphin and Megacode datasets with a clustering/centroid approach and generated orca-best & bestofmegacode.
- andreaskoepf prepared & orchestrated the training.
Ethical Considerations and Limitations
Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, the potential outputs of codellama-13b-oasst-sft-v10 cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of codellama-13b-oasst-sft-v10, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's Responsible Use Guide.
Configuration Details
The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant GitHub repository and can be found here: model/pretokenizer.
Pretokenizer Configuration
orca_megacode_oasst_best:
  datasets:
    - orca-chat:
        val_split: 0.01
        max_val_set: 1000
    - bestofmegacode:
        val_split: 0.01
        max_val_set: 1000
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        #hf_dataset_name: OpenAssistant/oasst1
        input_file_path: 2023-08-25_oasst_ready.jsonl.gz
        top_k: 1
        val_split: 0.025
  output_dir: "output/orca_megacode_oasst_best"
  filename_prefix: "orca_megacode_oasst_best"
  min_assistant_tokens: 1