DeepSeek-Coder-V2-Lite-Base finetuned for 0.25 epochs on adamo1139/ise-uiuc_Magicoder-Evol-Instruct-110K-ShareGPT via llama-factory at 3000 ctx with QLoRA (rank 32, alpha 32).
Prompt format is ChatML, but the ChatML-specific tokens are not in the tokenizer, so the model sometimes emits stray tokens. Definitely something to fix in the next version.
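For reference, a minimal sketch of the ChatML template this finetune expects (the helper name is just for illustration; note that `<|im_start|>` / `<|im_end|>` are not registered as special tokens in this tokenizer, which is where the stray tokens come from):

```python
# Sketch of the ChatML prompt layout used for this finetune.
# <|im_start|> and <|im_end|> are plain text here, not special tokens,
# since they are missing from the tokenizer in this early version.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write hello world in Python.",
)
print(prompt)
```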
It's an early WIP; unless you are dying to try DeepSeek-Coder-V2-Lite finetunes, I suggest you don't use it :)