
DolphinCoder StarCoder2 7b 🐬

Sponsored by latitude.sh.

Discord: https://discord.gg/cognitivecomputations

This model is based on StarCoder2-7b and is subject to the bigcode-openrail-m license.

This Dolphin is really good at coding; it was trained on a large amount of coding data.

This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, as it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models (https://erichartford.com/uncensored-models). You are responsible for any content you create using this model. Enjoy responsibly.

Training

Training took 2 days for 3 epochs on 8x L40S GPUs using qLoRA and Axolotl.
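For orientation only, here is a conceptual sketch of what a QLoRA setup for StarCoder2-7b looks like in plain transformers + peft. The actual run used Axolotl; the hyperparameters and target module names below are illustrative assumptions, not the values used for this model.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model in 4-bit NF4; only the LoRA adapters get trained.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-7b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections (illustrative rank/alpha;
# module names assumed from the StarCoder2 architecture).
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, training proceeds with a standard causal-LM trainer; Axolotl handled
# this, plus data packing and ChatML formatting, in the real run.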

Prompt format: This model uses the ChatML prompt format.

<|im_start|>system
You are DolphinCoder, a helpful AI programming assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

Example:

<|im_start|>system
You are DolphinCoder, a master at software engineering and coding in any programming language.<|im_end|>
<|im_start|>user
Please write me a program in golang that parses all the lines in a file, reverses them character-wise, and saves the result to a new file.<|im_end|>
<|im_start|>assistant
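Below is a minimal sketch of prompting the model in this format with the Hugging Face transformers library. It assumes the tokenizer ships a ChatML chat template so that apply_chat_template renders the messages exactly as shown above; if it does not, build the prompt string by hand. The generation settings are illustrative, not tuned.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphincoder-starcoder2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are DolphinCoder, a helpful AI programming assistant."},
    {"role": "user", "content": "Please write me a program in golang that parses all the lines "
                                "in a file, reverses them character-wise, and saves the result "
                                "to a new file."},
]

# Render the messages into the ChatML prompt and append the assistant header.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.2)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))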

Quantized models

Gratitude

  • This model was made possible by the generous sponsorship of latitude.sh.
  • Welcome Microsoft to Open Source AI! Thank you for the Orca-Math Dataset!
  • Huge thank you to BigCode for training and publishing the weights of StarCoder2
  • HUGE Thank you to the dataset authors: @ise-uiuc, @teknium, @m-a-p
  • And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
  • Built with Axolotl
  • Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.

Example Output

If you would like to financially support my efforts

