Model Card for StarCoderBase1B-Racket-SelfInstruct

Each commit in this repository is a checkpoint (one per epoch) of StarCoderBase-1B fine-tuned on a Racket self-instruction dataset. As the Evaluation section below shows, self-instruction was not effective: this model performs barely better than the base StarCoderBase-1B.

Fine-tuning Dataset and Hyperparameters

Evaluation

The pass@1 results on MultiPL-E HumanEval-Racket (161 problems, 50 completions per problem) are:

| Dataset | k | pass@k estimate (%) | Problems | Completions per problem |
|---|---|---|---|---|
| humaneval-rkt-checkpoint_498-0.2-reworded | 1 | 6.19 | 161 | 50 |
| humaneval-rkt-checkpoint_996-0.2-reworded | 1 | 7.08 | 161 | 50 |
| humaneval-rkt-checkpoint_1494-0.2-reworded | 1 | 7.70 | 161 | 50 |
| humaneval-rkt-checkpoint_1992-0.2-reworded | 1 | 6.86 | 161 | 50 |
| humaneval-rkt-checkpoint_2490-0.2-reworded | 1 | 6.82 | 161 | 50 |
| humaneval-rkt-checkpoint_2988-0.2-reworded | 1 | 6.91 | 161 | 50 |
| humaneval-rkt-checkpoint_6973-0.2-reworded | 1 | 6.53 | 161 | 50 |
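For reference, the estimates above follow the standard unbiased pass@k estimator from the HumanEval paper, averaged over problems. A minimal sketch (the per-problem correct counts below are hypothetical, purely for illustration):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n = completions sampled, c = completions that pass the tests."""
    if n - c < k:
        return 1.0  # not enough failures left to fill a k-sample with
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical correct counts out of n=50 completions for three problems.
counts = [4, 0, 7]
n, k = 50, 1
estimate = 100 * sum(pass_at_k(n, c, k) for c in counts) / len(counts)
print(f"pass@{k} = {estimate:.2f}%")  # with k=1 this reduces to mean(c/n)
```

For k=1 the estimator simplifies to c/n per problem; larger k values reward models whose correct completions are spread across more problems.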
Model size: 1.14B parameters (F32, Safetensors)
