
An experimental de-slopped, de-aligned, EQ-tuned model, trained via ORPO on 4k synthetic preference pairs for 3 epochs on a single A100; inspired by Gutenberg-DPO.

Despite success on the de-slopping front, I seem to have totalled the model's prefrontal cortex in the process. So it goes. Training data is everything.
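For anyone curious what an ORPO run along these lines looks like in practice, below is a minimal sketch using TRL's `ORPOTrainer`. The base checkpoint, dataset file, and hyperparameters are illustrative assumptions, not the exact recipe behind this model.

```python
# Minimal sketch of an ORPO fine-tune on synthetic preference pairs.
# Base model, dataset path, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "google/gemma-2-9b-it"  # assumed 9B base; not confirmed by the card

model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# ORPO expects preference pairs with "prompt", "chosen", and "rejected" columns.
pairs = load_dataset("json", data_files="synthetic_pairs.jsonl", split="train")

config = ORPOConfig(
    output_dir="caudwell-9b-orpo",
    num_train_epochs=3,                # 3 epochs, as in the card
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,                # assumed
    beta=0.1,                          # ORPO's odds-ratio weight; assumed value
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=pairs,
    tokenizer=tokenizer,  # newer TRL versions take processing_class= instead
)
trainer.train()
```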

