
This is another MoE model, fine-tuned from TomGrc/FusionNet_34Bx2_MoE_v0.1 with DPO applied across all linear parameters.

It was trained on a single H100 GPU for one hour.
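A minimal loading sketch (standard transformers usage; the prompt and generation settings below are illustrative, and device_map="auto" assumes enough GPU memory for a 60.8B-parameter model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights are stored in BF16
    device_map="auto",           # shard across available GPUs
)

prompt = "What is direct preference optimization?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```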

DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper "Direct Preference Optimization: Your Language Model Is Secretly a Reward Model" (Rafailov et al., 2023).
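A minimal training sketch with TRL's DPOTrainer. This is not the exact recipe used for this model: the dataset, beta value, and output directory below are illustrative, and the card does not state how "all linear parameters" were targeted; the LoraConfig with target_modules="all-linear" shown here is one common way to touch every linear layer via PEFT. Recent TRL releases take processing_class=; older ones take tokenizer= instead.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "TomGrc/FusionNet_34Bx2_MoE_v0.1"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Preference dataset with "prompt", "chosen", and "rejected" columns
# (illustrative choice; the card does not name the training data).
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# "all-linear" targets every linear layer of the base model.
peft_config = LoraConfig(target_modules="all-linear")

args = DPOConfig(output_dir="fusion-dpo", beta=0.1)  # beta is illustrative
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```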

Metrics have not been tested yet.

Model size: 60.8B params
Tensor type: BF16
Format: Safetensors
