# DraftReasoner-2x7B-MoE-v0.1
Experimental 2-expert MoE merge built from the following components:
- mlabonne/Marcoro14-7B-slerp as the base model.
- OpenHermes-2.5-Mistral-7B as model 0.
- WizardMath-7B-V1.1 as model 1.
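As a minimal loading sketch with Hugging Face Transformers: the repo id below is a placeholder, since the model card does not state the canonical upload path, and half precision plus `device_map="auto"` (which requires the `accelerate` package) are assumptions to keep memory use manageable.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual Hub path of this merge.
repo_id = "your-org/DraftReasoner-2x7B-MoE-v0.1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # half precision to reduce memory footprint
    device_map="auto",          # spread layers across available devices
)
```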
## Notes
Please evaluate this model before using it in any application pipeline. The math expert is activated by prompt keywords such as 'math', 'reason', 'solve', and 'count'.
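To illustrate the keyword-based activation described above, the snippet below (reusing `model` and `tokenizer` from the loading sketch) sends a prompt containing one of the listed words. Note this is a hedged illustration: routing in an MoE is decided per token by the gates, so a keyword biases expert selection rather than guaranteeing it.

```python
# A prompt containing 'solve' should nudge routing toward the math expert.
prompt = "Solve step by step: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```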