MiquMaid v2 2x70 DPO
Check out our blogpost about this model series Here! - Join our Discord server Here!
This model uses the Alpaca prompting format
Then, we made a MoE out of MiquMaid-v2-70B-DPO and the Miqu-70B-DPO base, so every token is processed by both the finetune AND the base model working together.
Both models were trained with DPO for uncensoring, more info on Miqu-70B-DPO here.
We saw a significant improvement, so we decided to share the result, even though the model is very big.
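For intuition only, here is a minimal toy sketch of the general idea behind a two-expert MoE layer: a gate weighs each expert's output per token and the results are combined. The shapes, weights, and gating below are made-up placeholders, not the actual merge recipe used for this model.

```python
# Toy illustration of combining two "experts" per token with a softmax gate.
# This only sketches the general MoE idea, not the real merge used here.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # toy hidden size
tokens = rng.normal(size=(5, d))        # 5 token hidden states

W_finetune = rng.normal(size=(d, d))    # stands in for the finetuned expert
W_base = rng.normal(size=(d, d))        # stands in for the base-model expert
W_gate = rng.normal(size=(d, 2))        # router producing one weight per expert

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

gate = softmax(tokens @ W_gate)                     # (5, 2) per-token weights
expert_out = np.stack([tokens @ W_finetune,
                       tokens @ W_base], axis=1)    # (5, 2, d)
mixed = (gate[..., None] * expert_out).sum(axis=1)  # (5, d) combined output
print(mixed.shape)
```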
Credits:
- Undi
- IkariDev
Description
This repo contains GGUF files of MiquMaid-v2-2x70B-DPO.
Training data used:
DPO training data used:
Custom format:
```
### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
```
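As a quick-start sketch, you could run one of the GGUF files with llama-cpp-python using the format above. The quant file name, context size, and sampling settings below are placeholders, not values from this card; adjust them to the file you actually download and the hardware you have.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and prompt it
# using the Alpaca-style custom format shown above.
from llama_cpp import Llama

llm = Llama(
    model_path="MiquMaid-v2-2x70B-DPO.Q4_K_M.gguf",  # hypothetical quant file name
    n_ctx=4096,          # adjust to the context length you need
    n_gpu_layers=-1,     # offload all layers if you have the VRAM
)

prompt = (
    "### Instruction:\n{system_prompt}\n\n"
    "### Input:\n{user_input}\n\n"
    "### Response:\n"
).format(
    system_prompt="You are a helpful roleplay assistant.",
    user_input="Introduce yourself in one sentence.",
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:", "### Input:"])
print(out["choices"][0]["text"])
```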
Others
Undi: If you want to support us, you can here.
IkariDev: Visit my retro/neocities style website please kek