Notux 8x7B v1
The Notux 8x7B v1 model (a DPO fine-tune of Mixtral 8x7B Instruct v0.1) and the datasets used to train it. More information at https://github.com/argilla-io/notus
argilla/notux-8x7b-v1
Text Generation • Note: Full DPO fine-tuning of `mistralai/Mixtral-8x7B-Instruct-v0.1` on `argilla/ultrafeedback-binarized-preferences-cleaned`, run on a VM with 8 x H100 80GB GPUs.
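Since Notux 8x7B v1 is a DPO fine-tune of Mixtral 8x7B Instruct, it should follow the same `[INST] ... [/INST]` instruction template as the base model. A minimal single-turn formatter might look like this (the function name is illustrative, not part of any library):

```python
def format_mixtral_prompt(user_message: str) -> str:
    """Wrap a single-turn user message in the Mixtral instruct template.

    Multi-turn conversations interleave further [INST] ... [/INST] blocks
    with assistant replies; this sketch covers the single-turn case only.
    """
    return f"<s>[INST] {user_message} [/INST]"


prompt = format_mixtral_prompt("Summarize the DPO training objective.")
```

The resulting string is what you would pass to the model (or let `tokenizer.apply_chat_template` produce for you when using `transformers`).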
argilla/ultrafeedback-binarized-preferences-cleaned
Viewer • Note: An iteration on top of `argilla/ultrafeedback-binarized-preferences` that removes the TruthfulQA prompts, which were introducing data contamination (as spotted by AllenAI), while keeping Argilla's approach to data binarization. Formatting: the dataset follows the same format as the one defined in the Alignment Handbook from Hugging Face H4.
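In the Alignment Handbook's DPO layout, each record pairs a prompt with a preferred (`chosen`) and a dispreferred (`rejected`) conversation, each a list of role/content messages. The record below is a hypothetical example illustrating that assumed shape, with a small structural check:

```python
# Hypothetical record mirroring the Alignment Handbook DPO layout:
# "prompt" plus "chosen"/"rejected" message lists (assumed field names).
record = {
    "prompt": "What is the capital of France?",
    "chosen": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ],
    "rejected": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "London."},
    ],
}


def is_valid_dpo_record(rec: dict) -> bool:
    """Check the preference-pair structure expected for DPO training."""
    if not {"prompt", "chosen", "rejected"} <= rec.keys():
        return False
    for key in ("chosen", "rejected"):
        msgs = rec[key]
        # Both sides must end with the assistant turn being compared.
        if not msgs or msgs[-1]["role"] != "assistant":
            return False
    return True
```

A check like this is useful before training, since a record whose conversations do not end on an assistant turn gives DPO nothing to score.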
TheBloke/notux-8x7b-v1-GGUF
Text Generation • Note: Quantized variants of `argilla/notux-8x7b-v1` in GGUF format, generated by TheBloke (thanks <3)
TheBloke/notux-8x7b-v1-AWQ
Text Generation • Note: AWQ-quantized variants of `argilla/notux-8x7b-v1`, generated by TheBloke (thanks <3)
TheBloke/notux-8x7b-v1-GPTQ
Text Generation • Note: GPTQ-quantized variants of `argilla/notux-8x7b-v1`, generated by TheBloke (thanks <3)