
SorcererLM-22B

Because good things always come in threes!

SorcererLM-22B is here, rounding out the trinity of Mistral-Small-Instruct tunes from the Quant Cartel.

Prompt Format
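A minimal sketch of building a prompt, assuming the standard Mistral Instruct template of the Mistral-Small-Instruct-2409 base carries over unchanged; the `format_prompt` helper is hypothetical, not part of any release.

```python
# Hypothetical helper illustrating the Mistral Instruct template,
# assumed to carry over from the Mistral-Small-Instruct-2409 base.
def format_prompt(user_message: str, system: str = "") -> str:
    # Mistral's template folds any system text into the first [INST] block.
    content = f"{system}\n\n{user_message}" if system else user_message
    return f"<s>[INST] {content}[/INST]"

prompt = format_prompt("Write a short scene.")
```

In practice, `tokenizer.apply_chat_template` from `transformers` handles this automatically when loading the model's tokenizer.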

Quantized Versions

Training

For starters, this is a LoRA tune on top of Mistral-Small-Instruct-2409, not a pruned version of SorcererLM-8x22b.

Trained with a whole lot of love on 1 epoch of cleaned and deduped c2 logs. This model is 100% 'born-local', the result of roughly 27 hours and a little bit of patience on a single RTX 4080 SUPER.

Since the hyperparameters and dataset intentionally mirror those used in the original Sorcerer 8x22b tune, this model can be considered its 'lite' counterpart, aiming to deliver the same bespoke conversational experience at a fraction of the size and hardware requirements.
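The setup above can be summarized in a small config sketch. Only the base model, the single training epoch, the dataset description, and the GPU are stated in this card; the adapter rank and alpha values are illustrative assumptions, since the actual hyperparameters are not published here.

```python
# Illustrative LoRA fine-tuning sketch. Only base_model, epochs, dataset,
# and hardware come from the card -- rank and lora_alpha are assumptions.
training_sketch = {
    "base_model": "mistralai/Mistral-Small-Instruct-2409",
    "method": "LoRA",                  # low-rank adapters, not a full tune
    "epochs": 1,                       # stated: 1 epoch
    "dataset": "cleaned and deduped c2 logs",
    "rank": 16,                        # hypothetical adapter rank
    "lora_alpha": 16,                  # hypothetical scaling factor
    "hardware": "1x RTX 4080 SUPER (16 GB)",  # stated: single consumer GPU
}
```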

While all three share the same Mistral-Small-Instruct base, this release, in contrast to its sisters Mistral-Small-NovusKyver and Acolyte-22B, was not SLERP-merged with the original model in a 50/50 ratio post-training. Instead, alpha was dropped when the LoRA was merged with the full-precision weights in the final step.
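For context on that final step: in standard LoRA merging, the low-rank update is folded into the base weights scaled by alpha over rank, which is the scaling the passage above refers to adjusting. A minimal numpy sketch of the conventional merge (not this model's exact procedure):

```python
import numpy as np

def merge_lora(W, A, B, lora_alpha, rank):
    """Fold a LoRA adapter into full-precision base weights.

    The standard LoRA merge adds a low-rank update scaled by alpha/rank:
        W' = W + (alpha / rank) * B @ A
    """
    return W + (lora_alpha / rank) * (B @ A)

rng = np.random.default_rng(0)
d, r = 8, 2
W = rng.standard_normal((d, d))   # full-precision base weight matrix
A = rng.standard_normal((r, d))   # LoRA down-projection
B = rng.standard_normal((d, r))   # LoRA up-projection
W_merged = merge_lora(W, A, B, lora_alpha=16, rank=r)
```

Setting `lora_alpha` equal to `rank` makes the scale factor 1, i.e. the raw low-rank update is added unscaled.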

Acknowledgments

  • First and foremost, a huge thank you to my brilliant teammates envoid and rAIfle. Special shout-out to rAIfle for critical last-minute advice that got this one across the finish line
  • Props to unsloth as well for helping make this local tune possible
  • And of course, none of this would matter without users like you. Thank you :)

Safety

...

