|
---
license: apache-2.0
base_model: alpindale/WizardLM-2-8x22B
---
|
|
|
# SorcererLM-8x22b-bf16
|
|
|
Oh boy, here we go. A low-rank LoRA on top of [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), trained for 2 epochs on the (cleaned & deduped) c2-logs. As far as I can tell, this is an upgrade over `WizardLM-2-8x22B` for RP purposes.
|
|
|
## Why LoRA?
|
|
|
Fully intentional. `WizardLM-2-8x22B` is smart on its own, but its vocabulary leaves much to be desired when it comes to RP. Training a low-rank LoRA on top of it to teach it some of Claude's writing style remedies that.
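
Mechanically, running a LoRA like this means applying a small adapter over the frozen base weights. Here's a minimal sketch with `transformers` + `peft`; note the adapter repo id below is a placeholder, not a confirmed artifact, since this card distributes the merged bf16 weights (which you can also just load directly):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in bf16. This is an 8x22B MoE, so it needs several
# GPUs or aggressive offloading; device_map="auto" lets accelerate shard it.
base = AutoModelForCausalLM.from_pretrained(
    "alpindale/WizardLM-2-8x22B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply the LoRA adapter on top of the frozen base weights.
# "Quant-Cartel/SorcererLM-lora" is a placeholder repo id for illustration.
model = PeftModel.from_pretrained(base, "Quant-Cartel/SorcererLM-lora")
```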
|
|
|
## Prompting
|
|
|
- Use the templates in [Quant-Cartel/Recommended-Settings](https://huggingface.co/Quant-Cartel/Recommended-Settings) under the `SorcererLM` folder.
|
- Or use Vicuna 1.1 and a sane context template. It's sensitive to samplers; I'd recommend Temperature 1, MinP 0.05, and a dash of DRY, but YMMV (see the sketch after this list).
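
A minimal sketch of the second option, assuming the repo id matches this card's title and using plain `transformers` (which supports Temperature and MinP natively; DRY lives in frontends like SillyTavern or text-generation-webui, so it's only noted in a comment):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "Quant-Cartel/SorcererLM-8x22b-bf16"  # assumed from this card's title

tokenizer = AutoTokenizer.from_pretrained(REPO)
model = AutoModelForCausalLM.from_pretrained(
    REPO, torch_dtype=torch.bfloat16, device_map="auto"
)

# Vicuna 1.1 format: a system preamble followed by USER:/ASSISTANT: turns.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Describe a stormy night at sea. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # recommended above
    min_p=0.05,       # recommended above
    # DRY isn't available in stock transformers; enable it in your frontend.
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```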
|
|
|
## Acknowledgments
|
|
|
- My [Cartel](https://huggingface.co/Quant-Cartel) bros, [Envoid](https://huggingface.co/Envoid) and especially [I^2](https://huggingface.co/InferenceIllusionist), for being amazing.
|
- My wallet for making sure I could do this without starving.
|
|
|
## Training
|
|
|
Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe). The configs are included in the `train` subfolder.
|
|
|
|