---
license: apache-2.0
base_model: alpindale/WizardLM-2-8x22B
---

# SorcererLM-8x22b-bf16

Oh boy, here we go. A low-rank LoRA on top of [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), trained for 2 epochs on the cleaned and deduplicated c2-logs. As far as I can tell, this is an upgrade over `WizardLM-2-8x22B` for RP purposes.
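A minimal loading sketch, assuming the release loads through standard `transformers`; the repo id below is a placeholder for wherever this card is hosted:

```python
# Minimal sketch, assuming the release loads through standard transformers.
# The repo id below is a placeholder, not a confirmed location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SorcererLM-8x22b-bf16"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights ship in bf16
    device_map="auto",           # shard across available GPUs
)
```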

## Why LoRA?

Fully intentional. `WizardLM-2-8x22B` is smart on its own, but its vocabulary leaves much to be desired when it comes to RP. Training a low-rank LoRA on top of it to teach it some of Claude's writing style remedies that.
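For illustration only, a low-rank adapter of this general shape could be set up with `peft` as below. The actual training ran through qlora-pipe (see the Training section), and the rank, alpha, and target modules here are placeholders rather than the real hyperparameters:

```python
# Illustrative only: applying a low-rank adapter on top of a large base
# model with peft. Rank, alpha, and target modules are placeholders, not
# SorcererLM's actual hyperparameters (those live in the train configs).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("alpindale/WizardLM-2-8x22B")

config = LoraConfig(
    r=16,                                 # low rank keeps the adapter small
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the 8x22B weights
```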

## Prompting

- Use the templates in [Quant-Cartel/Recommended-Settings](https://huggingface.co/Quant-Cartel/Recommended-Settings) under the `SorcererLM` folder.
- Alternatively, use Vicuna 1.1 with a sane context template. The model is somewhat sensitive to samplers; I'd recommend Temperature 1, MinP 0.05, and a dash of DRY, but YMMV (see the sketch below). Shorter prompts also seem to work better.
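As a rough illustration, here is the Vicuna 1.1 format together with the recommended samplers, continuing from the loading sketch above. The user turn is made up, it assumes a recent `transformers` version with MinP support, and DRY isn't implemented in `transformers` itself, so it would need a backend such as koboldcpp or text-generation-webui:

```python
# Vicuna 1.1 prompt format plus the recommended samplers. Assumes a recent
# transformers version with min_p support; DRY needs a backend that
# implements it (e.g. koboldcpp or text-generation-webui).
prompt = (
    "A chat between a curious user and an artificial intelligence "
    "assistant. The assistant gives helpful, detailed, and polite answers "
    "to the user's questions. "
    "USER: Describe the tavern the party just walked into. ASSISTANT:"
)  # the USER turn is a made-up example

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,   # recommended above
    min_p=0.05,        # recommended above
    max_new_tokens=512,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```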

## Acknowledgments

- My [Cartel](https://huggingface.co/Quant-Cartel) bros, [Envoid](https://huggingface.co/Envoid) and especially [I^2](https://huggingface.co/InferenceIllusionist), for being amazing.
- My wallet for making sure I could do this without starving.

## Training

Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe). Configs are included in the `train` subfolder.