LoneStriker committed on
Commit 987f8f3
1 Parent(s): 11d0a3b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -1,35 +1,7 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q5_K_M.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q5_K_M.gguf-part-b filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q6_K.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+ Liberated-Qwen1.5-72B-Q6_K.gguf-part-b filter=lfs diff=lfs merge=lfs -text
Liberated-Qwen1.5-72B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:292a5666851a021e0df4acf82f78ef356b54d5ce8774fa1745a615e702d12d0b
+ size 28461062848
Liberated-Qwen1.5-72B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02a24f7fe4241f7c367bb69489396c7407e462643cd6915ff30bba24a7c55ac3
+ size 38486137536
Liberated-Qwen1.5-72B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:137910fafcea92aa5c02f341de025cb0b8980a3e01c36c4be10fa7674beab0d4
+ size 44104178368
Liberated-Qwen1.5-72B-Q5_K_M.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adc7d3f93040f7bbe5d746c523ac7c26beaecc00a6f27f65b5db9c563826ca59
+ size 25653161312
Liberated-Qwen1.5-72B-Q5_K_M.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31280b1dcff4b3e867b5bbba01365ffe64c563137d02fe992992fc410cd30cad
+ size 25653161312
Liberated-Qwen1.5-72B-Q6_K.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3c411a99479e53e01779d6d78cf77b614bb220ebec56ab52dcc0628220c81d4
+ size 29657558368
Liberated-Qwen1.5-72B-Q6_K.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c592d537f8a9670ebf2c90e7aed05c24009bf6fe9647f579db4e9778aaea0e91
+ size 29657558368
README.md ADDED
@@ -0,0 +1,107 @@
+ ---
+ language:
+ - en
+ license: other
+ datasets:
+ - teknium/OpenHermes-2.5
+ - m-a-p/Code-Feedback
+ - m-a-p/CodeFeedback-Filtered-Instruction
+ - abacusai/SystemChat
+ license_name: tongyi-qianwen
+ license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
+ ---
+
+ <a href="https://abacus.ai"><img src="https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png" width="600" /></a>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xCWGByXr8YNwGxKVh_x9H.png" width="600" />
+
+ # Liberated-Qwen1.5-72B
+
+ Brought to you by [AbacusAI](https://abacus.ai) and Eric Hartford
+
+ This model is based on Qwen/Qwen1.5-72B and subject to the [tongyi-qianwen](https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE) license.
+
+ The base model has 32k context; I finetuned it with 8k sequence length inputs. YMMV.
+
+ Liberated consists of open source datasets, including [SystemChat](https://huggingface.co/datasets/abacusai/SystemChat), a new dataset I created, designed to teach the model compliance with the system prompt over long multiturn conversations, even with unusual or mechanical system prompts. These are tasks that open-source models have been lacking thus far. The dataset is 6000 synthetic conversations generated with Mistral-Medium and [Dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b).
+
+ There are no guardrails or censorship added to the dataset. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
+
+ You are responsible for any content you create using this model. Enjoy responsibly.
+
+ ## Training
+ It took 3 days to train 3 epochs on 8x H100s using qLoRA, DeepSpeed ZeRO-2, and Axolotl, with a learning rate of 2e-4.
+
+ Liberated was trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), using this [config](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B/blob/main/configs/Liberated-Qwen-1.5-72b.qlora.yml).
+
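+ The linked config is the source of truth for the exact training setup. Purely as a hedged illustration of the qLoRA idea it relies on (a 4-bit quantized base model with LoRA adapters trained on top), a minimal sketch using the Hugging Face `transformers`/`peft`/`bitsandbytes` APIs might look like this; the LoRA hyperparameters below are placeholders, not values from the actual Axolotl config:
+
+ ```python
+ # Illustrative qLoRA setup sketch (not the Axolotl config used for this model).
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+ from peft import LoraConfig, get_peft_model
+
+ # Load the base model with 4-bit NF4 quantization (the "q" in qLoRA).
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+     bnb_4bit_use_double_quant=True,
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen1.5-72B",
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+
+ # Attach trainable LoRA adapters; r/alpha/targets here are placeholder values.
+ lora_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(model, lora_config)
+ model.print_trainable_parameters()  # only the adapter weights are trainable
+ ```
+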
+ ## Prompt format
+ This model uses the ChatML prompt format.
+ ```
+ <|im_start|>system
+ You are Liberated, a helpful AI assistant.<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+
+ ```
+
+ Example:
+ ```
+ <|im_start|>system
+ Your name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.<|im_end|>
+ <|im_start|>user
+ Please generate an Advanced Dungeons & Dragons 2nd Edition character sheet for a level 3 elf fighter. Make up a name, background, and visual description for him.<|im_end|>
+ <|im_start|>assistant
+ ```
+
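+ As a usage sketch, here is one way to send a ChatML-formatted request to one of the GGUF quants in this repo with `llama-cpp-python`; the file name, context size, and sampling parameters are assumptions, not recommendations from the model authors:
+
+ ```python
+ # Minimal llama-cpp-python sketch; adjust model_path and parameters to your setup.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="Liberated-Qwen1.5-72B-Q4_K_M.gguf",  # one of the quants uploaded here
+     n_ctx=8192,            # the card notes finetuning used 8k sequence length
+     chat_format="chatml",  # the model uses the ChatML prompt format
+ )
+
+ response = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are Liberated, a helpful AI assistant."},
+         {"role": "user", "content": "Summarize what qLoRA is in two sentences."},
+     ],
+     max_tokens=256,
+     temperature=0.7,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```
+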
+ ## Gratitude
+ - Huge thank you to [Alibaba Cloud Qwen](https://www.alibabacloud.com/solutions/generative-ai/qwen) for training and publishing the weights of the Qwen base model
+ - Thank you to Mistral for the awesome Mistral-Medium model I used to generate the dataset.
+ - HUGE thank you to the dataset authors: @teknium, [@m-a-p](https://m-a-p.ai), and all the people who built the datasets these composites came from.
+ - And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
+ - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
+
+ ## Example Output
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KEN5JviayvHDtr6aij173.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/jNV9276F1u1e_R5UMp_fU.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/Rjh00Teds_DTBVyijBDcJ.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KTRGy0z2QS8oxDlzleNIW.png)
+
+ ## Evals
+
+ We evaluated checkpoint 1000 ([abacusai/Liberated-Qwen1.5-72B-c1000](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B-c1000)) from this training run against MT Bench:
+
+ ```
+ ########## First turn ##########
+                                        score
+ model                            turn
+ Liberated-Qwen-1.5-72b-ckpt1000  1     8.45000
+ Qwen1.5-72B-Chat                 1     8.44375
+
+ ########## Second turn ##########
+                                        score
+ model                            turn
+ Qwen1.5-72B-Chat                 2     8.23750
+ Liberated-Qwen-1.5-72b-ckpt1000  2     7.65000
+
+ ########## Average ##########
+                                     score
+ model
+ Qwen1.5-72B-Chat                 8.340625
+ Liberated-Qwen-1.5-72b-ckpt1000  8.050000
+ ```
+
+ The model still preserves good performance on MMLU (77.13).
+
+ ## Future Plans
+ This model will be released across the whole Qwen1.5 series.
+
+ Future releases will also focus on mixing this dataset with the datasets used to train Smaug, to combine the properties of both models.
merges.txt ADDED
The diff for this file is too large to render. See raw diff