---
language:
- en
license: agpl-3.0
pipeline_tag: text-generation
tags:
- chat
base_model:
- arcee-ai/Llama-3.1-SuperNova-Lite
datasets:
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Nitral-AI/Cybersecurity-ShareGPT
- Nitral-AI/Medical_Instruct-ShareGPT
- Nitral-AI/Olympiad_Math-ShareGPT
- anthracite-org/kalo_opus_misc_240827
- NewEden/Claude-Instruct-5k
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo_misc_part2
- Nitral-AI/Creative_Writing-ShareGPT
- NewEden/Gryphe-Sonnet3.5-Charcard-Roleplay-unfiltered
model-index:
- name: Baldur-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 47.82
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Baldur-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.54
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Baldur-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 12.61
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Baldur-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.94
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Baldur-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.01
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Baldur-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 29.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Delta-Vector/Baldur-8B
      name: Open LLM Leaderboard
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Baldur-8B-GGUF
This is a quantized version of [Delta-Vector/Baldur-8B](https://huggingface.co/Delta-Vector/Baldur-8B) created using llama.cpp.

# Original Model Card

![](https://huggingface.co/Delta-Vector/Baldur-8B/resolve/main/Baldur.jpg)

A finetune of the Llama 3.1 instruct distill by Arcee. The intent of this model is to have prose that differs from my other releases; in my testing it has achieved this, avoiding the common -isms and carrying a different flavor than my other models.

# Quants

GGUF: https://huggingface.co/Delta-Vector/Baldur-8B-GGUF

EXL2: https://huggingface.co/Delta-Vector/Baldur-8B-EXL2

## Prompting
The model has been instruct-tuned with the Llama-Instruct formatting. A typical input would look like this:

```py
"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an AI built to rid the world of bonds and journeys!<|eot_id|><|start_header_id|>user<|end_header_id|>
Bro i just wanna know what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
```
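
For scripted use, the template above can be assembled programmatically. A minimal sketch, with a hypothetical helper function; in practice, prefer the tokenizer's built-in chat template, whose exact whitespace may differ slightly from this example:

```python
def build_llama3_prompt(messages):
    """Build a Llama-3-style prompt string from a list of chat messages.

    Illustrative only; the tokenizer's apply_chat_template is canonical.
    """
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are an AI built to rid the world of bonds and journeys!"},
    {"role": "user", "content": "Bro i just wanna know what is 2+2?"},
])
```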
## System Prompting

I would highly recommend using Sao10k's Euryale system prompt, but the "Roleplay Simple" system prompt provided within SillyTavern will work as well.

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```

## Axolotl config

<details><summary>See axolotl config</summary>

Axolotl version: `0.4.1`
```yaml
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

#trust_remote_code: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
    type: chat_template
  - path: Nitral-AI/Cybersecurity-ShareGPT
    type: chat_template
  - path: Nitral-AI/Medical_Instruct-ShareGPT
    type: chat_template
  - path: Nitral-AI/Olympiad_Math-ShareGPT
    type: chat_template
  - path: anthracite-org/kalo_opus_misc_240827
    type: chat_template
  - path: NewEden/Claude-Instruct-5k
    type: chat_template
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: chat_template
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: chat_template
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: chat_template
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: chat_template
  - path: anthracite-org/kalo_misc_part2
    type: chat_template
  - path: Nitral-AI/Creative_Writing-ShareGPT
    type: chat_template
  - path: NewEden/Gryphe-Sonnet3.5-Charcard-Roleplay-unfiltered
    type: chat_template

chat_template: llama3
shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: prepared_dataset_memorycore
val_set_size: 0.0
output_dir: ./henbane-8b-r3

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: henbane-8b-r3
wandb_entity:
wandb_watch:
wandb_name: henbane-8b-r3
wandb_log_model:

gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
#learning_rate: 3e-5
learning_rate: 1e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 5
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
  eos_token: <|eot_id|>

```
</details><br>
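
As a sanity check on the config above, the effective global batch size follows from `gradient_accumulation_steps` × `micro_batch_size` × number of GPUs (assuming the two-GPU setup described in the Training section below):

```python
# Effective global batch size implied by the axolotl config above,
# assuming the 2-GPU setup used for training (an assumption from the
# Training section, not stated in the config itself).
gradient_accumulation_steps = 32
micro_batch_size = 1
num_gpus = 2

effective_batch_size = gradient_accumulation_steps * micro_batch_size * num_gpus
print(effective_batch_size)  # 64

# At sequence_len 8192 with sample packing enabled, each optimizer step
# therefore sees up to 64 * 8192 packed tokens.
max_tokens_per_step = effective_batch_size * 8192
```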

## Credits

Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [Kalomaze](https://huggingface.co/kalomaze), [Kubernetes Bad](https://huggingface.co/kubernetes-bad) and the rest of [Anthracite](https://huggingface.co/anthracite-org) (but not Alpin).

## Training
The training was done for 2 epochs. I used 2 x [RTX 6000](https://www.nvidia.com/en-us/design-visualization/rtx-6000/) GPUs graciously provided by [Kubernetes Bad](https://huggingface.co/kubernetes-bad) for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Delta-Vector__Baldur-8B)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.90 |
| IFEval (0-Shot)     | 47.82 |
| BBH (3-Shot)        | 32.54 |
| MATH Lvl 5 (4-Shot) | 12.61 |
| GPQA (0-shot)       |  6.94 |
| MuSR (0-shot)       | 14.01 |
| MMLU-PRO (5-shot)   | 29.49 |
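
The reported average can be reproduced from the six benchmark scores; a quick arithmetic check, not part of the official evaluation harness:

```python
# Scores from the leaderboard table above.
scores = {
    "IFEval (0-Shot)": 47.82,
    "BBH (3-Shot)": 32.54,
    "MATH Lvl 5 (4-Shot)": 12.61,
    "GPQA (0-shot)": 6.94,
    "MuSR (0-shot)": 14.01,
    "MMLU-PRO (5-shot)": 29.49,
}

# Unweighted mean of the six benchmarks, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 23.9
```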