---
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
language:
- en
---

# 🔬 Einstein-v6-7B

This model is a fully fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) trained on diverse datasets.

It was fine-tuned on `8xRTX3090` + `1xRTXA6000` GPUs using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This model's training was sponsored by [sablo.ai](https://sablo.ai).

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
datasets:
  - path: data/merged_all.json
    ds_type: json
    type: alpaca
    conversation: chatml

  - path: data/gpteacher-instruct-special-alpaca.json
    ds_type: json
    type: gpteacher
    conversation: chatml

  - path: data/wizardlm_evol_instruct_70k_random_half.json
    ds_type: json
    type: alpaca
    conversation: chatml

  - path: data/capybara_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/synthia-v1.3_sharegpt_12500.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/slimorca_dedup_filtered_95k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: data/pippa_bagel_repo_3k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/gpt4_data_lmys_1m_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/sharegpt_gpt4_english.json
    ds_type: json
    type: sharegpt
    conversation: chatml

  - path: data/no_robots_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: data/oasst_top1_from_fusechatmixture_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

  - path: data/everythinglm-data-v3_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml

dataset_prepared_path: last_run_prepared
# val_set_size: 0.005
val_set_size: 0.0

do_bench_eval: true

output_dir: ./Einstein-v6-7B-model

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v6-7B

save_safetensors: true

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 3 # changed
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 2 # changed
debug:

deepspeed: zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "<|im_end|>"
  unk_token: "<unk>"
tokens:
  - "<|im_start|>"
```

</details><br>
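
If you want to reproduce this run, axolotl `0.4.0` is normally launched through `accelerate`; a sketch of the usual invocation is below (the filename `einstein-v6.yaml` is illustrative; save the config above under that name first).

```
# Illustrative filename; save the axolotl config above as einstein-v6.yaml first.
accelerate launch -m axolotl.cli.train einstein-v6.yaml
```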

# 💬 Prompt Template

You can use the following prompt template with this model:

### ChatML

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```

This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
# add_generation_prompt=True appends the opening <|im_start|>assistant tag.
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```
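
For a complete, runnable example, here is a minimal end-to-end sketch. It assumes `transformers` (and `accelerate` for `device_map="auto"`) is installed; the model id matches the `hub_model_id` in the config above, and the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v6-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Explain Newton's second law in one sentence."},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag,
# so the model continues as the assistant instead of predicting another turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```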

# 🔄 Quantized versions

Quantized versions of this model are not available yet. They will be available soon :)
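
Until then, one option is on-the-fly 4-bit loading through `bitsandbytes` (a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed; the weights on the Hub stay full precision):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Weyaxi/Einstein-v6-7B"

# Quantize to 4-bit NF4 at load time; nothing is converted or re-uploaded on disk.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```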

# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

# 🤖 Additional information about training

This model was fully fine-tuned for 2 epochs.

The total number of training steps was 2412.

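From the config above, the effective batch size per optimizer step can be estimated with a back-of-envelope calculation (the GPU count of 9 is an assumption based on the hardware listed at the top of this card):

```python
micro_batch_size = 1         # from the axolotl config
gradient_accumulation = 4    # from the axolotl config
num_gpus = 9                 # assumed: 8x RTX 3090 + 1x RTX A6000, all data-parallel

# Sequences consumed per optimizer step across all devices.
effective_batch_size = micro_batch_size * gradient_accumulation * num_gpus
print(effective_batch_size)  # 36 packed sequences of up to 8192 tokens
```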

<details><summary>Loss graph</summary>

</details><br>

# 🤝 Acknowledgments

Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for providing the repository I used to train this model.

Thanks to the entire open-source AI community.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)