Weyaxi committed on
Commit
c84ec95
1 Parent(s): 4413631

model card pre 1

Files changed (1)
  1. README.md +113 -46
README.md CHANGED
@@ -1,18 +1,73 @@
  ---
  license: other
- base_model: meta-llama/Meta-Llama-3-8B
  tags:
  - axolotl
  - generated_from_trainer
- model-index:
- - name: Einstein-v6.1-Llama3-8B
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
  <details><summary>See axolotl config</summary>

  axolotl version: `0.4.0`
@@ -164,61 +219,73 @@ special_tokens:
  pad_token: <|end_of_text|> # changed
  tokens:
  - "<|im_start|>"

  ```

- </details><br>

- # Einstein-v6.1-Llama3-8B

- This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5786

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 9
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 36
- - total_eval_batch_size: 9
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 10
- - num_epochs: 2

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 1.6849 | 0.0 | 1 | 1.7294 |
- | 0.6045 | 0.5 | 507 | 0.6127 |
- | 0.5986 | 1.0 | 1014 | 0.5868 |
- | 0.5136 | 1.48 | 1521 | 0.5786 |

- ### Framework versions

- - Transformers 4.40.0.dev0
- - Pytorch 2.1.2+cu118
- - Datasets 2.18.0
- - Tokenizers 0.15.0
 
  ---
+ language:
+ - en
  license: other
  tags:
  - axolotl
  - generated_from_trainer
+ - instruct
+ - finetune
+ - chatml
+ - gpt4
+ - synthetic data
+ - science
+ - physics
+ - chemistry
+ - biology
+ - math
+ - llama
+ - llama3
+ base_model: meta-llama/Meta-Llama-3-8B
+ datasets:
+ - allenai/ai2_arc
+ - camel-ai/physics
+ - camel-ai/chemistry
+ - camel-ai/biology
+ - camel-ai/math
+ - metaeval/reclor
+ - openbookqa
+ - mandyyyyii/scibench
+ - derek-thomas/ScienceQA
+ - TIGER-Lab/ScienceEval
+ - jondurbin/airoboros-3.2
+ - LDJnr/Capybara
+ - Cot-Alpaca-GPT4-From-OpenHermes-2.5
+ - STEM-AI-mtl/Electrical-engineering
+ - knowrohit07/saraswati-stem
+ - sablo/oasst2_curated
+ - lmsys/lmsys-chat-1m
+ - TIGER-Lab/MathInstruct
+ - bigbio/med_qa
+ - meta-math/MetaMathQA-40K
+ - openbookqa
+ - piqa
+ - metaeval/reclor
+ - derek-thomas/ScienceQA
+ - scibench
+ - sciq
+ - Open-Orca/SlimOrca
+ - migtissera/Synthia-v1.3
+ - TIGER-Lab/ScienceEval
+ - allenai/WildChat
+ - microsoft/orca-math-word-problems-200k
+ - openchat/openchat_sharegpt4_dataset
+ - teknium/GPTeacher-General-Instruct
+ - m-a-p/CodeFeedback-Filtered-Instruction
+ - totally-not-an-llm/EverythingLM-data-V3
+ - HuggingFaceH4/no_robots
+ - OpenAssistant/oasst_top1_2023-08-25
+ - WizardLM/WizardLM_evol_instruct_70k
  ---
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/5s12oq859qLfDkkTNam_C.png)

+ # 🔬 Einstein-v6.1-Llama3-8B
+
+ This model is a fully fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on diverse datasets.
+
+ This model was fine-tuned on `8xRTX3090` + `1xRTXA6000` GPUs using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
+
+ This model's training was sponsored by [sablo.ai](https://sablo.ai).

  <details><summary>See axolotl config</summary>

  axolotl version: `0.4.0`
 
  pad_token: <|end_of_text|> # changed
  tokens:
  - "<|im_start|>"
+ ```
+ </details><br>
+
+ # 💬 Prompt Template
+
+ You can use the ChatML prompt template when using the model:
+
+ ### ChatML

+ ```
+ <|im_start|>system
+ {system}<|im_end|>
+ <|im_start|>user
+ {user}<|im_end|>
+ <|im_start|>assistant
+ {assistant}<|im_end|>
  ```

+ This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
+ `tokenizer.apply_chat_template()` method:
+
+ ```python
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "Hello!"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+ model.generate(gen_input)
+ ```
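+
+ For a complete example, here is a minimal loading-and-generation sketch; the repo id, prompt, and generation settings below are assumptions for illustration, so adjust them to your setup:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Weyaxi/Einstein-v6.1-Llama3-8B"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
+
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "Explain Newton's second law in one sentence."},
+ ]
+ # add_generation_prompt=True appends the assistant header so the model continues as the assistant
+ input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ output = model.generate(input_ids, max_new_tokens=128)
+ # decode only the newly generated tokens
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```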
+
+ # 🔄 Quantized versions
+
+ ## GGUF [@bartowski](https://huggingface.co/bartowski)
+
+ - https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF
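+
+ If you want to try a GGUF file locally, one option is the `llama-cpp-python` package; the snippet below is only a rough sketch, and the quantization filename is an example rather than a file guaranteed to exist in the repo above:
+
+ ```python
+ from llama_cpp import Llama
+
+ # Example filename; pick an actual quantization file from the GGUF repo linked above.
+ llm = Llama(model_path="Einstein-v6.1-Llama3-8B-Q4_K_M.gguf", n_ctx=4096)
+
+ out = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are a helpful AI assistant."},
+         {"role": "user", "content": "Hello!"},
+     ],
+     max_tokens=256,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```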
 
 
+ ## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski)
+
+ - https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2
+
+ # 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+
+ # 🤖 Additional information about training
+
+ This model was fully fine-tuned for 2 epochs.
+
+ The total number of steps was 2026.
+
+ <details><summary>Loss graph</summary>
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/Ycs7ZpoqmxFt0u9rybCO1.png)
+
+ </details><br>
+
+ # 🤝 Acknowledgments

+ Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
+
+ Thanks to all the dataset authors mentioned in the datasets section.
+
+ Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for providing the training framework I used to make this model.
+
+ Thanks to the entire open-source AI community.
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ If you would like to support me:
+
+ [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)