---
library_name: transformers
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
base_model: Qwen/Qwen2.5-32B
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.0
  results: []
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
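
The quants live on separate revisions (branches) of this repo, as noted above. A minimal sketch of pulling one with `huggingface_hub` — both the repo id and the revision name below are hypothetical placeholders; substitute the actual values from this repo's branch list:

```python
# Hypothetical example: repo id and revision are placeholders only.
# Check this repo's branch list for the actual quant revisions.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="lucyknada/EVA-Qwen2.5-32B-v0.0-exl2",  # placeholder repo id
    revision="8.0bpw",                              # placeholder quant branch
)
print(local_path)
```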

# EVA Qwen2.5-32B v0.0

<p>
A RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br>
</p>

<p>Note: using a quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough that keeping it at f16 shouldn't be problematic.</p>
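
Since this repo hosts an exl2 quant, the note above maps onto ExLlamaV2's cache classes: prefer the FP16 `ExLlamaV2Cache` over quantized variants such as `ExLlamaV2Cache_Q4`. A minimal loading sketch, assuming exllamav2 is installed and the quant has been downloaded locally (the model directory is a placeholder):

```python
# Minimal ExLlamaV2 loading sketch; the model directory is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer

config = ExLlamaV2Config("./EVA-Qwen2.5-32B-exl2")  # placeholder local path

model = ExLlamaV2(config)

# Keep the KV cache at FP16 (ExLlamaV2Cache) rather than a quantized variant
# such as ExLlamaV2Cache_Q4, per the recommendation above.
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
```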

<p>Prompt format is ChatML.</p>

<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Typical-P: 0.9</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.2</li>
<li>Repetition Penalty: 1.03</li>
</ul>
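
For reference, a minimal `transformers` sketch that applies the ChatML template and most of these samplers. The repo id is assumed from this card; Top-A has no built-in equivalent in vanilla `transformers` and is omitted here, and `min_p` requires a recent release:

```python
# Minimal generation sketch; the repo id is an assumption based on this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0"  # assumed full-precision repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Qwen2.5's chat template emits ChatML (<|im_start|>role ... <|im_end|>).
messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Write the opening scene of a heist story."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    typical_p=0.9,
    min_p=0.05,              # requires a recent transformers release
    repetition_penalty=1.03,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```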

<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>

- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)

<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus the Opus Instruct subset. See that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.</li>
<li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe.</li>
<li>Synthstruct and SynthRP datasets by Epiculous.</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>7 hours on an 8xH100 SXM node, provided by <a href="https://featherless.ai/">FeatherlessAI</a>.</li></ul>

<p>Model was trained by Kearm and Auri.</p>
<h4>Special thanks:</h4>
<ul>
<li><b>to <a href="https://featherless.ai/">FeatherlessAI</a> for generously providing an 8xH100 SXM node for the training of this model</b></li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data</li>
<li>and to Allura-org for support and feedback on EVA models.</li>
</ul>

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-32B

load_in_8bit: false
load_in_4bit: false
strict: false

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

# plugins:
#   - axolotl.integrations.spectrum.SpectrumPlugin

# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B

datasets:
  - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl
    type: sharegpt
  - path: datasets/Celeste_Filtered.jsonl
    type: sharegpt
  - path: datasets/Gryphe-S3-5-Charcards-names-2k.jsonl
    type: sharegpt
  - path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/deduped_Gryphe-4o-WP-1k.jsonl
    type: sharegpt
  - path: datasets/deduped_not_samantha_norefusals.jsonl
    type: sharegpt

chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.0

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 64
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true

unfrozen_parameters:
  - ^lm_head.weight$
  - ^model.embed_tokens.weight$
  # input_layernorm layers
  - model.layers.0.input_layernorm
  - model.layers.1.input_layernorm
  - model.layers.2.input_layernorm
  - model.layers.3.input_layernorm
  - model.layers.4.input_layernorm
  - model.layers.5.input_layernorm
  - model.layers.6.input_layernorm
  - model.layers.7.input_layernorm
  - model.layers.8.input_layernorm
  - model.layers.9.input_layernorm
  - model.layers.10.input_layernorm
  - model.layers.11.input_layernorm
  - model.layers.12.input_layernorm
  - model.layers.13.input_layernorm
  - model.layers.14.input_layernorm
  - model.layers.15.input_layernorm
  - model.layers.16.input_layernorm
  - model.layers.17.input_layernorm
  - model.layers.18.input_layernorm
  - model.layers.19.input_layernorm
  - model.layers.20.input_layernorm
  - model.layers.21.input_layernorm
  - model.layers.22.input_layernorm
  - model.layers.23.input_layernorm
  - model.layers.24.input_layernorm
  - model.layers.25.input_layernorm
  - model.layers.26.input_layernorm
  - model.layers.27.input_layernorm
  - model.layers.28.input_layernorm
  - model.layers.29.input_layernorm
  - model.layers.30.input_layernorm
  - model.layers.31.input_layernorm
  # lm_head layers
  # mlp.down_proj layers
  - model.layers.63.mlp.down_proj
  - model.layers.49.mlp.down_proj
  - model.layers.48.mlp.down_proj
  - model.layers.45.mlp.down_proj
  - model.layers.44.mlp.down_proj
  - model.layers.47.mlp.down_proj
  - model.layers.46.mlp.down_proj
  - model.layers.43.mlp.down_proj
  - model.layers.8.mlp.down_proj
  - model.layers.11.mlp.down_proj
  - model.layers.19.mlp.down_proj
  - model.layers.35.mlp.down_proj
  - model.layers.20.mlp.down_proj
  - model.layers.52.mlp.down_proj
  - model.layers.39.mlp.down_proj
  - model.layers.62.mlp.down_proj
  - model.layers.50.mlp.down_proj
  - model.layers.29.mlp.down_proj
  - model.layers.16.mlp.down_proj
  - model.layers.28.mlp.down_proj
  - model.layers.53.mlp.down_proj
  - model.layers.30.mlp.down_proj
  - model.layers.31.mlp.down_proj
  - model.layers.32.mlp.down_proj
  - model.layers.7.mlp.down_proj
  - model.layers.36.mlp.down_proj
  - model.layers.12.mlp.down_proj
  - model.layers.18.mlp.down_proj
  - model.layers.37.mlp.down_proj
  - model.layers.38.mlp.down_proj
  - model.layers.14.mlp.down_proj
  - model.layers.13.mlp.down_proj
  # mlp.gate_proj layers
  - model.layers.43.mlp.gate_proj
  - model.layers.61.mlp.gate_proj
  - model.layers.60.mlp.gate_proj
  - model.layers.44.mlp.gate_proj
  - model.layers.62.mlp.gate_proj
  - model.layers.28.mlp.gate_proj
  - model.layers.29.mlp.gate_proj
  - model.layers.45.mlp.gate_proj
  - model.layers.37.mlp.gate_proj
  - model.layers.35.mlp.gate_proj
  - model.layers.59.mlp.gate_proj
  - model.layers.36.mlp.gate_proj
  - model.layers.30.mlp.gate_proj
  - model.layers.48.mlp.gate_proj
  - model.layers.38.mlp.gate_proj
  - model.layers.27.mlp.gate_proj
  - model.layers.31.mlp.gate_proj
  - model.layers.39.mlp.gate_proj
  - model.layers.34.mlp.gate_proj
  - model.layers.58.mlp.gate_proj
  - model.layers.33.mlp.gate_proj
  - model.layers.26.mlp.gate_proj
  - model.layers.32.mlp.gate_proj
  - model.layers.46.mlp.gate_proj
  - model.layers.42.mlp.gate_proj
  - model.layers.49.mlp.gate_proj
  - model.layers.57.mlp.gate_proj
  - model.layers.50.mlp.gate_proj
  - model.layers.47.mlp.gate_proj
  - model.layers.56.mlp.gate_proj
  - model.layers.63.mlp.gate_proj
  - model.layers.55.mlp.gate_proj
  # mlp.up_proj layers
  - model.layers.61.mlp.up_proj
  - model.layers.60.mlp.up_proj
  - model.layers.32.mlp.up_proj
  - model.layers.59.mlp.up_proj
  - model.layers.58.mlp.up_proj
  - model.layers.57.mlp.up_proj
  - model.layers.44.mlp.up_proj
  - model.layers.28.mlp.up_proj
  - model.layers.35.mlp.up_proj
  - model.layers.36.mlp.up_proj
  - model.layers.31.mlp.up_proj
  - model.layers.34.mlp.up_proj
  - model.layers.55.mlp.up_proj
  - model.layers.29.mlp.up_proj
  - model.layers.49.mlp.up_proj
  - model.layers.30.mlp.up_proj
  - model.layers.53.mlp.up_proj
  - model.layers.43.mlp.up_proj
  - model.layers.56.mlp.up_proj
  - model.layers.33.mlp.up_proj
  - model.layers.54.mlp.up_proj
  - model.layers.62.mlp.up_proj
  - model.layers.27.mlp.up_proj
  - model.layers.51.mlp.up_proj
  - model.layers.52.mlp.up_proj
  - model.layers.37.mlp.up_proj
  - model.layers.45.mlp.up_proj
  - model.layers.26.mlp.up_proj
  - model.layers.42.mlp.up_proj
  - model.layers.50.mlp.up_proj
  - model.layers.48.mlp.up_proj
  - model.layers.39.mlp.up_proj
  # model.embed_tokens layers
  # model.norm layers
  # post_attention_layernorm layers
  - model.layers.0.post_attention_layernorm
  - model.layers.1.post_attention_layernorm
  - model.layers.2.post_attention_layernorm
  - model.layers.3.post_attention_layernorm
  - model.layers.4.post_attention_layernorm
  - model.layers.5.post_attention_layernorm
  - model.layers.6.post_attention_layernorm
  - model.layers.7.post_attention_layernorm
  - model.layers.8.post_attention_layernorm
  - model.layers.9.post_attention_layernorm
  - model.layers.10.post_attention_layernorm
  - model.layers.11.post_attention_layernorm
  - model.layers.12.post_attention_layernorm
  - model.layers.13.post_attention_layernorm
  - model.layers.14.post_attention_layernorm
  - model.layers.15.post_attention_layernorm
  - model.layers.16.post_attention_layernorm
  - model.layers.17.post_attention_layernorm
  - model.layers.18.post_attention_layernorm
  - model.layers.19.post_attention_layernorm
  - model.layers.20.post_attention_layernorm
  - model.layers.21.post_attention_layernorm
  - model.layers.22.post_attention_layernorm
  - model.layers.23.post_attention_layernorm
  - model.layers.24.post_attention_layernorm
  - model.layers.25.post_attention_layernorm
  - model.layers.26.post_attention_layernorm
  - model.layers.27.post_attention_layernorm
  - model.layers.28.post_attention_layernorm
  - model.layers.29.post_attention_layernorm
  - model.layers.30.post_attention_layernorm
  - model.layers.31.post_attention_layernorm
  # self_attn.k_proj layers
  - model.layers.63.self_attn.k_proj
  - model.layers.55.self_attn.k_proj
  - model.layers.60.self_attn.k_proj
  - model.layers.7.self_attn.k_proj
  - model.layers.12.self_attn.k_proj
  - model.layers.13.self_attn.k_proj
  - model.layers.57.self_attn.k_proj
  - model.layers.29.self_attn.k_proj
  - model.layers.14.self_attn.k_proj
  - model.layers.51.self_attn.k_proj
  - model.layers.53.self_attn.k_proj
  - model.layers.54.self_attn.k_proj
  - model.layers.22.self_attn.k_proj
  - model.layers.61.self_attn.k_proj
  - model.layers.18.self_attn.k_proj
  - model.layers.30.self_attn.k_proj
  - model.layers.9.self_attn.k_proj
  - model.layers.24.self_attn.k_proj
  - model.layers.23.self_attn.k_proj
  - model.layers.25.self_attn.k_proj
  - model.layers.10.self_attn.k_proj
  - model.layers.58.self_attn.k_proj
  - model.layers.56.self_attn.k_proj
  - model.layers.15.self_attn.k_proj
  - model.layers.32.self_attn.k_proj
  - model.layers.28.self_attn.k_proj
  - model.layers.8.self_attn.k_proj
  - model.layers.59.self_attn.k_proj
  - model.layers.11.self_attn.k_proj
  - model.layers.48.self_attn.k_proj
  - model.layers.16.self_attn.k_proj
  - model.layers.50.self_attn.k_proj
  # self_attn.o_proj layers
  - model.layers.15.self_attn.o_proj
  - model.layers.23.self_attn.o_proj
  - model.layers.31.self_attn.o_proj
  - model.layers.30.self_attn.o_proj
  - model.layers.18.self_attn.o_proj
  - model.layers.24.self_attn.o_proj
  - model.layers.17.self_attn.o_proj
  - model.layers.28.self_attn.o_proj
  - model.layers.34.self_attn.o_proj
  - model.layers.33.self_attn.o_proj
  - model.layers.25.self_attn.o_proj
  - model.layers.12.self_attn.o_proj
  - model.layers.14.self_attn.o_proj
  - model.layers.29.self_attn.o_proj
  - model.layers.16.self_attn.o_proj
  - model.layers.26.self_attn.o_proj
  - model.layers.22.self_attn.o_proj
  - model.layers.27.self_attn.o_proj
  - model.layers.35.self_attn.o_proj
  - model.layers.20.self_attn.o_proj
  - model.layers.13.self_attn.o_proj
  - model.layers.36.self_attn.o_proj
  - model.layers.19.self_attn.o_proj
  - model.layers.37.self_attn.o_proj
  - model.layers.21.self_attn.o_proj
  - model.layers.11.self_attn.o_proj
  - model.layers.54.self_attn.o_proj
  - model.layers.5.self_attn.o_proj
  - model.layers.38.self_attn.o_proj
  - model.layers.6.self_attn.o_proj
  - model.layers.8.self_attn.o_proj
  - model.layers.9.self_attn.o_proj
  # self_attn.q_proj layers
  - model.layers.1.self_attn.q_proj
  - model.layers.2.self_attn.q_proj
  - model.layers.3.self_attn.q_proj
  - model.layers.45.self_attn.q_proj
  - model.layers.54.self_attn.q_proj
  - model.layers.35.self_attn.q_proj
  - model.layers.48.self_attn.q_proj
  - model.layers.61.self_attn.q_proj
  - model.layers.52.self_attn.q_proj
  - model.layers.50.self_attn.q_proj
  - model.layers.60.self_attn.q_proj
  - model.layers.56.self_attn.q_proj
  - model.layers.58.self_attn.q_proj
  - model.layers.42.self_attn.q_proj
  - model.layers.59.self_attn.q_proj
  - model.layers.44.self_attn.q_proj
  - model.layers.55.self_attn.q_proj
  - model.layers.57.self_attn.q_proj
  - model.layers.41.self_attn.q_proj
  - model.layers.36.self_attn.q_proj
  - model.layers.39.self_attn.q_proj
  - model.layers.4.self_attn.q_proj
  - model.layers.43.self_attn.q_proj
  - model.layers.34.self_attn.q_proj
  - model.layers.46.self_attn.q_proj
  - model.layers.49.self_attn.q_proj
  - model.layers.40.self_attn.q_proj
  - model.layers.25.self_attn.q_proj
  - model.layers.51.self_attn.q_proj
  - model.layers.17.self_attn.q_proj
  - model.layers.37.self_attn.q_proj
  - model.layers.53.self_attn.q_proj
  # self_attn.v_proj layers
  - model.layers.55.self_attn.v_proj
  - model.layers.31.self_attn.v_proj
  - model.layers.47.self_attn.v_proj
  - model.layers.45.self_attn.v_proj
  - model.layers.49.self_attn.v_proj
  - model.layers.48.self_attn.v_proj
  - model.layers.15.self_attn.v_proj
  - model.layers.30.self_attn.v_proj
  - model.layers.7.self_attn.v_proj
  - model.layers.44.self_attn.v_proj
  - model.layers.29.self_attn.v_proj
  - model.layers.51.self_attn.v_proj
  - model.layers.50.self_attn.v_proj
  - model.layers.14.self_attn.v_proj
  - model.layers.54.self_attn.v_proj
  - model.layers.32.self_attn.v_proj
  - model.layers.43.self_attn.v_proj
  - model.layers.10.self_attn.v_proj
  - model.layers.46.self_attn.v_proj
  - model.layers.38.self_attn.v_proj
  - model.layers.57.self_attn.v_proj
  - model.layers.22.self_attn.v_proj
  - model.layers.39.self_attn.v_proj
  - model.layers.6.self_attn.v_proj
  - model.layers.23.self_attn.v_proj
  - model.layers.58.self_attn.v_proj
  - model.layers.53.self_attn.v_proj
  - model.layers.40.self_attn.v_proj
  - model.layers.24.self_attn.v_proj
  - model.layers.9.self_attn.v_proj
  - model.layers.25.self_attn.v_proj
  - model.layers.5.self_attn.v_proj


wandb_project: EVA-Qwen2.5-32B-SFFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00003
max_grad_norm: 3

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
#   use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 2
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
#   - full_shard
#   - auto_wrap
# fsdp_config:
#   fsdp_limit_all_gathers: true
#   fsdp_sync_module_states: true
#   fsdp_offload_params: false # Changed from true
#   fsdp_use_orig_params: true # Changed from false
#   fsdp_cpu_ram_efficient_loading: true
#   fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
#   fsdp_activation_checkpointing: true
#   fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
#   fsdp_sharding_strategy: FULL_SHARD
#   fsdp_forward_prefetch: true # Added
#   fsdp_backward_prefetch: "BACKWARD_POST" # Added
#   fsdp_backward_prefetch_limit: 1 # Added
#   fsdp_mixed_precision: BF16 # Added
```

</details><br>