Fizzarolli committed
Commit 277af36 · Parent: dcfa5f5
Update README.md
README.md CHANGED
@@ -18,6 +18,6 @@ a continually pretrained phi3-mini sparse moe upcycle
 *not trained on instruct data.* it's pretty likely that it won't be much different from phi 3 if you use it like that, if not worse due to any forgetting of instruct formats during the continued training.
 
 ## future experiments
-- the datasets for this were literally chosen on a whim. perhaps experiment with a
+- the datasets for this were literally chosen on a whim. perhaps experiment with a further filtered [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)?
 - actually freeze the gate layers next time (see [Chen et al., 2023](https://arxiv.org/abs/2303.01610)), oops
 - MOAR TRAINING, this only went up to ~0.2 of an epoch because i ran out of dollars
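
On the gate-freezing item above: a minimal sketch of what freezing the router ("gate") layers before continued pretraining could look like with 🤗 Transformers and PyTorch. The checkpoint id and the Mixtral-style `*.block_sparse_moe.gate` naming are assumptions for illustration, not taken from this repo's actual training code.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical repo id for the upcycled phi3-mini sparse MoE; substitute the real one.
model = AutoModelForCausalLM.from_pretrained(
    "Fizzarolli/phi3-moe-upcycle",
    torch_dtype=torch.bfloat16,
)

# Freeze the router parameters so continued pretraining only updates the
# experts and the rest of the network (Chen et al., 2023).
frozen = 0
for name, param in model.named_parameters():
    # Matching ".gate." assumes Mixtral-style routers at *.block_sparse_moe.gate;
    # it deliberately does NOT match MLP gate_proj / gate_up_proj weights.
    # Inspect model.named_parameters() first to confirm the naming.
    if ".gate." in name:
        param.requires_grad = False
        frozen += 1
print(f"froze {frozen} gate parameter tensors")
```

The optimizer would then be built over only the trainable parameters, e.g. `filter(lambda p: p.requires_grad, model.parameters())`, so the frozen routers receive no updates.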