---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- kanishka/babylm2-rewritten-clean-spacy
metrics:
- accuracy
model-index:
- name: opt-babylm2-rewritten-clean-spacy-32k-earlystop_seed-42_3e-4
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: kanishka/babylm2-rewritten-clean-spacy
      type: kanishka/babylm2-rewritten-clean-spacy
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4242773576186558
---
# opt-babylm2-rewritten-clean-spacy-32k-earlystop_seed-42_3e-4
This model was trained from scratch on the kanishka/babylm2-rewritten-clean-spacy dataset. It achieves the following results on the evaluation set:
- Loss: 2.9642
- Accuracy: 0.4243
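
Below is a minimal inference sketch using the 🤗 Transformers causal-LM API. The hub id used here is an assumption inferred from the model name and dataset namespace; substitute the actual repository path or a local checkpoint directory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: this repository id is an assumption based on the model name;
# replace it with the real hub path or a local checkpoint directory.
model_id = "kanishka/opt-babylm2-rewritten-clean-spacy-32k-earlystop_seed-42_3e-4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation with greedy decoding.
inputs = tokenizer("The little girl looked at the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```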
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
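
For reproduction, the list above maps onto `transformers.TrainingArguments` roughly as in the sketch below. The output directory and any options not reported in this card (evaluation/save strategy, early-stopping patience) are assumptions, not values taken from the original run.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; fields not reported in this card
# (output_dir, evaluation/save strategy, early-stopping settings) are assumptions.
training_args = TrainingArguments(
    output_dir="opt-babylm2-rewritten-clean-spacy-32k-earlystop_seed-42_3e-4",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=8,  # 32 * 8 = 256 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,  # "Native AMP" mixed precision
)
```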
### Training results
| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.8783        | 0.9996  | 1931  | 4.4654          | 0.2891   |
| 4.2263        | 1.9997  | 3863  | 3.9179          | 0.3337   |
| 3.7724        | 2.9999  | 5795  | 3.6397          | 0.3562   |
| 3.5091        | 4.0     | 7727  | 3.4569          | 0.3724   |
| 3.3306        | 4.9996  | 9658  | 3.3310          | 0.3838   |
| 3.2012        | 5.9997  | 11590 | 3.2469          | 0.3918   |
| 3.1088        | 6.9999  | 13522 | 3.1828          | 0.3982   |
| 3.0364        | 8.0     | 15454 | 3.1404          | 0.4023   |
| 2.9837        | 8.9996  | 17385 | 3.1080          | 0.4057   |
| 2.9377        | 9.9997  | 19317 | 3.0840          | 0.4077   |
| 2.9019        | 10.9999 | 21249 | 3.0633          | 0.4101   |
| 2.8713        | 12.0    | 23181 | 3.0505          | 0.4117   |
| 2.8449        | 12.9996 | 25112 | 3.0376          | 0.4130   |
| 2.8231        | 13.9997 | 27044 | 3.0270          | 0.4143   |
| 2.7828        | 14.9999 | 28976 | 3.0222          | 0.4150   |
| 2.7644        | 16.0    | 30908 | 3.0160          | 0.4156   |
| 2.7508        | 16.9996 | 32839 | 3.0037          | 0.4175   |
| 2.7036        | 17.9997 | 34771 | 2.9802          | 0.4205   |
| 2.6333        | 18.9999 | 36703 | 2.9677          | 0.4231   |
| 2.557         | 19.9922 | 38620 | 2.9642          | 0.4243   |
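
Since the validation loss above is a mean cross-entropy in nats, the corresponding perplexity is simply its exponential. The short snippet below is an illustrative aside (not part of the original training script) computing it for the final checkpoint.

```python
import math

# Final validation loss from the table above (mean cross-entropy in nats).
final_val_loss = 2.9642

# Perplexity is the exponential of the mean cross-entropy loss.
perplexity = math.exp(final_val_loss)
print(f"Validation perplexity ~ {perplexity:.2f}")  # ~ 19.38
```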
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0