---
license: apache-2.0
base_model: google/bert_uncased_L-2_H-768_A-12
tags:
  - generated_from_trainer
datasets:
  - massive
metrics:
  - accuracy
model-index:
  - name: bert_uncased_L-2_H-768_A-12_massive
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: massive
          type: massive
          config: en-US
          split: validation
          args: en-US
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8745696015740285
---

bert_uncased_L-2_H-768_A-12_massive

This model is a fine-tuned version of google/bert_uncased_L-2_H-768_A-12 on the massive dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5434
  • Accuracy: 0.8746
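
This checkpoint can be used directly with the Transformers text-classification pipeline. A minimal inference sketch, not part of the original card; the Hub repo id below is an assumption, so substitute the actual id or a local checkpoint path:

```python
# Illustrative inference sketch only; the repo id is an assumption.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/bert_uncased_L-2_H-768_A-12_massive",  # assumed repo id
)

# Returns the predicted intent label and score; label names depend on the
# id2label mapping stored in the saved config.
print(classifier("wake me up at nine am on friday"))
```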

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
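
The card does not document the data preparation, but the metadata above points at the MASSIVE en-US text-classification setup with a validation split. A minimal, hedged sketch of how the data might be loaded and tokenized; the column names `utt` and `intent` follow the public MASSIVE schema and should be verified against the dataset you load:

```python
# Illustrative only; the original preprocessing script is not part of this card.
from datasets import load_dataset
from transformers import AutoTokenizer

# Dataset id and config taken from the card metadata (datasets: massive, config: en-US).
dataset = load_dataset("massive", "en-US")
tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-2_H-768_A-12")

def tokenize(batch):
    # In MASSIVE, "utt" holds the utterance text and "intent" the class label.
    return tokenizer(batch["utt"], truncation=True)

encoded = dataset.map(tokenize, batched=True)
encoded = encoded.rename_column("intent", "labels")
```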

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 33
  • distributed_type: multi-GPU
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
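
These hyperparameters map almost one-to-one onto transformers.TrainingArguments. A minimal sketch under that assumption; the original run used multi-GPU, while this shows a single-process setup with the same per-device batch size:

```python
# Sketch only; the exact training script is not included in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_uncased_L-2_H-768_A-12_massive",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=33,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # matches the per-epoch results table below
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```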

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5143        | 1.0   | 180  | 1.2564          | 0.7024   |
| 1.0135        | 2.0   | 360  | 0.7279          | 0.8205   |
| 0.6173        | 3.0   | 540  | 0.5817          | 0.8559   |
| 0.433         | 4.0   | 720  | 0.5234          | 0.8598   |
| 0.312         | 5.0   | 900  | 0.5019          | 0.8657   |
| 0.23          | 6.0   | 1080 | 0.5028          | 0.8711   |
| 0.1742        | 7.0   | 1260 | 0.5037          | 0.8682   |
| 0.1314        | 8.0   | 1440 | 0.5018          | 0.8692   |
| 0.1031        | 9.0   | 1620 | 0.5188          | 0.8731   |
| 0.081         | 10.0  | 1800 | 0.5231          | 0.8711   |
| 0.0671        | 11.0  | 1980 | 0.5407          | 0.8716   |
| 0.0569        | 12.0  | 2160 | 0.5309          | 0.8721   |
| 0.0466        | 13.0  | 2340 | 0.5463          | 0.8711   |
| 0.0414        | 14.0  | 2520 | 0.5434          | 0.8746   |
| 0.039         | 15.0  | 2700 | 0.5464          | 0.8721   |

Framework versions

  • Transformers 4.34.0
  • Pytorch 1.14.0a0+410ce96
  • Datasets 2.14.5
  • Tokenizers 0.14.1