---
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-spans
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: spans
          split: train[80%:100%]
          args: spans
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.935805061732865
---

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.1821
- Accuracy: 0.9358

| Class        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| B            | 0.8144    | 0.9003 | 0.8552   | 1043    |
| I            | 0.9393    | 0.9703 | 0.9545   | 17350   |
| O            | 0.9449    | 0.8750 | 0.9086   | 9226    |
| Macro avg    | 0.8995    | 0.9152 | 0.9061   | 27619   |
| Weighted avg | 0.9364    | 0.9358 | 0.9354   | 27619   |
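As a quick start, the checkpoint can be used through the `token-classification` pipeline. This is a minimal sketch, not part of the original card; the repo id below is an assumption inferred from the card's name:

```python
# Minimal usage sketch. The repo id is an assumption; replace it with the
# actual Hub id of the published checkpoint.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Theoreticallyhugo/longformer-spans",  # assumed Hub repo id
    aggregation_strategy="simple",  # merge consecutive tokens sharing a label
)
print(tagger("Schools should ban homework, because it adds stress without clear benefit."))
```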

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
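The model-index metadata at the top of this card does record the evaluation split (`train[80%:100%]` of the `spans` config). A hedged loading sketch with 🤗 Datasets, where the dataset id is an assumption:

```python
# Hedged sketch of loading the evaluation split recorded in the metadata.
# The dataset id is an assumption; on the Hub it may need a user namespace
# (e.g. "<user>/essays_su_g") or trust_remote_code for a script-based dataset.
from datasets import load_dataset

eval_split = load_dataset("essays_su_g", "spans", split="train[80%:100%]")
print(eval_split)
```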

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
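These settings map directly onto `transformers.TrainingArguments`. The following is a minimal sketch of an equivalent setup, not the author's actual training script; `num_labels=3` assumes the B/I/O label set:

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
# Dataset preparation is omitted; num_labels=3 assumes the B/I/O label set.
from transformers import AutoModelForTokenClassification, Trainer, TrainingArguments

model = AutoModelForTokenClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=3
)

args = TrainingArguments(
    output_dir="longformer-spans",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,            # Adam settings listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

# train_ds / eval_ds would be tokenized splits of essays_su_g (not shown here):
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```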

### Training results

| Training Loss | Epoch | Step | Validation Loss | B (P / R / F1) | I (P / R / F1) | O (P / R / F1) | Accuracy | Macro avg (P / R / F1) | Weighted avg (P / R / F1) |
|:-------------|:-----:|:----:|:---------------:|:--------------:|:--------------:|:--------------:|:--------:|:----------------------:|:-------------------------:|
| No log | 1.0 | 41  | 0.3420 | 0.7641 / 0.2205 / 0.3423 | 0.8499 / 0.9825 / 0.9114 | 0.9463 / 0.7446 / 0.8334 | 0.8743 | 0.8534 / 0.6492 / 0.6957 | 0.8788 / 0.8743 / 0.8639 |
| No log | 2.0 | 82  | 0.2028 | 0.7734 / 0.8706 / 0.8191 | 0.9413 / 0.9581 / 0.9496 | 0.9264 / 0.8822 / 0.9037 | 0.9294 | 0.8804 / 0.9036 / 0.8908 | 0.9300 / 0.9294 / 0.9294 |
| No log | 3.0 | 123 | 0.2004 | 0.7943 / 0.9070 / 0.8469 | 0.9220 / 0.9737 / 0.9471 | 0.9505 / 0.8350 / 0.8890 | 0.9248 | 0.8889 / 0.9052 / 0.8944 | 0.9267 / 0.9248 / 0.9239 |
| No log | 4.0 | 164 | 0.1732 | 0.8320 / 0.8926 / 0.8612 | 0.9532 / 0.9584 / 0.9558 | 0.9240 / 0.9069 / 0.9154 | 0.9387 | 0.9031 / 0.9193 / 0.9108 | 0.9389 / 0.9387 / 0.9387 |
| No log | 5.0 | 205 | 0.1821 | 0.8144 / 0.9003 / 0.8552 | 0.9393 / 0.9703 / 0.9545 | 0.9449 / 0.8750 / 0.9086 | 0.9358 | 0.8995 / 0.9152 / 0.9061 | 0.9364 / 0.9358 / 0.9354 |

Values are rounded to four decimals; per-class support is constant across epochs (B = 1043, I = 17350, O = 9226; 27619 tokens total).
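The per-class metrics in this card appear to follow the dictionary layout of scikit-learn's `classification_report`. A minimal sketch of producing the same structure from flattened token labels; the toy arrays below are illustrative stand-ins, not the card's data:

```python
# Toy sketch: per-class precision/recall/F1 dictionaries in the same shape as
# the metrics above. y_true/y_pred stand in for the flattened B/I/O token
# labels and model predictions on the evaluation split.
from sklearn.metrics import classification_report

y_true = ["B", "I", "I", "O", "O", "B", "I", "O"]
y_pred = ["B", "I", "O", "O", "O", "B", "I", "I"]

report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
print(report["B"])          # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
print(report["accuracy"])   # overall token accuracy
print(report["macro avg"])  # unweighted mean over classes
```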

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2