---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9430826604873457
---


# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2608
- Accuracy: 0.9431

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| B            | 0.8667    | 0.8917 | 0.8790   | 1043    |
| I            | 0.9496    | 0.9686 | 0.9590   | 17350   |
| O            | 0.9393    | 0.9009 | 0.9197   | 9226    |
| Macro avg    | 0.9185    | 0.9204 | 0.9192   | 27619   |
| Weighted avg | 0.9430    | 0.9431 | 0.9429   | 27619   |

## Model description

longformer-spans is [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) fine-tuned for token classification on the `spans` configuration of the essays_su_g dataset. For each token it predicts one of three labels: `B` (the token begins a span), `I` (the token is inside a span), or `O` (the token is outside any span). Together these labels delimit spans in essay text.
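The three labels (B, I, O) follow a standard BIO tagging scheme. A minimal sketch (a hypothetical helper, not part of this repository) of merging a predicted per-token label sequence into token-index spans:

```python
def bio_to_spans(labels):
    """Collapse a per-token B/I/O label sequence into (start, end) token
    index pairs, end exclusive. An I that follows an O starts a new span,
    which tolerates occasional missing B predictions."""
    spans = []
    start = None
    for i, label in enumerate(labels):
        if label == "B" or (label == "I" and start is None):
            if start is not None:        # a B that closes a previous span
                spans.append((start, i))
            start = i
        elif label == "O":
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:                # span running to the end of input
        spans.append((start, len(labels)))
    return spans
```

For example, `bio_to_spans(["O", "B", "I", "I", "O", "B", "I"])` yields `[(1, 4), (5, 7)]`.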

## Intended uses & limitations

The model is intended for token-level span identification in essay-length text: run it over a document, read off the per-token B/I/O predictions, and merge them into spans. Two limitations worth noting: it was fine-tuned only on the essays_su_g corpus, so transfer to other domains is untested here, and inputs are bounded by the base model's 4096-token context window.
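As a sketch, the model can be loaded through the `transformers` token-classification pipeline; the model path below is a placeholder for wherever this checkpoint is published:

```python
from transformers import pipeline

# Placeholder model id/path; point this at the published checkpoint.
tagger = pipeline("token-classification", model="path/to/longformer-spans")

predictions = tagger("Cooperation teaches children how to share responsibility.")
for p in predictions:
    # Without an aggregation strategy, each entry is one token with its label.
    print(p["word"], p["entity"])
```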

## Training and evaluation data

Both training and evaluation use the `spans` configuration of the essays_su_g dataset. Per the metadata above, evaluation is run on the `train[80%:100%]` slice, with the remaining data presumably used for training.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
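With `lr_scheduler_type: linear` and no warmup steps listed above (zero warmup is assumed here), the learning rate decays linearly from 2e-5 to 0 over the run's 574 optimizer steps (14 epochs × 41 steps, per the table below). A minimal sketch of that schedule:

```python
def linear_lr(step, total_steps=574, base_lr=2e-5):
    """Linear decay from base_lr at step 0 to 0 at total_steps,
    i.e. a linear schedule with zero warmup."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Midway through training (step 287) the rate has halved:
assert linear_lr(287) == 1e-5
```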

### Training results

| Training Loss | Epoch | Step | Validation Loss | B                                                                                                                   | I                                                                                                                   | O                                                                                                                  | Accuracy | Macro avg                                                                                                           | Weighted avg                                                                                                        |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log        | 1.0   | 41   | 0.2982          | {'precision': 0.8073929961089494, 'recall': 0.39789069990412274, 'f1-score': 0.5330764290301863, 'support': 1043.0} | {'precision': 0.881321540062435, 'recall': 0.9763112391930836, 'f1-score': 0.9263877495214657, 'support': 17350.0}  | {'precision': 0.93570069752695, 'recall': 0.7996965098634294, 'f1-score': 0.8623692361638712, 'support': 9226.0}   | 0.8955   | {'precision': 0.8748050778994448, 'recall': 0.724632816320212, 'f1-score': 0.773944471571841, 'support': 27619.0}   | {'precision': 0.8966948206093096, 'recall': 0.8954705094319129, 'f1-score': 0.8901497064529414, 'support': 27619.0} |
| No log        | 2.0   | 82   | 0.2000          | {'precision': 0.7943201376936316, 'recall': 0.8849472674976031, 'f1-score': 0.83718820861678, 'support': 1043.0}    | {'precision': 0.9348308374930671, 'recall': 0.9714697406340058, 'f1-score': 0.9527981910684004, 'support': 17350.0} | {'precision': 0.9481428740951703, 'recall': 0.866030782570995, 'f1-score': 0.9052285730470742, 'support': 9226.0}  | 0.9330   | {'precision': 0.8924312830939564, 'recall': 0.9074825969008679, 'f1-score': 0.8984049909107515, 'support': 27619.0} | {'precision': 0.9339714359868645, 'recall': 0.9329809189326188, 'f1-score': 0.9325418998354885, 'support': 27619.0} |
| No log        | 3.0   | 123  | 0.1755          | {'precision': 0.8634192932187201, 'recall': 0.8667305848513902, 'f1-score': 0.8650717703349282, 'support': 1043.0}  | {'precision': 0.9657913330193123, 'recall': 0.9454178674351585, 'f1-score': 0.9554960097862178, 'support': 17350.0} | {'precision': 0.9005006257822278, 'recall': 0.9358335139822241, 'f1-score': 0.9178271499946848, 'support': 9226.0} | 0.9392   | {'precision': 0.9099037506734201, 'recall': 0.9159939887562576, 'f1-score': 0.9127983100386103, 'support': 27619.0} | {'precision': 0.9401153091777049, 'recall': 0.939244722835729, 'f1-score': 0.9394981321590635, 'support': 27619.0}  |
| No log        | 4.0   | 164  | 0.1881          | {'precision': 0.8497164461247637, 'recall': 0.8619367209971237, 'f1-score': 0.8557829604950024, 'support': 1043.0}  | {'precision': 0.9385960028948394, 'recall': 0.9717579250720461, 'f1-score': 0.9548891343131425, 'support': 17350.0} | {'precision': 0.9423121656199116, 'recall': 0.8781703880338174, 'f1-score': 0.9091113105924596, 'support': 9226.0} | 0.9363   | {'precision': 0.9102082048798383, 'recall': 0.9039550113676623, 'f1-score': 0.9065944684668681, 'support': 27619.0} | {'precision': 0.9364809349919582, 'recall': 0.9363481661175278, 'f1-score': 0.9358546312196439, 'support': 27619.0} |
| No log        | 5.0   | 205  | 0.2061          | {'precision': 0.8460846084608461, 'recall': 0.9012464046021093, 'f1-score': 0.872794800371402, 'support': 1043.0}   | {'precision': 0.9360735652559273, 'recall': 0.9739481268011527, 'f1-score': 0.9546353313372126, 'support': 17350.0} | {'precision': 0.9493850520340587, 'recall': 0.8701495772815955, 'f1-score': 0.9080420766881574, 'support': 9226.0} | 0.9365   | {'precision': 0.9105144085836107, 'recall': 0.9151147028949524, 'f1-score': 0.9118240694655907, 'support': 27619.0} | {'precision': 0.9371218760230721, 'recall': 0.9365292009124153, 'f1-score': 0.9359804545788388, 'support': 27619.0} |
| No log        | 6.0   | 246  | 0.1957          | {'precision': 0.844061650045331, 'recall': 0.8926174496644296, 'f1-score': 0.8676607642124884, 'support': 1043.0}   | {'precision': 0.9341138786104821, 'recall': 0.9748703170028818, 'f1-score': 0.9540570268212201, 'support': 17350.0} | {'precision': 0.9499345938875015, 'recall': 0.865814003902016, 'f1-score': 0.9059257159058689, 'support': 9226.0}  | 0.9353   | {'precision': 0.9093700408477714, 'recall': 0.9111005901897758, 'f1-score': 0.9092145023131923, 'support': 27619.0} | {'precision': 0.9359979962379243, 'recall': 0.9353343712661574, 'f1-score': 0.9347163274329027, 'support': 27619.0} |
| No log        | 7.0   | 287  | 0.1979          | {'precision': 0.8534562211981567, 'recall': 0.887823585810163, 'f1-score': 0.8703007518796992, 'support': 1043.0}   | {'precision': 0.9429021904386509, 'recall': 0.9651296829971182, 'f1-score': 0.9538864678572446, 'support': 17350.0} | {'precision': 0.9317378917378918, 'recall': 0.8861911987860395, 'f1-score': 0.908393978112327, 'support': 9226.0}  | 0.9358   | {'precision': 0.9093654344582331, 'recall': 0.9130481558644402, 'f1-score': 0.9108603992830903, 'support': 27619.0} | {'precision': 0.9357949828738935, 'recall': 0.9358412686918426, 'f1-score': 0.9355333916361219, 'support': 27619.0} |
| No log        | 8.0   | 328  | 0.2078          | {'precision': 0.8595194085027726, 'recall': 0.8916586768935763, 'f1-score': 0.8752941176470588, 'support': 1043.0}  | {'precision': 0.9472519993193806, 'recall': 0.9625936599423631, 'f1-score': 0.9548612103713445, 'support': 17350.0} | {'precision': 0.9278014821468673, 'recall': 0.8956210708866248, 'f1-score': 0.9114273108316787, 'support': 9226.0} | 0.9375   | {'precision': 0.9115242966563403, 'recall': 0.9166244692408547, 'f1-score': 0.9138608796166939, 'support': 27619.0} | {'precision': 0.9374415223413826, 'recall': 0.9375429957637857, 'f1-score': 0.9373475554647807, 'support': 27619.0} |
| No log        | 9.0   | 369  | 0.2136          | {'precision': 0.8539944903581267, 'recall': 0.8916586768935763, 'f1-score': 0.8724202626641651, 'support': 1043.0}  | {'precision': 0.9485901936360548, 'recall': 0.9656484149855907, 'f1-score': 0.9570432994401918, 'support': 17350.0} | {'precision': 0.934596301308074, 'recall': 0.8983308042488619, 'f1-score': 0.9161047861169449, 'support': 9226.0}  | 0.9404   | {'precision': 0.9123936617674184, 'recall': 0.9185459653760096, 'f1-score': 0.9151894494071007, 'support': 27619.0} | {'precision': 0.9403432995002486, 'recall': 0.940367138564032, 'f1-score': 0.9401722848749406, 'support': 27619.0}  |
| No log        | 10.0  | 410  | 0.2702          | {'precision': 0.8539944903581267, 'recall': 0.8916586768935763, 'f1-score': 0.8724202626641651, 'support': 1043.0}  | {'precision': 0.9347357959251283, 'recall': 0.9757348703170029, 'f1-score': 0.9547954090409182, 'support': 17350.0} | {'precision': 0.9510630716237083, 'recall': 0.8678734012573163, 'f1-score': 0.9075658826863134, 'support': 9226.0} | 0.9365   | {'precision': 0.9132644526356545, 'recall': 0.9117556494892985, 'f1-score': 0.9115938514637989, 'support': 27619.0} | {'precision': 0.9371407441089408, 'recall': 0.9365292009124153, 'f1-score': 0.9359077995033341, 'support': 27619.0} |
| No log        | 11.0  | 451  | 0.2582          | {'precision': 0.852157943067034, 'recall': 0.8897411313518696, 'f1-score': 0.8705440900562851, 'support': 1043.0}   | {'precision': 0.9372609876406363, 'recall': 0.9746974063400576, 'f1-score': 0.9556126917752098, 'support': 17350.0} | {'precision': 0.9494521032166844, 'recall': 0.87340125731628, 'f1-score': 0.9098402303392988, 'support': 9226.0}   | 0.9377   | {'precision': 0.9129570113081181, 'recall': 0.9126132650027358, 'f1-score': 0.9119990040569311, 'support': 27619.0} | {'precision': 0.9381195544538573, 'recall': 0.9376516166407184, 'f1-score': 0.9371100928107088, 'support': 27619.0} |
| No log        | 12.0  | 492  | 0.2540          | {'precision': 0.8534798534798534, 'recall': 0.8935762224352828, 'f1-score': 0.8730679156908665, 'support': 1043.0}  | {'precision': 0.944104607441495, 'recall': 0.9696253602305476, 'f1-score': 0.9566948164576758, 'support': 17350.0}  | {'precision': 0.9408589802480478, 'recall': 0.8880338174723608, 'f1-score': 0.9136835061893611, 'support': 9226.0} | 0.9395   | {'precision': 0.9128144803897987, 'recall': 0.9170784667127304, 'f1-score': 0.9144820794459679, 'support': 27619.0} | {'precision': 0.9395980802367181, 'recall': 0.9394981715485716, 'f1-score': 0.9391690115394944, 'support': 27619.0} |
| 0.1251        | 13.0  | 533  | 0.2575          | {'precision': 0.8613406795224977, 'recall': 0.8993288590604027, 'f1-score': 0.8799249530956847, 'support': 1043.0}  | {'precision': 0.9515141204491323, 'recall': 0.9670893371757925, 'f1-score': 0.9592385090327007, 'support': 17350.0} | {'precision': 0.9372751798561151, 'recall': 0.9037502709733363, 'f1-score': 0.9202074826178127, 'support': 9226.0} | 0.9434   | {'precision': 0.916709993275915, 'recall': 0.9233894890698439, 'f1-score': 0.9197903149153994, 'support': 27619.0}  | {'precision': 0.9433523707551661, 'recall': 0.9433723161591658, 'f1-score': 0.9432051881830659, 'support': 27619.0} |
| 0.1251        | 14.0  | 574  | 0.2608          | {'precision': 0.8667287977632805, 'recall': 0.8916586768935763, 'f1-score': 0.8790170132325141, 'support': 1043.0}  | {'precision': 0.9495959767192179, 'recall': 0.9685878962536023, 'f1-score': 0.9589979170827745, 'support': 17350.0} | {'precision': 0.939315176856142, 'recall': 0.9009321482766096, 'f1-score': 0.9197233748271093, 'support': 9226.0}  | 0.9431   | {'precision': 0.9185466504462134, 'recall': 0.9203929071412628, 'f1-score': 0.9192461017141326, 'support': 27619.0} | {'precision': 0.9430323383837321, 'recall': 0.9430826604873457, 'f1-score': 0.9428580492538672, 'support': 27619.0} |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2