Theoreticallyhugo committed
Commit: 49c7a0d
Parent(s): 0b03303
Training in progress, epoch 1
This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
- README.md +27 -16
- meta_data/README_s42_e10.md +89 -0
- meta_data/README_s42_e11.md +90 -0
- meta_data/README_s42_e12.md +91 -0
- meta_data/README_s42_e13.md +92 -0
- meta_data/README_s42_e14.md +93 -0
- meta_data/README_s42_e15.md +94 -0
- meta_data/README_s42_e4.md +13 -14
- meta_data/README_s42_e5.md +16 -17
- meta_data/README_s42_e6.md +17 -18
- meta_data/README_s42_e7.md +18 -18
- meta_data/README_s42_e8.md +87 -0
- meta_data/README_s42_e9.md +88 -0
- meta_data/meta_s42_e10_cvi0.json +1 -0
- meta_data/meta_s42_e10_cvi1.json +1 -0
- meta_data/meta_s42_e10_cvi2.json +1 -0
- meta_data/meta_s42_e10_cvi3.json +1 -0
- meta_data/meta_s42_e10_cvi4.json +1 -0
- meta_data/meta_s42_e11_cvi0.json +1 -0
- meta_data/meta_s42_e11_cvi1.json +1 -0
- meta_data/meta_s42_e11_cvi2.json +1 -0
- meta_data/meta_s42_e11_cvi3.json +1 -0
- meta_data/meta_s42_e11_cvi4.json +1 -0
- meta_data/meta_s42_e12_cvi0.json +1 -0
- meta_data/meta_s42_e12_cvi1.json +1 -0
- meta_data/meta_s42_e12_cvi2.json +1 -0
- meta_data/meta_s42_e12_cvi3.json +1 -0
- meta_data/meta_s42_e12_cvi4.json +1 -0
- meta_data/meta_s42_e13_cvi0.json +1 -0
- meta_data/meta_s42_e13_cvi1.json +1 -0
- meta_data/meta_s42_e13_cvi2.json +1 -0
- meta_data/meta_s42_e13_cvi3.json +1 -0
- meta_data/meta_s42_e13_cvi4.json +1 -0
- meta_data/meta_s42_e14_cvi0.json +1 -0
- meta_data/meta_s42_e14_cvi1.json +1 -0
- meta_data/meta_s42_e14_cvi2.json +1 -0
- meta_data/meta_s42_e14_cvi3.json +1 -0
- meta_data/meta_s42_e14_cvi4.json +1 -0
- meta_data/meta_s42_e15_cvi0.json +1 -0
- meta_data/meta_s42_e15_cvi1.json +1 -0
- meta_data/meta_s42_e15_cvi2.json +1 -0
- meta_data/meta_s42_e15_cvi3.json +1 -0
- meta_data/meta_s42_e15_cvi4.json +1 -0
- meta_data/meta_s42_e16_cvi0.json +1 -0
- meta_data/meta_s42_e4_cvi0.json +1 -0
- meta_data/meta_s42_e4_cvi1.json +1 -0
- meta_data/meta_s42_e4_cvi2.json +1 -0
- meta_data/meta_s42_e4_cvi3.json +1 -0
- meta_data/meta_s42_e4_cvi4.json +1 -0
- meta_data/meta_s42_e5_cvi0.json +1 -0
README.md
CHANGED
@@ -17,12 +17,12 @@ model-index:
       name: essays_su_g
       type: essays_su_g
       config: spans
-      split:
+      split: train[80%:100%]
       args: spans
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.9412361055794923
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,13 +32,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- B: {'precision': 0.
-- I: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.2837
+- B: {'precision': 0.8616822429906542, 'recall': 0.8839884947267498, 'f1-score': 0.872692853762423, 'support': 1043.0}
+- I: {'precision': 0.9506446299767138, 'recall': 0.9647262247838617, 'f1-score': 0.957633664216037, 'support': 17350.0}
+- O: {'precision': 0.9322299261910088, 'recall': 0.9035334923043572, 'f1-score': 0.9176574196389256, 'support': 9226.0}
+- Accuracy: 0.9412
+- Macro avg: {'precision': 0.9148522663861257, 'recall': 0.9174160706049896, 'f1-score': 0.9159946458724618, 'support': 27619.0}
+- Weighted avg: {'precision': 0.9411337198513156, 'recall': 0.9412361055794923, 'f1-score': 0.9410720907422853, 'support': 27619.0}
 
 ## Model description
 
@@ -63,16 +63,27 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 15
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | B
-|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
+| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
+| No log | 1.0 | 41 | 0.2927 | {'precision': 0.8069852941176471, 'recall': 0.42090124640460214, 'f1-score': 0.5532451165721487, 'support': 1043.0} | {'precision': 0.8852390417407678, 'recall': 0.9754466858789625, 'f1-score': 0.9281561917297357, 'support': 17350.0} | {'precision': 0.9349000879728541, 'recall': 0.8063082592672881, 'f1-score': 0.8658557876971424, 'support': 9226.0} | 0.8980 | {'precision': 0.8757081412770896, 'recall': 0.7342187305169509, 'f1-score': 0.7824190319996757, 'support': 27619.0} | {'precision': 0.8988729225389979, 'recall': 0.8980049965603389, 'f1-score': 0.8931869394398603, 'support': 27619.0} |
+| No log | 2.0 | 82 | 0.1958 | {'precision': 0.7986171132238548, 'recall': 0.8859060402684564, 'f1-score': 0.84, 'support': 1043.0} | {'precision': 0.9361619307123394, 'recall': 0.9703170028818444, 'f1-score': 0.9529335182407381, 'support': 17350.0} | {'precision': 0.9455124425050124, 'recall': 0.8689572946022112, 'f1-score': 0.905619881389438, 'support': 9226.0} | 0.9333 | {'precision': 0.8934304954804023, 'recall': 0.908393445917504, 'f1-score': 0.8995177998767253, 'support': 27619.0} | {'precision': 0.9340912032116592, 'recall': 0.933270574604439, 'f1-score': 0.932863809956036, 'support': 27619.0} |
+| No log | 3.0 | 123 | 0.1754 | {'precision': 0.8552631578947368, 'recall': 0.87248322147651, 'f1-score': 0.8637873754152824, 'support': 1043.0} | {'precision': 0.966759166322253, 'recall': 0.9437463976945245, 'f1-score': 0.9551141832181294, 'support': 17350.0} | {'precision': 0.8988355167394468, 'recall': 0.9370257966616085, 'f1-score': 0.9175334323922734, 'support': 9226.0} | 0.9388 | {'precision': 0.9069526136521455, 'recall': 0.9177518052775477, 'f1-score': 0.9121449970085617, 'support': 27619.0} | {'precision': 0.9398590639347346, 'recall': 0.9388102393279989, 'f1-score': 0.9391116535227126, 'support': 27619.0} |
+| No log | 4.0 | 164 | 0.1844 | {'precision': 0.861003861003861, 'recall': 0.8552253116011506, 'f1-score': 0.8581048581048581, 'support': 1043.0} | {'precision': 0.9428187016481668, 'recall': 0.9693371757925072, 'f1-score': 0.9558940547914061, 'support': 17350.0} | {'precision': 0.9376786735277302, 'recall': 0.8887925428137872, 'f1-score': 0.912581381114017, 'support': 9226.0} | 0.9381 | {'precision': 0.9138337453932527, 'recall': 0.904451676735815, 'f1-score': 0.908860098003427, 'support': 27619.0} | {'precision': 0.9380120548386821, 'recall': 0.9381223071074261, 'f1-score': 0.937732757876541, 'support': 27619.0} |
+| No log | 5.0 | 205 | 0.2030 | {'precision': 0.8463611859838275, 'recall': 0.9031639501438159, 'f1-score': 0.8738404452690166, 'support': 1043.0} | {'precision': 0.9367116741679169, 'recall': 0.9716426512968299, 'f1-score': 0.9538574702237813, 'support': 17350.0} | {'precision': 0.9452344576330943, 'recall': 0.8717754172989378, 'f1-score': 0.9070200169157033, 'support': 9226.0} | 0.9357 | {'precision': 0.9094357725949461, 'recall': 0.9155273395798611, 'f1-score': 0.9115726441361671, 'support': 27619.0} | {'precision': 0.9361466877844027, 'recall': 0.9356964408559325, 'f1-score': 0.9351898826482664, 'support': 27619.0} |
+| No log | 6.0 | 246 | 0.1880 | {'precision': 0.8593012275731823, 'recall': 0.87248322147651, 'f1-score': 0.8658420551855375, 'support': 1043.0} | {'precision': 0.9416148372275452, 'recall': 0.9685878962536023, 'f1-score': 0.954910929908799, 'support': 17350.0} | {'precision': 0.9369907035464249, 'recall': 0.8848905267721656, 'f1-score': 0.9101956630804393, 'support': 9226.0} | 0.9370 | {'precision': 0.9126355894490508, 'recall': 0.9086538815007593, 'f1-score': 0.9103162160582586, 'support': 27619.0} | {'precision': 0.9369616871420418, 'recall': 0.9369998913791231, 'f1-score': 0.9366104162010322, 'support': 27619.0} |
+| No log | 7.0 | 287 | 0.1950 | {'precision': 0.8525345622119815, 'recall': 0.8868648130393096, 'f1-score': 0.8693609022556391, 'support': 1043.0} | {'precision': 0.9470030477480528, 'recall': 0.9670893371757925, 'f1-score': 0.9569408007300102, 'support': 17350.0} | {'precision': 0.9362522686025408, 'recall': 0.8946455668762194, 'f1-score': 0.9149761667220929, 'support': 9226.0} | 0.9399 | {'precision': 0.9119299595208584, 'recall': 0.9161999056971072, 'f1-score': 0.9137592899025807, 'support': 27619.0} | {'precision': 0.9398443048967325, 'recall': 0.9398602411383468, 'f1-score': 0.939615352760648, 'support': 27619.0} |
+| No log | 8.0 | 328 | 0.2260 | {'precision': 0.8517495395948435, 'recall': 0.8868648130393096, 'f1-score': 0.868952559887271, 'support': 1043.0} | {'precision': 0.933457985041795, 'recall': 0.978328530259366, 'f1-score': 0.955366691056453, 'support': 17350.0} | {'precision': 0.9556833153671098, 'recall': 0.8648384998916107, 'f1-score': 0.9079943100995733, 'support': 9226.0} | 0.9370 | {'precision': 0.9136302800012494, 'recall': 0.9100106143967621, 'f1-score': 0.9107711870144325, 'support': 27619.0} | {'precision': 0.9377966283301177, 'recall': 0.9369636844201455, 'f1-score': 0.9362788339465783, 'support': 27619.0} |
+| No log | 9.0 | 369 | 0.2217 | {'precision': 0.8499079189686924, 'recall': 0.8849472674976031, 'f1-score': 0.8670737435415689, 'support': 1043.0} | {'precision': 0.9531535648994516, 'recall': 0.9616138328530259, 'f1-score': 0.9573650083204224, 'support': 17350.0} | {'precision': 0.927455975191051, 'recall': 0.9076522870149577, 'f1-score': 0.9174472747192549, 'support': 9226.0} | 0.9407 | {'precision': 0.910172486353065, 'recall': 0.9180711291218623, 'f1-score': 0.9139620088604153, 'support': 27619.0} | {'precision': 0.9406704492415535, 'recall': 0.9406930011948297, 'f1-score': 0.9406209263707241, 'support': 27619.0} |
+| No log | 10.0 | 410 | 0.2663 | {'precision': 0.8574091332712023, 'recall': 0.8820709491850431, 'f1-score': 0.8695652173913044, 'support': 1043.0} | {'precision': 0.9361054205193511, 'recall': 0.9744668587896254, 'f1-score': 0.9549010194572307, 'support': 17350.0} | {'precision': 0.9483794932233353, 'recall': 0.8722089746368957, 'f1-score': 0.9087008074078257, 'support': 9226.0} | 0.9368 | {'precision': 0.9139646823379629, 'recall': 0.9095822608705214, 'f1-score': 0.9110556814187869, 'support': 27619.0} | {'precision': 0.937233642655096, 'recall': 0.9368188565842355, 'f1-score': 0.9362454418504176, 'support': 27619.0} |
+| No log | 11.0 | 451 | 0.2752 | {'precision': 0.8570110701107011, 'recall': 0.8906999041227229, 'f1-score': 0.8735307945463094, 'support': 1043.0} | {'precision': 0.9348246340789838, 'recall': 0.9755043227665706, 'f1-score': 0.954731349598082, 'support': 17350.0} | {'precision': 0.9505338078291815, 'recall': 0.8685237372642532, 'f1-score': 0.9076801087449027, 'support': 9226.0} | 0.9366 | {'precision': 0.9141231706729555, 'recall': 0.9115759880511822, 'f1-score': 0.911980750963098, 'support': 27619.0} | {'precision': 0.9371336709666482, 'recall': 0.9365654078713929, 'f1-score': 0.9359476526130199, 'support': 27619.0} |
+| No log | 12.0 | 492 | 0.2662 | {'precision': 0.8555657773689053, 'recall': 0.8916586768935763, 'f1-score': 0.8732394366197183, 'support': 1043.0} | {'precision': 0.9461304151624549, 'recall': 0.9667435158501441, 'f1-score': 0.9563259022749302, 'support': 17350.0} | {'precision': 0.9358246251703771, 'recall': 0.8930197268588771, 'f1-score': 0.9139212423738213, 'support': 9226.0} | 0.9393 | {'precision': 0.9125069392339125, 'recall': 0.9171406398675325, 'f1-score': 0.9144955270894899, 'support': 27619.0} | {'precision': 0.9392677432450942, 'recall': 0.9392809297947066, 'f1-score': 0.9390231550383894, 'support': 27619.0} |
+| 0.1232 | 13.0 | 533 | 0.2681 | {'precision': 0.8646895273401297, 'recall': 0.8945349952061361, 'f1-score': 0.8793590951932139, 'support': 1043.0} | {'precision': 0.9548364966841985, 'recall': 0.9626512968299712, 'f1-score': 0.9587279719878308, 'support': 17350.0} | {'precision': 0.9293766578249337, 'recall': 0.9114459137220897, 'f1-score': 0.9203239575352961, 'support': 9226.0} | 0.9430 | {'precision': 0.9163008939497539, 'recall': 0.9228774019193989, 'f1-score': 0.9194703415721136, 'support': 27619.0} | {'precision': 0.9429274571700438, 'recall': 0.9429740396104132, 'f1-score': 0.9429020124731535, 'support': 27619.0} |
+| 0.1232 | 14.0 | 574 | 0.2835 | {'precision': 0.8643592142188962, 'recall': 0.8859060402684564, 'f1-score': 0.875, 'support': 1043.0} | {'precision': 0.9461283248045886, 'recall': 0.9697406340057637, 'f1-score': 0.9577889733299177, 'support': 17350.0} | {'precision': 0.9405726018022128, 'recall': 0.8937784522003035, 'f1-score': 0.9165786694825766, 'support': 9226.0} | 0.9412 | {'precision': 0.9170200469418992, 'recall': 0.9164750421581745, 'f1-score': 0.9164558809374981, 'support': 27619.0} | {'precision': 0.9411845439739722, 'recall': 0.9411998986205149, 'f1-score': 0.9408964297013043, 'support': 27619.0} |
+| 0.1232 | 15.0 | 615 | 0.2837 | {'precision': 0.8616822429906542, 'recall': 0.8839884947267498, 'f1-score': 0.872692853762423, 'support': 1043.0} | {'precision': 0.9506446299767138, 'recall': 0.9647262247838617, 'f1-score': 0.957633664216037, 'support': 17350.0} | {'precision': 0.9322299261910088, 'recall': 0.9035334923043572, 'f1-score': 0.9176574196389256, 'support': 9226.0} | 0.9412 | {'precision': 0.9148522663861257, 'recall': 0.9174160706049896, 'f1-score': 0.9159946458724618, 'support': 27619.0} | {'precision': 0.9411337198513156, 'recall': 0.9412361055794923, 'f1-score': 0.9410720907422853, 'support': 27619.0} |
 
 
 ### Framework versions
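The updated card describes a Longformer checkpoint fine-tuned for B/I/O span tagging (token classification). As orientation for readers of this diff, the sketch below shows how such a checkpoint could be loaded for inference with the `transformers` pipeline API; the repository id `Theoreticallyhugo/longformer-spans` is an assumption based on the model name in the card and may differ from the real repo.

```python
# Minimal inference sketch for a longformer-spans style checkpoint.
# Assumption: the fine-tuned model is published as
# "Theoreticallyhugo/longformer-spans" (hypothetical id) with B/I/O labels.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "Theoreticallyhugo/longformer-spans"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges consecutive tokens that share a tag,
# so the output is a list of spans rather than per-token labels.
tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

for span in tagger("Cloning should be banned because it harms human dignity."):
    print(span["entity_group"], round(span["score"], 3), span["word"])
```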
meta_data/README_s42_e10.md
ADDED
@@ -0,0 +1,89 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.939172308917774
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- B: {'precision': 0.8636788048552755, 'recall': 0.8868648130393096, 'f1-score': 0.8751182592242194, 'support': 1043.0}
- I: {'precision': 0.948943661971831, 'recall': 0.9630547550432277, 'f1-score': 0.9559471365638768, 'support': 17350.0}
- O: {'precision': 0.9289709172259508, 'recall': 0.9001734229351832, 'f1-score': 0.9143454805680943, 'support': 9226.0}
- Accuracy: 0.9392
- Macro avg: {'precision': 0.9138644613510191, 'recall': 0.9166976636725735, 'f1-score': 0.9151369587853968, 'support': 27619.0}
- Weighted avg: {'precision': 0.9390519284189125, 'recall': 0.939172308917774, 'f1-score': 0.9389978843359775, 'support': 27619.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2992 | {'precision': 0.7993138936535163, 'recall': 0.4467881112176414, 'f1-score': 0.5731857318573186, 'support': 1043.0} | {'precision': 0.8829387840233601, 'recall': 0.9759654178674352, 'f1-score': 0.9271243977222952, 'support': 17350.0} | {'precision': 0.9361160600661746, 'recall': 0.7973119445046607, 'f1-score': 0.8611566377897449, 'support': 9226.0} | 0.8963 | {'precision': 0.8727895792476836, 'recall': 0.7400218245299124, 'f1-score': 0.7871555891231196, 'support': 27619.0} | {'precision': 0.8975444101544748, 'recall': 0.8963032694883957, 'f1-score': 0.8917220811418657, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2008 | {'precision': 0.7899231426131511, 'recall': 0.8868648130393096, 'f1-score': 0.8355916892502258, 'support': 1043.0} | {'precision': 0.9329899761865205, 'recall': 0.9710086455331413, 'f1-score': 0.951619736210354, 'support': 17350.0} | {'precision': 0.9478012155881301, 'recall': 0.862020377194884, 'f1-score': 0.9028779020264517, 'support': 9226.0} | 0.9314 | {'precision': 0.8902381114626006, 'recall': 0.9066312785891116, 'f1-score': 0.8966964424956773, 'support': 27619.0} | {'precision': 0.9325348470110335, 'recall': 0.9314240196965857, 'f1-score': 0.9309560838275704, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1754 | {'precision': 0.8574144486692015, 'recall': 0.8648130393096836, 'f1-score': 0.8610978520286395, 'support': 1043.0} | {'precision': 0.9637472869126532, 'recall': 0.9469164265129683, 'f1-score': 0.9552577259644737, 'support': 17350.0} | {'precision': 0.9027310924369748, 'recall': 0.9314979406026447, 'f1-score': 0.916888936306412, 'support': 9226.0} | 0.9387 | {'precision': 0.9079642760062764, 'recall': 0.9144091354750987, 'f1-score': 0.9110815047665084, 'support': 27619.0} | {'precision': 0.9393495693805003, 'recall': 0.9386654114920888, 'f1-score': 0.9388849680116025, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1737 | {'precision': 0.8583732057416268, 'recall': 0.8600191754554171, 'f1-score': 0.8591954022988505, 'support': 1043.0} | {'precision': 0.9443852068017284, 'recall': 0.9699135446685879, 'f1-score': 0.9569791577810003, 'support': 17350.0} | {'precision': 0.9394631639063392, 'recall': 0.8915022761760243, 'f1-score': 0.9148545687114177, 'support': 9226.0} | 0.9396 | {'precision': 0.9140738588165648, 'recall': 0.9071449987666765, 'f1-score': 0.9103430429304228, 'support': 27619.0} | {'precision': 0.9394928759838658, 'recall': 0.9395705854665267, 'f1-score': 0.939214940549245, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.2081 | {'precision': 0.8590225563909775, 'recall': 0.8763183125599233, 'f1-score': 0.8675842429995253, 'support': 1043.0} | {'precision': 0.9336273428886439, 'recall': 0.9761383285302594, 'f1-score': 0.9544096928712315, 'support': 17350.0} | {'precision': 0.9511586452762923, 'recall': 0.8675482332538478, 'f1-score': 0.9074315514993481, 'support': 9226.0} | 0.9361 | {'precision': 0.9146028481853046, 'recall': 0.9066682914480101, 'f1-score': 0.9098084957900351, 'support': 27619.0} | {'precision': 0.9366662292897221, 'recall': 0.9360947174046852, 'f1-score': 0.9354379967014502, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1913 | {'precision': 0.8325991189427313, 'recall': 0.9060402684563759, 'f1-score': 0.8677685950413224, 'support': 1043.0} | {'precision': 0.935986255057363, 'recall': 0.973371757925072, 'f1-score': 0.9543129997457125, 'support': 17350.0} | {'precision': 0.9491766378391185, 'recall': 0.8684153479297637, 'f1-score': 0.9070017546838739, 'support': 9226.0} | 0.9358 | {'precision': 0.9059206706130709, 'recall': 0.9159424581037373, 'f1-score': 0.9096944498236362, 'support': 27619.0} | {'precision': 0.9364881446470265, 'recall': 0.9357688547738875, 'f1-score': 0.9352406451692542, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1970 | {'precision': 0.8484848484848485, 'recall': 0.8859060402684564, 'f1-score': 0.8667917448405252, 'support': 1043.0} | {'precision': 0.9405801971326165, 'recall': 0.9680115273775216, 'f1-score': 0.9540987331704823, 'support': 17350.0} | {'precision': 0.9371685496887249, 'recall': 0.8810969000650336, 'f1-score': 0.9082681564245809, 'support': 9226.0} | 0.9359 | {'precision': 0.9087445317687299, 'recall': 0.9116714892370039, 'f1-score': 0.9097195448118628, 'support': 27619.0} | {'precision': 0.9359626762970696, 'recall': 0.9358774756508201, 'f1-score': 0.9354921909391984, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.2042 | {'precision': 0.8507734303912647, 'recall': 0.8964525407478428, 'f1-score': 0.8730158730158729, 'support': 1043.0} | {'precision': 0.9413907099232364, 'recall': 0.96835734870317, 'f1-score': 0.9546836378100406, 'support': 17350.0} | {'precision': 0.9384296091317883, 'recall': 0.8821807934099285, 'f1-score': 0.9094362813565005, 'support': 9226.0} | 0.9369 | {'precision': 0.9101979164820965, 'recall': 0.9156635609536471, 'f1-score': 0.9123785973941381, 'support': 27619.0} | {'precision': 0.9369795097185314, 'recall': 0.936855063543213, 'f1-score': 0.9364848764747034, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2107 | {'precision': 0.8516483516483516, 'recall': 0.8916586768935763, 'f1-score': 0.8711943793911008, 'support': 1043.0} | {'precision': 0.9517556380245504, 'recall': 0.960806916426513, 'f1-score': 0.9562598594579091, 'support': 17350.0} | {'precision': 0.9260985352862849, 'recall': 0.9046173856492521, 'f1-score': 0.9152319333260226, 'support': 9226.0} | 0.9394 | {'precision': 0.9098341749863957, 'recall': 0.9190276596564471, 'f1-score': 0.9142287240583441, 'support': 27619.0} | {'precision': 0.9394045634181702, 'recall': 0.9394257576306166, 'f1-score': 0.9393422685892149, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.2166 | {'precision': 0.8636788048552755, 'recall': 0.8868648130393096, 'f1-score': 0.8751182592242194, 'support': 1043.0} | {'precision': 0.948943661971831, 'recall': 0.9630547550432277, 'f1-score': 0.9559471365638768, 'support': 17350.0} | {'precision': 0.9289709172259508, 'recall': 0.9001734229351832, 'f1-score': 0.9143454805680943, 'support': 9226.0} | 0.9392 | {'precision': 0.9138644613510191, 'recall': 0.9166976636725735, 'f1-score': 0.9151369587853968, 'support': 27619.0} | {'precision': 0.9390519284189125, 'recall': 0.939172308917774, 'f1-score': 0.9389978843359775, 'support': 27619.0} |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
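The hyperparameter list in this card maps directly onto `transformers.TrainingArguments`. The sketch below shows one way to reproduce that configuration for the 10-epoch run; `model`, `tokenizer`, and the tokenized dataset splits are assumed to already exist, and per-epoch evaluation is inferred from the per-epoch rows in the results table rather than stated in the card.

```python
# Sketch of a Trainer setup matching the card's hyperparameters
# (lr 2e-05, batch size 8, seed 42, linear schedule, 10 epochs).
# Assumptions: `model`, `tokenizer`, `train_ds`, and `eval_ds` are defined elsewhere.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="longformer-spans",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # matches the per-epoch validation rows above
)

trainer = Trainer(
    model=model,             # assumed: AutoModelForTokenClassification with 3 labels (B/I/O)
    args=args,
    train_dataset=train_ds,  # assumed: tokenized essays_su_g training slice
    eval_dataset=eval_ds,    # assumed: tokenized train[80%:100%] slice
    tokenizer=tokenizer,
)
trainer.train()
```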
meta_data/README_s42_e11.md
ADDED
@@ -0,0 +1,90 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9396792063434593
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2269
- B: {'precision': 0.8579285059578369, 'recall': 0.8974113135186961, 'f1-score': 0.8772258669165884, 'support': 1043.0}
- I: {'precision': 0.9460510739049551, 'recall': 0.9672622478386167, 'f1-score': 0.9565390863233492, 'support': 17350.0}
- O: {'precision': 0.9369666628740471, 'recall': 0.8925861695209192, 'f1-score': 0.9142381348875936, 'support': 9226.0}
- Accuracy: 0.9397
- Macro avg: {'precision': 0.9136487475789464, 'recall': 0.9190865769594107, 'f1-score': 0.9160010293758437, 'support': 27619.0}
- Weighted avg: {'precision': 0.9396886199949656, 'recall': 0.9396792063434593, 'f1-score': 0.9394134747592979, 'support': 27619.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2961 | {'precision': 0.8145695364238411, 'recall': 0.3537871524448706, 'f1-score': 0.49331550802139046, 'support': 1043.0} | {'precision': 0.8821044947335489, 'recall': 0.9750432276657061, 'f1-score': 0.9262483574244416, 'support': 17350.0} | {'precision': 0.9316474712068102, 'recall': 0.8066334272707566, 'f1-score': 0.8646450563494831, 'support': 9226.0} | 0.8953 | {'precision': 0.8761071674547334, 'recall': 0.7118212691271112, 'f1-score': 0.7614029739317717, 'support': 27619.0} | {'precision': 0.8961037177114005, 'recall': 0.8953256815960028, 'f1-score': 0.8893208431174447, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2112 | {'precision': 0.7865546218487395, 'recall': 0.8974113135186961, 'f1-score': 0.83833407971339, 'support': 1043.0} | {'precision': 0.924818593485733, 'recall': 0.9770028818443804, 'f1-score': 0.9501947924549455, 'support': 17350.0} | {'precision': 0.9595061728395061, 'recall': 0.8424019076522871, 'f1-score': 0.8971487937204201, 'support': 9226.0} | 0.9290 | {'precision': 0.8902931293913262, 'recall': 0.905605367671788, 'f1-score': 0.8952258886295853, 'support': 27619.0} | {'precision': 0.931184438907382, 'recall': 0.9290343604040696, 'f1-score': 0.9282507283065632, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1780 | {'precision': 0.8625954198473282, 'recall': 0.8667305848513902, 'f1-score': 0.8646580583452893, 'support': 1043.0} | {'precision': 0.9670896584440227, 'recall': 0.94, 'f1-score': 0.9533524288303034, 'support': 17350.0} | {'precision': 0.8919336561244463, 'recall': 0.9384348580099718, 'f1-score': 0.9145935667881477, 'support': 9226.0} | 0.9367 | {'precision': 0.9072062448052658, 'recall': 0.9150551476204539, 'f1-score': 0.9108680179879135, 'support': 27619.0} | {'precision': 0.9380380357112386, 'recall': 0.9367102357073029, 'f1-score': 0.9370557674878652, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1855 | {'precision': 0.845719661335842, 'recall': 0.8619367209971237, 'f1-score': 0.8537511870845204, 'support': 1043.0} | {'precision': 0.9384769427601936, 'recall': 0.9723919308357348, 'f1-score': 0.9551334673196139, 'support': 17350.0} | {'precision': 0.9438162956055485, 'recall': 0.8776284413613701, 'f1-score': 0.909519797809604, 'support': 9226.0} | 0.9366 | {'precision': 0.9093376332338613, 'recall': 0.9039856977314096, 'f1-score': 0.9061348174045795, 'support': 27619.0} | {'precision': 0.9367576562120075, 'recall': 0.9365654078713929, 'f1-score': 0.9360678446256514, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.1980 | {'precision': 0.8513761467889909, 'recall': 0.8897411313518696, 'f1-score': 0.8701359587435537, 'support': 1043.0} | {'precision': 0.9398964883966832, 'recall': 0.9734293948126801, 'f1-score': 0.9563690931226817, 'support': 17350.0} | {'precision': 0.9483644859813084, 'recall': 0.8799046173856493, 'f1-score': 0.9128528055774204, 'support': 9226.0} | 0.9390 | {'precision': 0.9132123737223274, 'recall': 0.9143583811833996, 'f1-score': 0.9131192858145519, 'support': 27619.0} | {'precision': 0.9393823144374135, 'recall': 0.939027481081864, 'f1-score': 0.9385761814296439, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1904 | {'precision': 0.828695652173913, 'recall': 0.9137104506232023, 'f1-score': 0.8691290469676243, 'support': 1043.0} | {'precision': 0.9355999778503793, 'recall': 0.9738328530259366, 'f1-score': 0.9543336439888163, 'support': 17350.0} | {'precision': 0.9504161712247324, 'recall': 0.8663559505744635, 'f1-score': 0.906441369925153, 'support': 9226.0} | 0.9357 | {'precision': 0.9049039337496749, 'recall': 0.917966418074534, 'f1-score': 0.9099680202938645, 'support': 27619.0} | {'precision': 0.9365121393475814, 'recall': 0.935660233896955, 'f1-score': 0.9351177956523646, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1881 | {'precision': 0.8546296296296296, 'recall': 0.8849472674976031, 'f1-score': 0.8695242581252944, 'support': 1043.0} | {'precision': 0.9503849443969205, 'recall': 0.9605187319884726, 'f1-score': 0.9554249677511824, 'support': 17350.0} | {'precision': 0.9249222567747668, 'recall': 0.9026663776284414, 'f1-score': 0.9136588041689524, 'support': 9226.0} | 0.9383 | {'precision': 0.909978943600439, 'recall': 0.916044125704839, 'f1-score': 0.9128693433484765, 'support': 27619.0} | {'precision': 0.9382631605052417, 'recall': 0.9383395488612911, 'f1-score': 0.9382292305648449, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.2086 | {'precision': 0.8519195612431444, 'recall': 0.8935762224352828, 'f1-score': 0.8722508189050071, 'support': 1043.0} | {'precision': 0.9404728634508971, 'recall': 0.9697982708933718, 'f1-score': 0.9549104735960954, 'support': 17350.0} | {'precision': 0.940699559879546, 'recall': 0.8803381747236072, 'f1-score': 0.9095184770436731, 'support': 9226.0} | 0.9370 | {'precision': 0.9110306615245292, 'recall': 0.9145708893507539, 'f1-score': 0.9122265898482586, 'support': 27619.0} | {'precision': 0.937204476001968, 'recall': 0.9370360983381005, 'f1-score': 0.9366259383111303, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2106 | {'precision': 0.8523245214220602, 'recall': 0.8964525407478428, 'f1-score': 0.8738317757009345, 'support': 1043.0} | {'precision': 0.9503868912152936, 'recall': 0.9627665706051873, 'f1-score': 0.9565366775468134, 'support': 17350.0} | {'precision': 0.929689246590655, 'recall': 0.9014740949490571, 'f1-score': 0.9153642967202289, 'support': 9226.0} | 0.9398 | {'precision': 0.9108002197426696, 'recall': 0.9202310687673624, 'f1-score': 0.9152442499893256, 'support': 27619.0} | {'precision': 0.9397697247356507, 'recall': 0.9397878272203918, 'f1-score': 0.9396599767925747, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.2272 | {'precision': 0.8681214421252372, 'recall': 0.8772770853307766, 'f1-score': 0.8726752503576539, 'support': 1043.0} | {'precision': 0.9461503725445924, 'recall': 0.9661095100864553, 'f1-score': 0.9560257799577938, 'support': 17350.0} | {'precision': 0.932873771047576, 'recall': 0.8947539562107089, 'f1-score': 0.9134163208852005, 'support': 9226.0} | 0.9389 | {'precision': 0.9157151952391351, 'recall': 0.9127135172093136, 'f1-score': 0.9140391170668827, 'support': 27619.0} | {'precision': 0.938768711375149, 'recall': 0.9389188602049314, 'f1-score': 0.938644648425997, 'support': 27619.0} |
| No log | 11.0 | 451 | 0.2269 | {'precision': 0.8579285059578369, 'recall': 0.8974113135186961, 'f1-score': 0.8772258669165884, 'support': 1043.0} | {'precision': 0.9460510739049551, 'recall': 0.9672622478386167, 'f1-score': 0.9565390863233492, 'support': 17350.0} | {'precision': 0.9369666628740471, 'recall': 0.8925861695209192, 'f1-score': 0.9142381348875936, 'support': 9226.0} | 0.9397 | {'precision': 0.9136487475789464, 'recall': 0.9190865769594107, 'f1-score': 0.9160010293758437, 'support': 27619.0} | {'precision': 0.9396886199949656, 'recall': 0.9396792063434593, 'f1-score': 0.9394134747592979, 'support': 27619.0} |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
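The `split: train[80%:100%]` entry in the card metadata is a `datasets` slicing expression: the last 20% of the train split is held out for evaluation. A sketch of loading that slice is shown below; whether the bare id `essays_su_g` resolves directly on the Hub or needs a namespace prefix or local loading script is an assumption.

```python
# Sketch of loading the evaluation slice named in the card metadata
# (config "spans", split "train[80%:100%]"). The bare dataset id is taken
# from the card and may need a namespace prefix in practice.
from datasets import load_dataset

train_slice = load_dataset("essays_su_g", "spans", split="train[:80%]")
eval_slice = load_dataset("essays_su_g", "spans", split="train[80%:100%]")

print(eval_slice)                      # expected: token sequences with B/I/O span labels
print(len(train_slice), len(eval_slice))
```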
meta_data/README_s42_e12.md
ADDED
@@ -0,0 +1,91 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9414895542923349
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2394
- B: {'precision': 0.8633828996282528, 'recall': 0.8906999041227229, 'f1-score': 0.8768286927796131, 'support': 1043.0}
- I: {'precision': 0.9488209014307527, 'recall': 0.9670317002881844, 'f1-score': 0.9578397510918277, 'support': 17350.0}
- O: {'precision': 0.9363431151241535, 'recall': 0.8991979189247779, 'f1-score': 0.917394669910428, 'support': 9226.0}
- Accuracy: 0.9415
- Macro avg: {'precision': 0.9161823053943863, 'recall': 0.9189765077785618, 'f1-score': 0.9173543712606228, 'support': 27619.0}
- Weighted avg: {'precision': 0.941426285682728, 'recall': 0.9414895542923349, 'f1-score': 0.9412699675080908, 'support': 27619.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2858 | {'precision': 0.8085106382978723, 'recall': 0.36433365292425696, 'f1-score': 0.5023132848645077, 'support': 1043.0} | {'precision': 0.8888126286890872, 'recall': 0.9703170028818444, 'f1-score': 0.9277782370284644, 'support': 17350.0} | {'precision': 0.9216617933723197, 'recall': 0.8199653154129634, 'f1-score': 0.8678444418951474, 'support': 9226.0} | 0.8972 | {'precision': 0.8729950201197597, 'recall': 0.7182053237396883, 'f1-score': 0.7659786545960398, 'support': 27619.0} | {'precision': 0.8967532281818084, 'recall': 0.8972084434628336, 'f1-score': 0.8916904301199235, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2181 | {'precision': 0.7889273356401384, 'recall': 0.8744007670182167, 'f1-score': 0.8294679399727148, 'support': 1043.0} | {'precision': 0.921101216333623, 'recall': 0.9776945244956773, 'f1-score': 0.9485544930939999, 'support': 17350.0} | {'precision': 0.9581210388964831, 'recall': 0.8356817689139389, 'f1-score': 0.8927227464829502, 'support': 9226.0} | 0.9264 | {'precision': 0.8893831969567482, 'recall': 0.8959256868092776, 'f1-score': 0.8902483931832217, 'support': 27619.0} | {'precision': 0.9284761222100719, 'recall': 0.9263550454397336, 'f1-score': 0.9254069870605068, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1805 | {'precision': 0.8276785714285714, 'recall': 0.8887823585810163, 'f1-score': 0.8571428571428571, 'support': 1043.0} | {'precision': 0.9608084358523726, 'recall': 0.9453025936599424, 'f1-score': 0.952992446252179, 'support': 17350.0} | {'precision': 0.9023226216990137, 'recall': 0.9221764578365489, 'f1-score': 0.9121415170195658, 'support': 9226.0} | 0.9354 | {'precision': 0.8969365429933193, 'recall': 0.9187538033591692, 'f1-score': 0.9074256068048673, 'support': 27619.0} | {'precision': 0.9362440211388452, 'recall': 0.9354429921430899, 'f1-score': 0.9357267308192845, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1988 | {'precision': 0.8492366412213741, 'recall': 0.8533077660594439, 'f1-score': 0.8512673362027738, 'support': 1043.0} | {'precision': 0.9303964757709251, 'recall': 0.9738328530259366, 'f1-score': 0.9516192621796676, 'support': 17350.0} | {'precision': 0.9456663892521697, 'recall': 0.8621287665293735, 'f1-score': 0.9019674547825594, 'support': 9226.0} | 0.9320 | {'precision': 0.9084331687481564, 'recall': 0.8964231285382512, 'f1-score': 0.9016180177216668, 'support': 27619.0} | {'precision': 0.9324324116970188, 'recall': 0.9319671240812484, 'f1-score': 0.9312436282378297, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.2100 | {'precision': 0.8506375227686703, 'recall': 0.8954937679769894, 'f1-score': 0.8724894908921066, 'support': 1043.0} | {'precision': 0.9365704772475028, 'recall': 0.9727377521613833, 'f1-score': 0.9543115634718687, 'support': 17350.0} | {'precision': 0.9460063521938595, 'recall': 0.8716670279644483, 'f1-score': 0.9073165228182998, 'support': 9226.0} | 0.9361 | {'precision': 0.9110714507366775, 'recall': 0.9132995160342737, 'f1-score': 0.9113725257274249, 'support': 27619.0} | {'precision': 0.9364773279927747, 'recall': 0.9360585104457076, 'f1-score': 0.9355231690053595, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.2054 | {'precision': 0.8465073529411765, 'recall': 0.8830297219558965, 'f1-score': 0.8643829188174565, 'support': 1043.0} | {'precision': 0.9236811957885549, 'recall': 0.9759077809798271, 'f1-score': 0.94907653933466, 'support': 17350.0} | {'precision': 0.9496341463414634, 'recall': 0.8440277476696293, 'f1-score': 0.8937220245610008, 'support': 9226.0} | 0.9283 | {'precision': 0.9066075650237316, 'recall': 0.9009884168684509, 'f1-score': 0.9023938275710391, 'support': 27619.0} | {'precision': 0.929436277569623, 'recall': 0.9283464281834969, 'f1-score': 0.9273872602332724, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1949 | {'precision': 0.851063829787234, 'recall': 0.8820709491850431, 'f1-score': 0.8662900188323918, 'support': 1043.0} | {'precision': 0.9430497051390059, 'recall': 0.9677809798270893, 'f1-score': 0.9552552979661499, 'support': 17350.0} | {'precision': 0.9366769724035269, 'recall': 0.8866247561239974, 'f1-score': 0.910963862130408, 'support': 9226.0} | 0.9374 | {'precision': 0.9102635024432556, 'recall': 0.9121588950453766, 'f1-score': 0.9108363929763166, 'support': 27619.0} | {'precision': 0.9374471815063824, 'recall': 0.9374343748868532, 'f1-score': 0.937100275222493, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.2038 | {'precision': 0.8602050326188257, 'recall': 0.8849472674976031, 'f1-score': 0.8724007561436674, 'support': 1043.0} | {'precision': 0.9485302462830553, 'recall': 0.9634005763688761, 'f1-score': 0.9559075832094247, 'support': 17350.0} | {'precision': 0.9286194531600179, 'recall': 0.8982224149143724, 'f1-score': 0.913168044077135, 'support': 9226.0} | 0.9387 | {'precision': 0.9124515773539663, 'recall': 0.9155234195936172, 'f1-score': 0.913825461143409, 'support': 27619.0} | {'precision': 0.938543636514239, 'recall': 0.9386654114920888, 'f1-score': 0.9384770966362654, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2182 | {'precision': 0.8558310376492194, 'recall': 0.8935762224352828, 'f1-score': 0.874296435272045, 'support': 1043.0} | {'precision': 0.9498548579885024, 'recall': 0.9618443804034582, 'f1-score': 0.9558120221083077, 'support': 17350.0} | {'precision': 0.9273518580515567, 'recall': 0.9007153696076307, 'f1-score': 0.9138395557266179, 'support': 9226.0} | 0.9388 | {'precision': 0.9110125845630929, 'recall': 0.9187119908154573, 'f1-score': 0.9146493377023236, 'support': 27619.0} | {'precision': 0.9387871320740184, 'recall': 0.9388464462869763, 'f1-score': 0.9387129695753523, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.2523 | {'precision': 0.861652739090065, 'recall': 0.8897411313518696, 'f1-score': 0.8754716981132075, 'support': 1043.0} | {'precision': 0.938376753507014, 'recall': 0.9715850144092218, 'f1-score': 0.9546921900662626, 'support': 17350.0} | {'precision': 0.9432268594077874, 'recall': 0.8769781053544331, 'f1-score': 0.9088968771062682, 'support': 9226.0} | 0.9369 | {'precision': 0.9144187840016221, 'recall': 0.9127680837051749, 'f1-score': 0.9130202550952461, 'support': 27619.0} | {'precision': 0.9370995142877685, 'recall': 0.9368912705021906, 'f1-score': 0.9364028048431936, 'support': 27619.0} |
| No log | 11.0 | 451 | 0.2504 | {'precision': 0.8530762167125804, 'recall': 0.8906999041227229, 'f1-score': 0.8714821763602252, 'support': 1043.0} | {'precision': 0.9388493211662586, 'recall': 0.972507204610951, 'f1-score': 0.9553819149538532, 'support': 17350.0} | {'precision': 0.9457817247020331, 'recall': 0.8773032733579016, 'f1-score': 0.9102564102564102, 'support': 9226.0} | 0.9376 | {'precision': 0.9125690875269573, 'recall': 0.9135034606971919, 'f1-score': 0.9123735005234962, 'support': 27619.0} | {'precision': 0.9379259353476508, 'recall': 0.9376154096817408, 'f1-score': 0.9371395696954526, 'support': 27619.0} |
| No log | 12.0 | 492 | 0.2394 | {'precision': 0.8633828996282528, 'recall': 0.8906999041227229, 'f1-score': 0.8768286927796131, 'support': 1043.0} | {'precision': 0.9488209014307527, 'recall': 0.9670317002881844, 'f1-score': 0.9578397510918277, 'support': 17350.0} | {'precision': 0.9363431151241535, 'recall': 0.8991979189247779, 'f1-score': 0.917394669910428, 'support': 9226.0} | 0.9415 | {'precision': 0.9161823053943863, 'recall': 0.9189765077785618, 'f1-score': 0.9173543712606228, 'support': 27619.0} | {'precision': 0.941426285682728, 'recall': 0.9414895542923349, 'f1-score': 0.9412699675080908, 'support': 27619.0} |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
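The per-label dictionaries reported throughout these cards ({'precision', 'recall', 'f1-score', 'support'} plus accuracy, macro avg, and weighted avg) have exactly the shape returned by scikit-learn's `classification_report` with `output_dict=True`, so a metric function along the following lines would reproduce that structure. Whether the original evaluation actually used scikit-learn is an assumption, and the token tags below are a toy example.

```python
# Sketch of computing B/I/O metrics in the same shape as the tables above.
from sklearn.metrics import classification_report

y_true = ["O", "B", "I", "I", "O", "B", "I", "O"]   # toy gold token tags
y_pred = ["O", "B", "I", "O", "O", "B", "I", "I"]   # toy predicted token tags

report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
print(report["B"])             # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
print(report["accuracy"])      # overall token accuracy
print(report["macro avg"])     # unweighted mean over B, I, O
print(report["weighted avg"])  # support-weighted mean over B, I, O
```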
meta_data/README_s42_e13.md
ADDED
@@ -0,0 +1,92 @@
1 |
+
---
|
2 |
+
license: apache-2.0
|
3 |
+
base_model: allenai/longformer-base-4096
|
4 |
+
tags:
|
5 |
+
- generated_from_trainer
|
6 |
+
datasets:
|
7 |
+
- essays_su_g
|
8 |
+
metrics:
|
9 |
+
- accuracy
|
10 |
+
model-index:
|
11 |
+
- name: longformer-spans
|
12 |
+
results:
|
13 |
+
- task:
|
14 |
+
name: Token Classification
|
15 |
+
type: token-classification
|
16 |
+
dataset:
|
17 |
+
name: essays_su_g
|
18 |
+
type: essays_su_g
|
19 |
+
config: spans
|
20 |
+
split: train[80%:100%]
|
21 |
+
args: spans
|
22 |
+
metrics:
|
23 |
+
- name: Accuracy
|
24 |
+
type: accuracy
|
25 |
+
value: 0.9388464462869763
|
26 |
+
---
|
27 |
+
|
28 |
+
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
|
29 |
+
should probably proofread and complete it, then remove this comment. -->
|
30 |
+
|
31 |
+
# longformer-spans
|
32 |
+
|
33 |
+
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
|
34 |
+
It achieves the following results on the evaluation set:
|
35 |
+
- Loss: 0.2612
|
36 |
+
- B: {'precision': 0.8656716417910447, 'recall': 0.8897411313518696, 'f1-score': 0.8775413711583925, 'support': 1043.0}
|
37 |
+
- I: {'precision': 0.9471240942028986, 'recall': 0.9642651296829972, 'f1-score': 0.9556177528988404, 'support': 17350.0}
|
38 |
+
- O: {'precision': 0.9312169312169312, 'recall': 0.8965965748970302, 'f1-score': 0.9135788834281297, 'support': 9226.0}
|
39 |
+
- Accuracy: 0.9388
|
40 |
+
- Macro avg: {'precision': 0.9146708890702916, 'recall': 0.9168676119772989, 'f1-score': 0.9155793358284542, 'support': 27619.0}
|
41 |
+
- Weighted avg: {'precision': 0.9387344206602614, 'recall': 0.9388464462869763, 'f1-score': 0.9386263963728234, 'support': 27619.0}
|
42 |
+
|
43 |
+
## Model description
|
44 |
+
|
45 |
+
More information needed
|
46 |
+
|
47 |
+
## Intended uses & limitations
|
48 |
+
|
49 |
+
More information needed
|
50 |
+
|
51 |
+
## Training and evaluation data
|
52 |
+
|
53 |
+
More information needed
|
54 |
+
|
55 |
+
## Training procedure
|
56 |
+
|
57 |
+
### Training hyperparameters
|
58 |
+
|
59 |
+
The following hyperparameters were used during training:
|
60 |
+
- learning_rate: 2e-05
|
61 |
+
- train_batch_size: 8
|
62 |
+
- eval_batch_size: 8
|
63 |
+
- seed: 42
|
64 |
+
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
|
65 |
+
- lr_scheduler_type: linear
|
66 |
+
- num_epochs: 13
|
67 |
+
|
68 |
+
### Training results
|
69 |
+
|
70 |
+
| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|
71 |
+
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
|
72 |
+
| No log | 1.0 | 41 | 0.2988 | {'precision': 0.8082474226804124, 'recall': 0.37583892617449666, 'f1-score': 0.513089005235602, 'support': 1043.0} | {'precision': 0.8794226460319942, 'recall': 0.9727377521613833, 'f1-score': 0.9237295093183, 'support': 17350.0} | {'precision': 0.9257207604179781, 'recall': 0.7969867765011923, 'f1-score': 0.856543770749607, 'support': 9226.0} | 0.8915 | {'precision': 0.8711302763767949, 'recall': 0.715187818279024, 'f1-score': 0.7644540951011697, 'support': 27619.0} | {'precision': 0.8922004672916121, 'recall': 0.8914877439443861, 'f1-score': 0.885779052393972, 'support': 27619.0} |
|
73 |
+
| No log | 2.0 | 82 | 0.1948 | {'precision': 0.7984293193717278, 'recall': 0.8772770853307766, 'f1-score': 0.8359981726815899, 'support': 1043.0} | {'precision': 0.9427846674182638, 'recall': 0.9639769452449568, 'f1-score': 0.9532630379025364, 'support': 17350.0} | {'precision': 0.9346158250314898, 'recall': 0.8846737481031867, 'f1-score': 0.9089592961746199, 'support': 9226.0} | 0.9342 | {'precision': 0.8919432706071605, 'recall': 0.9086425928929733, 'f1-score': 0.8994068355862487, 'support': 27619.0} | {'precision': 0.9346044882708322, 'recall': 0.9342119555378544, 'f1-score': 0.9340352028756634, 'support': 27619.0} |
|
74 |
+
| No log | 3.0 | 123 | 0.1740 | {'precision': 0.848089468779124, 'recall': 0.87248322147651, 'f1-score': 0.8601134215500945, 'support': 1043.0} | {'precision': 0.9581905812670577, 'recall': 0.9510662824207493, 'f1-score': 0.9546151398571056, 'support': 17350.0} | {'precision': 0.9091689008042896, 'recall': 0.9189247778018643, 'f1-score': 0.9140208075036386, 'support': 9226.0} | 0.9374 | {'precision': 0.9051496502834904, 'recall': 0.9141580938997079, 'f1-score': 0.9095831229702794, 'support': 27619.0} | {'precision': 0.937657271434174, 'recall': 0.9373619609688982, 'f1-score': 0.9374860402341179, 'support': 27619.0} |
|
75 |
+
| No log | 4.0 | 164 | 0.1780 | {'precision': 0.8725868725868726, 'recall': 0.8667305848513902, 'f1-score': 0.8696488696488696, 'support': 1043.0} | {'precision': 0.9619125269349484, 'recall': 0.9519884726224784, 'f1-score': 0.9569247704295936, 'support': 17350.0} | {'precision': 0.9102209944751382, 'recall': 0.9285714285714286, 'f1-score': 0.9193046464212898, 'support': 9226.0} | 0.9409 | {'precision': 0.9149067979989863, 'recall': 0.9157634953484323, 'f1-score': 0.9152927621665844, 'support': 27619.0} | {'precision': 0.9412719267698718, 'recall': 0.9409464499076723, 'f1-score': 0.9410620661819779, 'support': 27619.0} |
|
76 |
+
| No log | 5.0 | 205 | 0.1934 | {'precision': 0.8405017921146953, 'recall': 0.8993288590604027, 'f1-score': 0.8689207966651227, 'support': 1043.0} | {'precision': 0.9406415620641562, 'recall': 0.9718155619596541, 'f1-score': 0.9559744861800141, 'support': 17350.0} | {'precision': 0.9460247143856377, 'recall': 0.8795794493821808, 'f1-score': 0.9115929004718041, 'support': 9226.0} | 0.9383 | {'precision': 0.9090560228548297, 'recall': 0.9169079568007459, 'f1-score': 0.9121627277723136, 'support': 27619.0} | {'precision': 0.9386581152797215, 'recall': 0.9382671349433361, 'f1-score': 0.93786153828516, 'support': 27619.0} |
|
77 |
+
| No log | 6.0 | 246 | 0.2013 | {'precision': 0.8481481481481481, 'recall': 0.87823585810163, 'f1-score': 0.8629298162976919, 'support': 1043.0} | {'precision': 0.9306275504577037, 'recall': 0.9726801152737752, 'f1-score': 0.9511892684026604, 'support': 17350.0} | {'precision': 0.9445568114217727, 'recall': 0.8605029265120312, 'f1-score': 0.9005728546310476, 'support': 9226.0} | 0.9316 | {'precision': 0.9077775033425416, 'recall': 0.9038062999624789, 'f1-score': 0.9048973131104666, 'support': 27619.0} | {'precision': 0.9321658156029167, 'recall': 0.9316412614504508, 'f1-score': 0.9309480706039573, 'support': 27619.0} |
|
78 |
+
| No log | 7.0 | 287 | 0.2083 | {'precision': 0.8447488584474886, 'recall': 0.8868648130393096, 'f1-score': 0.8652946679139383, 'support': 1043.0} | {'precision': 0.940964601271878, 'recall': 0.9636887608069165, 'f1-score': 0.9521911216150801, 'support': 17350.0} | {'precision': 0.9294117647058824, 'recall': 0.8819640147409495, 'f1-score': 0.9050664590400979, 'support': 9226.0} | 0.9335 | {'precision': 0.9050417414750829, 'recall': 0.9108391961957253, 'f1-score': 0.9075174161897054, 'support': 27619.0} | {'precision': 0.933471951649382, 'recall': 0.933487816358304, 'f1-score': 0.9331677993323372, 'support': 27619.0} |
|
79 |
+
| No log | 8.0 | 328 | 0.2452 | {'precision': 0.8446251129177959, 'recall': 0.8964525407478428, 'f1-score': 0.8697674418604651, 'support': 1043.0} | {'precision': 0.9315356136376195, 'recall': 0.9716426512968299, 'f1-score': 0.9511665303128614, 'support': 17350.0} | {'precision': 0.9429590017825312, 'recall': 0.8600693691740733, 'f1-score': 0.8996088657105606, 'support': 9226.0} | 0.9315 | {'precision': 0.9063732427793155, 'recall': 0.9093881870729152, 'f1-score': 0.9068476126279624, 'support': 27619.0} | {'precision': 0.932069468113675, 'recall': 0.9315326405735183, 'f1-score': 0.9308699858008703, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2303 | {'precision': 0.8538812785388128, 'recall': 0.8964525407478428, 'f1-score': 0.8746492048643593, 'support': 1043.0} | {'precision': 0.9594571080563773, 'recall': 0.9534293948126801, 'f1-score': 0.9564337544447978, 'support': 17350.0} | {'precision': 0.9145750296240439, 'recall': 0.9202254498157382, 'f1-score': 0.9173915392511751, 'support': 9226.0} | 0.9402 | {'precision': 0.909304472073078, 'recall': 0.9233691284587536, 'f1-score': 0.9161581661867775, 'support': 27619.0} | {'precision': 0.9404775053986587, 'recall': 0.9401861037691445, 'f1-score': 0.9403033817814589, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.2620 | {'precision': 0.8548983364140481, 'recall': 0.8868648130393096, 'f1-score': 0.8705882352941177, 'support': 1043.0} | {'precision': 0.9359622327131353, 'recall': 0.9712968299711816, 'f1-score': 0.9533022203365862, 'support': 17350.0} | {'precision': 0.9423347398030942, 'recall': 0.8714502492954693, 'f1-score': 0.9055073769568645, 'support': 9226.0} | 0.9348 | {'precision': 0.9110651029767592, 'recall': 0.9098706307686535, 'f1-score': 0.9097992775291894, 'support': 27619.0} | {'precision': 0.9350296539294, 'recall': 0.9347550599225171, 'f1-score': 0.9342129733898971, 'support': 27619.0} |
| No log | 11.0 | 451 | 0.2666 | {'precision': 0.84967919340055, 'recall': 0.8887823585810163, 'f1-score': 0.8687910028116214, 'support': 1043.0} | {'precision': 0.9369943477779009, 'recall': 0.9745821325648415, 'f1-score': 0.9554186913775569, 'support': 17350.0} | {'precision': 0.9489507191700071, 'recall': 0.8724257533058747, 'f1-score': 0.9090806415179579, 'support': 9226.0} | 0.9372 | {'precision': 0.911874753449486, 'recall': 0.9119300814839107, 'f1-score': 0.9110967785690454, 'support': 27619.0} | {'precision': 0.93769096157449, 'recall': 0.9372171331329882, 'f1-score': 0.9366682830652019, 'support': 27619.0} |
| No log | 12.0 | 492 | 0.2533 | {'precision': 0.859925788497217, 'recall': 0.8887823585810163, 'f1-score': 0.8741159830268741, 'support': 1043.0} | {'precision': 0.947344555914527, 'recall': 0.963342939481268, 'f1-score': 0.9552767696396423, 'support': 17350.0} | {'precision': 0.9294223420993482, 'recall': 0.8963797962280512, 'f1-score': 0.9126020745972192, 'support': 9226.0} | 0.9382 | {'precision': 0.9122308955036974, 'recall': 0.9161683647634451, 'f1-score': 0.9139982757545786, 'support': 27619.0} | {'precision': 0.9380564528305959, 'recall': 0.9381585140664036, 'f1-score': 0.9379565394756785, 'support': 27619.0} |
| 0.1259 | 13.0 | 533 | 0.2612 | {'precision': 0.8656716417910447, 'recall': 0.8897411313518696, 'f1-score': 0.8775413711583925, 'support': 1043.0} | {'precision': 0.9471240942028986, 'recall': 0.9642651296829972, 'f1-score': 0.9556177528988404, 'support': 17350.0} | {'precision': 0.9312169312169312, 'recall': 0.8965965748970302, 'f1-score': 0.9135788834281297, 'support': 9226.0} | 0.9388 | {'precision': 0.9146708890702916, 'recall': 0.9168676119772989, 'f1-score': 0.9155793358284542, 'support': 27619.0} | {'precision': 0.9387344206602614, 'recall': 0.9388464462869763, 'f1-score': 0.9386263963728234, 'support': 27619.0} |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2

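None of the cards in this commit show how to run the span tagger they describe. A minimal inference sketch, assuming the checkpoint is published under the repo id `Theoreticallyhugo/longformer-spans` (an assumption based on the model name, not stated in the diff):

```python
# Minimal inference sketch for the longformer-spans token classifier described
# above. The repo id below is an assumption inferred from the model name.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo_id = "Theoreticallyhugo/longformer-spans"  # assumed hub location

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

text = "School uniforms should be mandatory because they reduce peer pressure."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# One B/I/O tag per sub-token, as in the evaluation tables above.
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id])
```
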
meta_data/README_s42_e14.md
ADDED
@@ -0,0 +1,93 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9430826604873457
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2608
- B: {'precision': 0.8667287977632805, 'recall': 0.8916586768935763, 'f1-score': 0.8790170132325141, 'support': 1043.0}
- I: {'precision': 0.9495959767192179, 'recall': 0.9685878962536023, 'f1-score': 0.9589979170827745, 'support': 17350.0}
- O: {'precision': 0.939315176856142, 'recall': 0.9009321482766096, 'f1-score': 0.9197233748271093, 'support': 9226.0}
- Accuracy: 0.9431
- Macro avg: {'precision': 0.9185466504462134, 'recall': 0.9203929071412628, 'f1-score': 0.9192461017141326, 'support': 27619.0}
- Weighted avg: {'precision': 0.9430323383837321, 'recall': 0.9430826604873457, 'f1-score': 0.9428580492538672, 'support': 27619.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2982 | {'precision': 0.8073929961089494, 'recall': 0.39789069990412274, 'f1-score': 0.5330764290301863, 'support': 1043.0} | {'precision': 0.881321540062435, 'recall': 0.9763112391930836, 'f1-score': 0.9263877495214657, 'support': 17350.0} | {'precision': 0.93570069752695, 'recall': 0.7996965098634294, 'f1-score': 0.8623692361638712, 'support': 9226.0} | 0.8955 | {'precision': 0.8748050778994448, 'recall': 0.724632816320212, 'f1-score': 0.773944471571841, 'support': 27619.0} | {'precision': 0.8966948206093096, 'recall': 0.8954705094319129, 'f1-score': 0.8901497064529414, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2000 | {'precision': 0.7943201376936316, 'recall': 0.8849472674976031, 'f1-score': 0.83718820861678, 'support': 1043.0} | {'precision': 0.9348308374930671, 'recall': 0.9714697406340058, 'f1-score': 0.9527981910684004, 'support': 17350.0} | {'precision': 0.9481428740951703, 'recall': 0.866030782570995, 'f1-score': 0.9052285730470742, 'support': 9226.0} | 0.9330 | {'precision': 0.8924312830939564, 'recall': 0.9074825969008679, 'f1-score': 0.8984049909107515, 'support': 27619.0} | {'precision': 0.9339714359868645, 'recall': 0.9329809189326188, 'f1-score': 0.9325418998354885, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1755 | {'precision': 0.8634192932187201, 'recall': 0.8667305848513902, 'f1-score': 0.8650717703349282, 'support': 1043.0} | {'precision': 0.9657913330193123, 'recall': 0.9454178674351585, 'f1-score': 0.9554960097862178, 'support': 17350.0} | {'precision': 0.9005006257822278, 'recall': 0.9358335139822241, 'f1-score': 0.9178271499946848, 'support': 9226.0} | 0.9392 | {'precision': 0.9099037506734201, 'recall': 0.9159939887562576, 'f1-score': 0.9127983100386103, 'support': 27619.0} | {'precision': 0.9401153091777049, 'recall': 0.939244722835729, 'f1-score': 0.9394981321590635, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1881 | {'precision': 0.8497164461247637, 'recall': 0.8619367209971237, 'f1-score': 0.8557829604950024, 'support': 1043.0} | {'precision': 0.9385960028948394, 'recall': 0.9717579250720461, 'f1-score': 0.9548891343131425, 'support': 17350.0} | {'precision': 0.9423121656199116, 'recall': 0.8781703880338174, 'f1-score': 0.9091113105924596, 'support': 9226.0} | 0.9363 | {'precision': 0.9102082048798383, 'recall': 0.9039550113676623, 'f1-score': 0.9065944684668681, 'support': 27619.0} | {'precision': 0.9364809349919582, 'recall': 0.9363481661175278, 'f1-score': 0.9358546312196439, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.2061 | {'precision': 0.8460846084608461, 'recall': 0.9012464046021093, 'f1-score': 0.872794800371402, 'support': 1043.0} | {'precision': 0.9360735652559273, 'recall': 0.9739481268011527, 'f1-score': 0.9546353313372126, 'support': 17350.0} | {'precision': 0.9493850520340587, 'recall': 0.8701495772815955, 'f1-score': 0.9080420766881574, 'support': 9226.0} | 0.9365 | {'precision': 0.9105144085836107, 'recall': 0.9151147028949524, 'f1-score': 0.9118240694655907, 'support': 27619.0} | {'precision': 0.9371218760230721, 'recall': 0.9365292009124153, 'f1-score': 0.9359804545788388, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1957 | {'precision': 0.844061650045331, 'recall': 0.8926174496644296, 'f1-score': 0.8676607642124884, 'support': 1043.0} | {'precision': 0.9341138786104821, 'recall': 0.9748703170028818, 'f1-score': 0.9540570268212201, 'support': 17350.0} | {'precision': 0.9499345938875015, 'recall': 0.865814003902016, 'f1-score': 0.9059257159058689, 'support': 9226.0} | 0.9353 | {'precision': 0.9093700408477714, 'recall': 0.9111005901897758, 'f1-score': 0.9092145023131923, 'support': 27619.0} | {'precision': 0.9359979962379243, 'recall': 0.9353343712661574, 'f1-score': 0.9347163274329027, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1979 | {'precision': 0.8534562211981567, 'recall': 0.887823585810163, 'f1-score': 0.8703007518796992, 'support': 1043.0} | {'precision': 0.9429021904386509, 'recall': 0.9651296829971182, 'f1-score': 0.9538864678572446, 'support': 17350.0} | {'precision': 0.9317378917378918, 'recall': 0.8861911987860395, 'f1-score': 0.908393978112327, 'support': 9226.0} | 0.9358 | {'precision': 0.9093654344582331, 'recall': 0.9130481558644402, 'f1-score': 0.9108603992830903, 'support': 27619.0} | {'precision': 0.9357949828738935, 'recall': 0.9358412686918426, 'f1-score': 0.9355333916361219, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.2078 | {'precision': 0.8595194085027726, 'recall': 0.8916586768935763, 'f1-score': 0.8752941176470588, 'support': 1043.0} | {'precision': 0.9472519993193806, 'recall': 0.9625936599423631, 'f1-score': 0.9548612103713445, 'support': 17350.0} | {'precision': 0.9278014821468673, 'recall': 0.8956210708866248, 'f1-score': 0.9114273108316787, 'support': 9226.0} | 0.9375 | {'precision': 0.9115242966563403, 'recall': 0.9166244692408547, 'f1-score': 0.9138608796166939, 'support': 27619.0} | {'precision': 0.9374415223413826, 'recall': 0.9375429957637857, 'f1-score': 0.9373475554647807, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2136 | {'precision': 0.8539944903581267, 'recall': 0.8916586768935763, 'f1-score': 0.8724202626641651, 'support': 1043.0} | {'precision': 0.9485901936360548, 'recall': 0.9656484149855907, 'f1-score': 0.9570432994401918, 'support': 17350.0} | {'precision': 0.934596301308074, 'recall': 0.8983308042488619, 'f1-score': 0.9161047861169449, 'support': 9226.0} | 0.9404 | {'precision': 0.9123936617674184, 'recall': 0.9185459653760096, 'f1-score': 0.9151894494071007, 'support': 27619.0} | {'precision': 0.9403432995002486, 'recall': 0.940367138564032, 'f1-score': 0.9401722848749406, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.2702 | {'precision': 0.8539944903581267, 'recall': 0.8916586768935763, 'f1-score': 0.8724202626641651, 'support': 1043.0} | {'precision': 0.9347357959251283, 'recall': 0.9757348703170029, 'f1-score': 0.9547954090409182, 'support': 17350.0} | {'precision': 0.9510630716237083, 'recall': 0.8678734012573163, 'f1-score': 0.9075658826863134, 'support': 9226.0} | 0.9365 | {'precision': 0.9132644526356545, 'recall': 0.9117556494892985, 'f1-score': 0.9115938514637989, 'support': 27619.0} | {'precision': 0.9371407441089408, 'recall': 0.9365292009124153, 'f1-score': 0.9359077995033341, 'support': 27619.0} |
| No log | 11.0 | 451 | 0.2582 | {'precision': 0.852157943067034, 'recall': 0.8897411313518696, 'f1-score': 0.8705440900562851, 'support': 1043.0} | {'precision': 0.9372609876406363, 'recall': 0.9746974063400576, 'f1-score': 0.9556126917752098, 'support': 17350.0} | {'precision': 0.9494521032166844, 'recall': 0.87340125731628, 'f1-score': 0.9098402303392988, 'support': 9226.0} | 0.9377 | {'precision': 0.9129570113081181, 'recall': 0.9126132650027358, 'f1-score': 0.9119990040569311, 'support': 27619.0} | {'precision': 0.9381195544538573, 'recall': 0.9376516166407184, 'f1-score': 0.9371100928107088, 'support': 27619.0} |
| No log | 12.0 | 492 | 0.2540 | {'precision': 0.8534798534798534, 'recall': 0.8935762224352828, 'f1-score': 0.8730679156908665, 'support': 1043.0} | {'precision': 0.944104607441495, 'recall': 0.9696253602305476, 'f1-score': 0.9566948164576758, 'support': 17350.0} | {'precision': 0.9408589802480478, 'recall': 0.8880338174723608, 'f1-score': 0.9136835061893611, 'support': 9226.0} | 0.9395 | {'precision': 0.9128144803897987, 'recall': 0.9170784667127304, 'f1-score': 0.9144820794459679, 'support': 27619.0} | {'precision': 0.9395980802367181, 'recall': 0.9394981715485716, 'f1-score': 0.9391690115394944, 'support': 27619.0} |
| 0.1251 | 13.0 | 533 | 0.2575 | {'precision': 0.8613406795224977, 'recall': 0.8993288590604027, 'f1-score': 0.8799249530956847, 'support': 1043.0} | {'precision': 0.9515141204491323, 'recall': 0.9670893371757925, 'f1-score': 0.9592385090327007, 'support': 17350.0} | {'precision': 0.9372751798561151, 'recall': 0.9037502709733363, 'f1-score': 0.9202074826178127, 'support': 9226.0} | 0.9434 | {'precision': 0.916709993275915, 'recall': 0.9233894890698439, 'f1-score': 0.9197903149153994, 'support': 27619.0} | {'precision': 0.9433523707551661, 'recall': 0.9433723161591658, 'f1-score': 0.9432051881830659, 'support': 27619.0} |
| 0.1251 | 14.0 | 574 | 0.2608 | {'precision': 0.8667287977632805, 'recall': 0.8916586768935763, 'f1-score': 0.8790170132325141, 'support': 1043.0} | {'precision': 0.9495959767192179, 'recall': 0.9685878962536023, 'f1-score': 0.9589979170827745, 'support': 17350.0} | {'precision': 0.939315176856142, 'recall': 0.9009321482766096, 'f1-score': 0.9197233748271093, 'support': 9226.0} | 0.9431 | {'precision': 0.9185466504462134, 'recall': 0.9203929071412628, 'f1-score': 0.9192461017141326, 'support': 27619.0} | {'precision': 0.9430323383837321, 'recall': 0.9430826604873457, 'f1-score': 0.9428580492538672, 'support': 27619.0} |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2

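The hyperparameter list in the card above maps onto a `transformers` `TrainingArguments` object roughly as in the sketch below; `output_dir` and the per-epoch evaluation setting are assumptions, the remaining values are taken from the card.

```python
# Sketch of the TrainingArguments implied by the hyperparameter block above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="longformer-spans",   # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=14,
    evaluation_strategy="epoch",     # assumed from the per-epoch results table
)
```
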
meta_data/README_s42_e15.md
ADDED
@@ -0,0 +1,94 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9412361055794923
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2837
- B: {'precision': 0.8616822429906542, 'recall': 0.8839884947267498, 'f1-score': 0.872692853762423, 'support': 1043.0}
- I: {'precision': 0.9506446299767138, 'recall': 0.9647262247838617, 'f1-score': 0.957633664216037, 'support': 17350.0}
- O: {'precision': 0.9322299261910088, 'recall': 0.9035334923043572, 'f1-score': 0.9176574196389256, 'support': 9226.0}
- Accuracy: 0.9412
- Macro avg: {'precision': 0.9148522663861257, 'recall': 0.9174160706049896, 'f1-score': 0.9159946458724618, 'support': 27619.0}
- Weighted avg: {'precision': 0.9411337198513156, 'recall': 0.9412361055794923, 'f1-score': 0.9410720907422853, 'support': 27619.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2927 | {'precision': 0.8069852941176471, 'recall': 0.42090124640460214, 'f1-score': 0.5532451165721487, 'support': 1043.0} | {'precision': 0.8852390417407678, 'recall': 0.9754466858789625, 'f1-score': 0.9281561917297357, 'support': 17350.0} | {'precision': 0.9349000879728541, 'recall': 0.8063082592672881, 'f1-score': 0.8658557876971424, 'support': 9226.0} | 0.8980 | {'precision': 0.8757081412770896, 'recall': 0.7342187305169509, 'f1-score': 0.7824190319996757, 'support': 27619.0} | {'precision': 0.8988729225389979, 'recall': 0.8980049965603389, 'f1-score': 0.8931869394398603, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.1958 | {'precision': 0.7986171132238548, 'recall': 0.8859060402684564, 'f1-score': 0.84, 'support': 1043.0} | {'precision': 0.9361619307123394, 'recall': 0.9703170028818444, 'f1-score': 0.9529335182407381, 'support': 17350.0} | {'precision': 0.9455124425050124, 'recall': 0.8689572946022112, 'f1-score': 0.905619881389438, 'support': 9226.0} | 0.9333 | {'precision': 0.8934304954804023, 'recall': 0.908393445917504, 'f1-score': 0.8995177998767253, 'support': 27619.0} | {'precision': 0.9340912032116592, 'recall': 0.933270574604439, 'f1-score': 0.932863809956036, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1754 | {'precision': 0.8552631578947368, 'recall': 0.87248322147651, 'f1-score': 0.8637873754152824, 'support': 1043.0} | {'precision': 0.966759166322253, 'recall': 0.9437463976945245, 'f1-score': 0.9551141832181294, 'support': 17350.0} | {'precision': 0.8988355167394468, 'recall': 0.9370257966616085, 'f1-score': 0.9175334323922734, 'support': 9226.0} | 0.9388 | {'precision': 0.9069526136521455, 'recall': 0.9177518052775477, 'f1-score': 0.9121449970085617, 'support': 27619.0} | {'precision': 0.9398590639347346, 'recall': 0.9388102393279989, 'f1-score': 0.9391116535227126, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1844 | {'precision': 0.861003861003861, 'recall': 0.8552253116011506, 'f1-score': 0.8581048581048581, 'support': 1043.0} | {'precision': 0.9428187016481668, 'recall': 0.9693371757925072, 'f1-score': 0.9558940547914061, 'support': 17350.0} | {'precision': 0.9376786735277302, 'recall': 0.8887925428137872, 'f1-score': 0.912581381114017, 'support': 9226.0} | 0.9381 | {'precision': 0.9138337453932527, 'recall': 0.904451676735815, 'f1-score': 0.908860098003427, 'support': 27619.0} | {'precision': 0.9380120548386821, 'recall': 0.9381223071074261, 'f1-score': 0.937732757876541, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.2030 | {'precision': 0.8463611859838275, 'recall': 0.9031639501438159, 'f1-score': 0.8738404452690166, 'support': 1043.0} | {'precision': 0.9367116741679169, 'recall': 0.9716426512968299, 'f1-score': 0.9538574702237813, 'support': 17350.0} | {'precision': 0.9452344576330943, 'recall': 0.8717754172989378, 'f1-score': 0.9070200169157033, 'support': 9226.0} | 0.9357 | {'precision': 0.9094357725949461, 'recall': 0.9155273395798611, 'f1-score': 0.9115726441361671, 'support': 27619.0} | {'precision': 0.9361466877844027, 'recall': 0.9356964408559325, 'f1-score': 0.9351898826482664, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1880 | {'precision': 0.8593012275731823, 'recall': 0.87248322147651, 'f1-score': 0.8658420551855375, 'support': 1043.0} | {'precision': 0.9416148372275452, 'recall': 0.9685878962536023, 'f1-score': 0.954910929908799, 'support': 17350.0} | {'precision': 0.9369907035464249, 'recall': 0.8848905267721656, 'f1-score': 0.9101956630804393, 'support': 9226.0} | 0.9370 | {'precision': 0.9126355894490508, 'recall': 0.9086538815007593, 'f1-score': 0.9103162160582586, 'support': 27619.0} | {'precision': 0.9369616871420418, 'recall': 0.9369998913791231, 'f1-score': 0.9366104162010322, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1950 | {'precision': 0.8525345622119815, 'recall': 0.8868648130393096, 'f1-score': 0.8693609022556391, 'support': 1043.0} | {'precision': 0.9470030477480528, 'recall': 0.9670893371757925, 'f1-score': 0.9569408007300102, 'support': 17350.0} | {'precision': 0.9362522686025408, 'recall': 0.8946455668762194, 'f1-score': 0.9149761667220929, 'support': 9226.0} | 0.9399 | {'precision': 0.9119299595208584, 'recall': 0.9161999056971072, 'f1-score': 0.9137592899025807, 'support': 27619.0} | {'precision': 0.9398443048967325, 'recall': 0.9398602411383468, 'f1-score': 0.939615352760648, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.2260 | {'precision': 0.8517495395948435, 'recall': 0.8868648130393096, 'f1-score': 0.868952559887271, 'support': 1043.0} | {'precision': 0.933457985041795, 'recall': 0.978328530259366, 'f1-score': 0.955366691056453, 'support': 17350.0} | {'precision': 0.9556833153671098, 'recall': 0.8648384998916107, 'f1-score': 0.9079943100995733, 'support': 9226.0} | 0.9370 | {'precision': 0.9136302800012494, 'recall': 0.9100106143967621, 'f1-score': 0.9107711870144325, 'support': 27619.0} | {'precision': 0.9377966283301177, 'recall': 0.9369636844201455, 'f1-score': 0.9362788339465783, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2217 | {'precision': 0.8499079189686924, 'recall': 0.8849472674976031, 'f1-score': 0.8670737435415689, 'support': 1043.0} | {'precision': 0.9531535648994516, 'recall': 0.9616138328530259, 'f1-score': 0.9573650083204224, 'support': 17350.0} | {'precision': 0.927455975191051, 'recall': 0.9076522870149577, 'f1-score': 0.9174472747192549, 'support': 9226.0} | 0.9407 | {'precision': 0.910172486353065, 'recall': 0.9180711291218623, 'f1-score': 0.9139620088604153, 'support': 27619.0} | {'precision': 0.9406704492415535, 'recall': 0.9406930011948297, 'f1-score': 0.9406209263707241, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.2663 | {'precision': 0.8574091332712023, 'recall': 0.8820709491850431, 'f1-score': 0.8695652173913044, 'support': 1043.0} | {'precision': 0.9361054205193511, 'recall': 0.9744668587896254, 'f1-score': 0.9549010194572307, 'support': 17350.0} | {'precision': 0.9483794932233353, 'recall': 0.8722089746368957, 'f1-score': 0.9087008074078257, 'support': 9226.0} | 0.9368 | {'precision': 0.9139646823379629, 'recall': 0.9095822608705214, 'f1-score': 0.9110556814187869, 'support': 27619.0} | {'precision': 0.937233642655096, 'recall': 0.9368188565842355, 'f1-score': 0.9362454418504176, 'support': 27619.0} |
| No log | 11.0 | 451 | 0.2752 | {'precision': 0.8570110701107011, 'recall': 0.8906999041227229, 'f1-score': 0.8735307945463094, 'support': 1043.0} | {'precision': 0.9348246340789838, 'recall': 0.9755043227665706, 'f1-score': 0.954731349598082, 'support': 17350.0} | {'precision': 0.9505338078291815, 'recall': 0.8685237372642532, 'f1-score': 0.9076801087449027, 'support': 9226.0} | 0.9366 | {'precision': 0.9141231706729555, 'recall': 0.9115759880511822, 'f1-score': 0.911980750963098, 'support': 27619.0} | {'precision': 0.9371336709666482, 'recall': 0.9365654078713929, 'f1-score': 0.9359476526130199, 'support': 27619.0} |
| No log | 12.0 | 492 | 0.2662 | {'precision': 0.8555657773689053, 'recall': 0.8916586768935763, 'f1-score': 0.8732394366197183, 'support': 1043.0} | {'precision': 0.9461304151624549, 'recall': 0.9667435158501441, 'f1-score': 0.9563259022749302, 'support': 17350.0} | {'precision': 0.9358246251703771, 'recall': 0.8930197268588771, 'f1-score': 0.9139212423738213, 'support': 9226.0} | 0.9393 | {'precision': 0.9125069392339125, 'recall': 0.9171406398675325, 'f1-score': 0.9144955270894899, 'support': 27619.0} | {'precision': 0.9392677432450942, 'recall': 0.9392809297947066, 'f1-score': 0.9390231550383894, 'support': 27619.0} |
| 0.1232 | 13.0 | 533 | 0.2681 | {'precision': 0.8646895273401297, 'recall': 0.8945349952061361, 'f1-score': 0.8793590951932139, 'support': 1043.0} | {'precision': 0.9548364966841985, 'recall': 0.9626512968299712, 'f1-score': 0.9587279719878308, 'support': 17350.0} | {'precision': 0.9293766578249337, 'recall': 0.9114459137220897, 'f1-score': 0.9203239575352961, 'support': 9226.0} | 0.9430 | {'precision': 0.9163008939497539, 'recall': 0.9228774019193989, 'f1-score': 0.9194703415721136, 'support': 27619.0} | {'precision': 0.9429274571700438, 'recall': 0.9429740396104132, 'f1-score': 0.9429020124731535, 'support': 27619.0} |
| 0.1232 | 14.0 | 574 | 0.2835 | {'precision': 0.8643592142188962, 'recall': 0.8859060402684564, 'f1-score': 0.875, 'support': 1043.0} | {'precision': 0.9461283248045886, 'recall': 0.9697406340057637, 'f1-score': 0.9577889733299177, 'support': 17350.0} | {'precision': 0.9405726018022128, 'recall': 0.8937784522003035, 'f1-score': 0.9165786694825766, 'support': 9226.0} | 0.9412 | {'precision': 0.9170200469418992, 'recall': 0.9164750421581745, 'f1-score': 0.9164558809374981, 'support': 27619.0} | {'precision': 0.9411845439739722, 'recall': 0.9411998986205149, 'f1-score': 0.9408964297013043, 'support': 27619.0} |
| 0.1232 | 15.0 | 615 | 0.2837 | {'precision': 0.8616822429906542, 'recall': 0.8839884947267498, 'f1-score': 0.872692853762423, 'support': 1043.0} | {'precision': 0.9506446299767138, 'recall': 0.9647262247838617, 'f1-score': 0.957633664216037, 'support': 17350.0} | {'precision': 0.9322299261910088, 'recall': 0.9035334923043572, 'f1-score': 0.9176574196389256, 'support': 9226.0} | 0.9412 | {'precision': 0.9148522663861257, 'recall': 0.9174160706049896, 'f1-score': 0.9159946458724618, 'support': 27619.0} | {'precision': 0.9411337198513156, 'recall': 0.9412361055794923, 'f1-score': 0.9410720907422853, 'support': 27619.0} |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2

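The per-class B/I/O dictionaries reported throughout these tables have the shape produced by scikit-learn's `classification_report` with `output_dict=True`; a toy sketch (the labels here are made up for illustration):

```python
# Toy example of producing per-class and averaged precision/recall/f1 dicts
# of the same shape as the ones reported in the tables above.
from sklearn.metrics import classification_report

y_true = ["O", "B", "I", "I", "O", "B", "I", "O"]
y_pred = ["O", "B", "I", "O", "O", "B", "I", "I"]

report = classification_report(y_true, y_pred, output_dict=True)
print(report["B"])             # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
print(report["macro avg"])     # unweighted mean over B, I and O
print(report["weighted avg"])  # mean weighted by class support
```
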
meta_data/README_s42_e4.md
CHANGED
@@ -1,5 +1,4 @@
---
-license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
@@ -17,12 +16,12 @@ model-index:
      name: essays_su_g
      type: essays_su_g
      config: spans
-      split:
+      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
-      value: 0.
+      value: 0.9313516057786306
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,13 +31,13 @@ should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
-- Loss: 0.
-- B: {'precision': 0.
-- I: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.1886
+- B: {'precision': 0.8005115089514067, 'recall': 0.900287631831256, 'f1-score': 0.8474729241877257, 'support': 1043.0}
+- I: {'precision': 0.9321724709784411, 'recall': 0.9719308357348703, 'f1-score': 0.9516365688487585, 'support': 17350.0}
+- O: {'precision': 0.947941598851125, 'recall': 0.8585519184912205, 'f1-score': 0.9010351495848026, 'support': 9226.0}
+- Accuracy: 0.9314
+- Macro avg: {'precision': 0.8935418595936575, 'recall': 0.9102567953524489, 'f1-score': 0.9000482142070956, 'support': 27619.0}
+- Weighted avg: {'precision': 0.9324680497596853, 'recall': 0.9313516057786306, 'f1-score': 0.9307997762237281, 'support': 27619.0}

## Model description

@@ -69,10 +68,10 @@ The following hyperparameters were used during training:

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
| No log | 1.0 | 41 | 0.3465 | {'precision': 0.7459016393442623, 'recall': 0.174496644295302, 'f1-score': 0.2828282828282829, 'support': 1043.0} | {'precision': 0.8462454712392674, 'recall': 0.9827665706051874, 'f1-score': 0.90941091762447, 'support': 17350.0} | {'precision': 0.9458898422363686, 'recall': 0.7408411012356384, 'f1-score': 0.8309020179917336, 'support': 9226.0} | 0.8714 | {'precision': 0.8460123176066329, 'recall': 0.6327014387120425, 'f1-score': 0.6743804061481621, 'support': 27619.0} | {'precision': 0.8757418451178571, 'recall': 0.8714290886708426, 'f1-score': 0.8595232027867116, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2059 | {'precision': 0.7637130801687764, 'recall': 0.8676893576222435, 'f1-score': 0.8123877917414721, 'support': 1043.0} | {'precision': 0.9387513394619593, 'recall': 0.9593659942363112, 'f1-score': 0.9489467232975115, 'support': 17350.0} | {'precision': 0.9291049063541308, 'recall': 0.8764361586819857, 'f1-score': 0.9020023425734843, 'support': 9226.0} | 0.9282 | {'precision': 0.8771897753282888, 'recall': 0.9011638368468469, 'f1-score': 0.8877789525374893, 'support': 27619.0} | {'precision': 0.9289188728159687, 'recall': 0.9282016003475868, 'f1-score': 0.9281081765661735, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1926 | {'precision': 0.7828618968386023, 'recall': 0.9022051773729626, 'f1-score': 0.8383073496659242, 'support': 1043.0} | {'precision': 0.9354406344242153, 'recall': 0.9654178674351584, 'f1-score': 0.950192874971636, 'support': 17350.0} | {'precision': 0.9381976266008695, 'recall': 0.8654888358985476, 'f1-score': 0.9003777414444383, 'support': 9226.0} | 0.9296 | {'precision': 0.8855000526212291, 'recall': 0.9110372935688895, 'f1-score': 0.896292655360666, 'support': 27619.0} | {'precision': 0.9305996331758, 'recall': 0.9296498787066875, 'f1-score': 0.9293271294770207, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1886 | {'precision': 0.8005115089514067, 'recall': 0.900287631831256, 'f1-score': 0.8474729241877257, 'support': 1043.0} | {'precision': 0.9321724709784411, 'recall': 0.9719308357348703, 'f1-score': 0.9516365688487585, 'support': 17350.0} | {'precision': 0.947941598851125, 'recall': 0.8585519184912205, 'f1-score': 0.9010351495848026, 'support': 9226.0} | 0.9314 | {'precision': 0.8935418595936575, 'recall': 0.9102567953524489, 'f1-score': 0.9000482142070956, 'support': 27619.0} | {'precision': 0.9324680497596853, 'recall': 0.9313516057786306, 'f1-score': 0.9307997762237281, 'support': 27619.0} |


### Framework versions

meta_data/README_s42_e5.md
CHANGED
@@ -1,5 +1,4 @@
---
-license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
@@ -17,12 +16,12 @@ model-index:
      name: essays_su_g
      type: essays_su_g
      config: spans
-      split:
+      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
-      value: 0.
+      value: 0.935805061732865
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,13 +31,13 @@ should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
-- Loss: 0.
-- B: {'precision': 0.
-- I: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.1821
+- B: {'precision': 0.8143972246313964, 'recall': 0.900287631831256, 'f1-score': 0.8551912568306012, 'support': 1043.0}
+- I: {'precision': 0.9392924896774913, 'recall': 0.9702593659942363, 'f1-score': 0.9545248355636199, 'support': 17350.0}
+- O: {'precision': 0.944873595505618, 'recall': 0.8750270973336224, 'f1-score': 0.9086100168823861, 'support': 9226.0}
+- Accuracy: 0.9358
+- Macro avg: {'precision': 0.8995211032715019, 'recall': 0.9151913650530382, 'f1-score': 0.9061087030922024, 'support': 27619.0}
+- Weighted avg: {'precision': 0.936440305345228, 'recall': 0.935805061732865, 'f1-score': 0.9354359822462803, 'support': 27619.0}

## Model description

@@ -67,13 +66,13 @@ The following hyperparameters were used during training:

### Training results

-| Training Loss | Epoch | Step | Validation Loss | B
-|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
-| No log | 5.0 | 205 | 0.
+| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.3420 | {'precision': 0.7641196013289037, 'recall': 0.22051773729626079, 'f1-score': 0.3422619047619047, 'support': 1043.0} | {'precision': 0.8498853325356466, 'recall': 0.9825360230547551, 'f1-score': 0.9114093242087253, 'support': 17350.0} | {'precision': 0.9462809917355371, 'recall': 0.7446347279427704, 'f1-score': 0.8334344292126653, 'support': 9226.0} | 0.8743 | {'precision': 0.8534286418666958, 'recall': 0.6492294960979287, 'f1-score': 0.6957018860610984, 'support': 27619.0} | {'precision': 0.8788470144984097, 'recall': 0.8742894384300662, 'f1-score': 0.8638689664942287, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2028 | {'precision': 0.7734241908006815, 'recall': 0.8705656759348035, 'f1-score': 0.8191249436175011, 'support': 1043.0} | {'precision': 0.9413330313154765, 'recall': 0.9580979827089338, 'f1-score': 0.9496415207518066, 'support': 17350.0} | {'precision': 0.9263601183701343, 'recall': 0.8821807934099285, 'f1-score': 0.9037308461025984, 'support': 9226.0} | 0.9294 | {'precision': 0.8803724468287641, 'recall': 0.903614817351222, 'f1-score': 0.8908324368239686, 'support': 27619.0} | {'precision': 0.9299905129226795, 'recall': 0.9294326369528223, 'f1-score': 0.9293764613990178, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.2004 | {'precision': 0.7942905121746432, 'recall': 0.9069990412272292, 'f1-score': 0.8469113697403761, 'support': 1043.0} | {'precision': 0.9219560115701577, 'recall': 0.9736599423631124, 'f1-score': 0.9471028508956354, 'support': 17350.0} | {'precision': 0.9505243676742752, 'recall': 0.835031432907002, 'f1-score': 0.8890427557555824, 'support': 9226.0} | 0.9248 | {'precision': 0.8889236304730254, 'recall': 0.9052301388324479, 'f1-score': 0.8943523254638647, 'support': 27619.0} | {'precision': 0.9266779977951141, 'recall': 0.9248343531626778, 'f1-score': 0.9239245260972333, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1732 | {'precision': 0.8319928507596068, 'recall': 0.8926174496644296, 'f1-score': 0.8612395929694727, 'support': 1043.0} | {'precision': 0.9531670965892806, 'recall': 0.9583861671469741, 'f1-score': 0.9557695071130909, 'support': 17350.0} | {'precision': 0.9240198785201547, 'recall': 0.9068935616735313, 'f1-score': 0.9153766205349817, 'support': 9226.0} | 0.9387 | {'precision': 0.9030599419563474, 'recall': 0.9192990594949784, 'f1-score': 0.9107952402058485, 'support': 27619.0} | {'precision': 0.9388545953290572, 'recall': 0.9387016184510663, 'f1-score': 0.9387066347418453, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.1821 | {'precision': 0.8143972246313964, 'recall': 0.900287631831256, 'f1-score': 0.8551912568306012, 'support': 1043.0} | {'precision': 0.9392924896774913, 'recall': 0.9702593659942363, 'f1-score': 0.9545248355636199, 'support': 17350.0} | {'precision': 0.944873595505618, 'recall': 0.8750270973336224, 'f1-score': 0.9086100168823861, 'support': 9226.0} | 0.9358 | {'precision': 0.8995211032715019, 'recall': 0.9151913650530382, 'f1-score': 0.9061087030922024, 'support': 27619.0} | {'precision': 0.936440305345228, 'recall': 0.935805061732865, 'f1-score': 0.9354359822462803, 'support': 27619.0} |


### Framework versions

meta_data/README_s42_e6.md
CHANGED
@@ -1,5 +1,4 @@
---
-license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
@@ -17,12 +16,12 @@ model-index:
      name: essays_su_g
      type: essays_su_g
      config: spans
-      split:
+      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
-      value: 0.
+      value: 0.9382671349433361
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,13 +31,13 @@ should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
-- Loss: 0.
-- B: {'precision': 0.
-- I: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.1775
+- B: {'precision': 0.8277385159010601, 'recall': 0.8983700862895494, 'f1-score': 0.8616091954022989, 'support': 1043.0}
+- I: {'precision': 0.9442383361439011, 'recall': 0.9681844380403458, 'f1-score': 0.9560614684120662, 'support': 17350.0}
+- O: {'precision': 0.9404392319190525, 'recall': 0.886516366789508, 'f1-score': 0.9126820286782348, 'support': 9226.0}
+- Accuracy: 0.9383
+- Macro avg: {'precision': 0.9041386946546712, 'recall': 0.9176902970398011, 'f1-score': 0.9101175641641999, 'support': 27619.0}
+- Weighted avg: {'precision': 0.9385697801465175, 'recall': 0.9382671349433361, 'f1-score': 0.9380038837155341, 'support': 27619.0}

## Model description

@@ -67,14 +66,14 @@ The following hyperparameters were used during training:

### Training results

-| Training Loss | Epoch | Step | Validation Loss | B
-|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
-| No log | 5.0 | 205 | 0.
-| No log | 6.0 | 246 | 0.
+| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2767 | {'precision': 0.8252032520325203, 'recall': 0.38926174496644295, 'f1-score': 0.5289902280130293, 'support': 1043.0} | {'precision': 0.8942471288813271, 'recall': 0.9693948126801153, 'f1-score': 0.9303058797499861, 'support': 17350.0} | {'precision': 0.9191008534679649, 'recall': 0.8287448515066117, 'f1-score': 0.8715873468224564, 'support': 9226.0} | 0.9005 | {'precision': 0.8795170781272708, 'recall': 0.7291338030510567, 'f1-score': 0.7769611515284907, 'support': 27619.0} | {'precision': 0.8999420381641764, 'recall': 0.9005032767297875, 'f1-score': 0.8955359963526497, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2284 | {'precision': 0.7671568627450981, 'recall': 0.900287631831256, 'f1-score': 0.8284075871195412, 'support': 1043.0} | {'precision': 0.9168915272531031, 'recall': 0.9792507204610951, 'f1-score': 0.947045707915273, 'support': 17350.0} | {'precision': 0.9646535282898919, 'recall': 0.8223498807717321, 'f1-score': 0.8878357030015798, 'support': 9226.0} | 0.9239 | {'precision': 0.8829006394293644, 'recall': 0.900629411021361, 'f1-score': 0.8877629993454645, 'support': 27619.0} | {'precision': 0.9271916455225395, 'recall': 0.9238567652702849, 'f1-score': 0.9227866447586169, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1770 | {'precision': 0.8351648351648352, 'recall': 0.8744007670182167, 'f1-score': 0.8543325526932084, 'support': 1043.0} | {'precision': 0.9442345644206371, 'recall': 0.9651873198847263, 'f1-score': 0.954595981188542, 'support': 17350.0} | {'precision': 0.9328935395814377, 'recall': 0.8890093214827661, 'f1-score': 0.9104229104229105, 'support': 9226.0} | 0.9363 | {'precision': 0.90409764638897, 'recall': 0.909532469461903, 'f1-score': 0.906450481434887, 'support': 27619.0} | {'precision': 0.9363272534108158, 'recall': 0.9363119591585503, 'f1-score': 0.9360538360419275, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1804 | {'precision': 0.8234265734265734, 'recall': 0.9031639501438159, 'f1-score': 0.8614540466392319, 'support': 1043.0} | {'precision': 0.9435452033162258, 'recall': 0.9642651296829972, 'f1-score': 0.9537926512927226, 'support': 17350.0} | {'precision': 0.9335544373284538, 'recall': 0.8847821374376761, 'f1-score': 0.9085141903171953, 'support': 9226.0} | 0.9354 | {'precision': 0.9001754046904177, 'recall': 0.917403739088163, 'f1-score': 0.9079202960830499, 'support': 27619.0} | {'precision': 0.9356716909523425, 'recall': 0.9354067851841124, 'f1-score': 0.9351805275513196, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.1774 | {'precision': 0.8283450704225352, 'recall': 0.9022051773729626, 'f1-score': 0.8636989444699403, 'support': 1043.0} | {'precision': 0.9497974784642592, 'recall': 0.9595965417867435, 'f1-score': 0.9546718655924767, 'support': 17350.0} | {'precision': 0.9269600178691088, 'recall': 0.8996314762627358, 'f1-score': 0.9130913091309132, 'support': 9226.0} | 0.9374 | {'precision': 0.9017008555853011, 'recall': 0.9204777318074807, 'f1-score': 0.9104873730644435, 'support': 27619.0} | {'precision': 0.9375822182072485, 'recall': 0.9373981679278758, 'f1-score': 0.9373465833358712, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1775 | {'precision': 0.8277385159010601, 'recall': 0.8983700862895494, 'f1-score': 0.8616091954022989, 'support': 1043.0} | {'precision': 0.9442383361439011, 'recall': 0.9681844380403458, 'f1-score': 0.9560614684120662, 'support': 17350.0} | {'precision': 0.9404392319190525, 'recall': 0.886516366789508, 'f1-score': 0.9126820286782348, 'support': 9226.0} | 0.9383 | {'precision': 0.9041386946546712, 'recall': 0.9176902970398011, 'f1-score': 0.9101175641641999, 'support': 27619.0} | {'precision': 0.9385697801465175, 'recall': 0.9382671349433361, 'f1-score': 0.9380038837155341, 'support': 27619.0} |


### Framework versions

meta_data/README_s42_e7.md
CHANGED
@@ -17,12 +17,12 @@ model-index:
      name: essays_su_g
      type: essays_su_g
      config: spans
-      split:
+      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
-      value: 0.
+      value: 0.9382309279843586
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,13 +32,13 @@ should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
-- Loss: 0.
-- B: {'precision': 0.
-- I: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.1841
+- B: {'precision': 0.8358744394618834, 'recall': 0.8935762224352828, 'f1-score': 0.8637627432808155, 'support': 1043.0}
+- I: {'precision': 0.9433073515392811, 'recall': 0.9695677233429395, 'f1-score': 0.9562572833470712, 'support': 17350.0}
+- O: {'precision': 0.9409526006227655, 'recall': 0.8843485800997182, 'f1-score': 0.9117729228362295, 'support': 9226.0}
+- Accuracy: 0.9382
+- Macro avg: {'precision': 0.9067114638746433, 'recall': 0.9158308419593135, 'f1-score': 0.9105976498213719, 'support': 27619.0}
+- Weighted avg: {'precision': 0.9384636765600096, 'recall': 0.9382309279843586, 'f1-score': 0.9379045364930165, 'support': 27619.0}

## Model description

@@ -67,15 +67,15 @@ The following hyperparameters were used during training:

### Training results

-| Training Loss | Epoch | Step | Validation Loss | B
-|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
-| No log | 5.0 | 205 | 0.
-| No log | 6.0 | 246 | 0.
-| No log | 7.0 | 287 | 0.
+| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:---:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.2970 | {'precision': 0.8171557562076749, 'recall': 0.34707574304889743, 'f1-score': 0.48721399730820997, 'support': 1043.0} | {'precision': 0.8802934137966912, 'recall': 0.9752737752161383, 'f1-score': 0.9253527288636114, 'support': 17350.0} | {'precision': 0.9304752325873774, 'recall': 0.8021894645566876, 'f1-score': 0.861583236321304, 'support': 9226.0} | 0.8937 | {'precision': 0.8759748008639145, 'recall': 0.7081796609405745, 'f1-score': 0.7580499874977084, 'support': 27619.0} | {'precision': 0.8946720981551954, 'recall': 0.893732575400992, 'f1-score': 0.8875050140583102, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2228 | {'precision': 0.7610474631751227, 'recall': 0.8916586768935763, 'f1-score': 0.8211920529801324, 'support': 1043.0} | {'precision': 0.9182955222264335, 'recall': 0.9775216138328531, 'f1-score': 0.946983444540607, 'support': 17350.0} | {'precision': 0.9614026236125126, 'recall': 0.8261435074788641, 'f1-score': 0.8886557071237029, 'support': 9226.0} | 0.9237 | {'precision': 0.8802485363380229, 'recall': 0.898441266068431, 'f1-score': 0.8856104015481474, 'support': 27619.0} | {'precision': 0.9267569578974372, 'recall': 0.9237119374343749, 'f1-score': 0.9227489636830115, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1807 | {'precision': 0.845437616387337, 'recall': 0.8705656759348035, 'f1-score': 0.8578176665092113, 'support': 1043.0} | {'precision': 0.9587634878973461, 'recall': 0.9474351585014409, 'f1-score': 0.9530656616901, 'support': 17350.0} | {'precision': 0.9035106382978724, 'recall': 0.9205506178192066, 'f1-score': 0.9119510361859765, 'support': 9226.0} | 0.9356 | {'precision': 0.9025705808608517, 'recall': 0.9128504840851503, 'f1-score': 0.907611454795096, 'support': 27619.0} | {'precision': 0.9360269053132667, 'recall': 0.9355516130200224, 'f1-score': 0.9357345782375959, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.2177 | {'precision': 0.8223028105167725, 'recall': 0.8696069031639502, 'f1-score': 0.8452935694315005, 'support': 1043.0} | {'precision': 0.9182645433864154, 'recall': 0.9771181556195966, 'f1-score': 0.9467776164414164, 'support': 17350.0} | {'precision': 0.9526943133846536, 'recall': 0.8316713635378279, 'f1-score': 0.8880787037037038, 'support': 9226.0} | 0.9245 | {'precision': 0.8977538890959472, 'recall': 0.8927988074404581, 'f1-score': 0.8933832965255403, 'support': 27619.0} | {'precision': 0.9261417645247878, 'recall': 0.9244722835729027, 'f1-score': 0.9233370852871574, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.1864 | {'precision': 0.8298059964726632, 'recall': 0.9022051773729626, 'f1-score': 0.8644924207625172, 'support': 1043.0} | {'precision': 0.9426901899089786, 'recall': 0.9670317002881844, 'f1-score': 0.9547058154091271, 'support': 17350.0} | {'precision': 0.9384137216530448, 'recall': 0.8835898547582918, 'f1-score': 0.9101769664489477, 'support': 9226.0} | 0.9367 | {'precision': 0.9036366360115622, 'recall': 0.9176089108064795, 'f1-score': 0.909791734206864, 'support': 27619.0} | {'precision': 0.9369987126692769, 'recall': 0.9367102357073029, 'f1-score': 0.9364243522452534, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1768 | {'precision': 0.8413417951042611, 'recall': 0.8897411313518696, 'f1-score': 0.8648648648648648, 'support': 1043.0} | {'precision': 0.9434724091520862, 'recall': 0.9696829971181556, 'f1-score': 0.9563981581490535, 'support': 17350.0} | {'precision': 0.9409258406264395, 'recall': 0.885649252113592, 'f1-score': 0.9124511446119487, 'support': 9226.0} | 0.9386 | {'precision': 0.908580014960929, 'recall': 0.9150244601945391, 'f1-score': 0.9112380558752889, 'support': 27619.0} | {'precision': 0.9387648936131638, 'recall': 0.9385929975741337, 'f1-score': 0.938261209968861, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1841 | {'precision': 0.8358744394618834, 'recall': 0.8935762224352828, 'f1-score': 0.8637627432808155, 'support': 1043.0} | {'precision': 0.9433073515392811, 'recall': 0.9695677233429395, 'f1-score': 0.9562572833470712, 'support': 17350.0} | {'precision': 0.9409526006227655, 'recall': 0.8843485800997182, 'f1-score': 0.9117729228362295, 'support': 9226.0} | 0.9382 | {'precision': 0.9067114638746433, 'recall': 0.9158308419593135, 'f1-score': 0.9105976498213719, 'support': 27619.0} | {'precision': 0.9384636765600096, 'recall': 0.9382309279843586, 'f1-score': 0.9379045364930165, 'support': 27619.0} |
### Framework versions

meta_data/README_s42_e8.md
ADDED
@@ -0,0 +1,87 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9362395452405953
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1974
- B: {'precision': 0.8404351767905711, 'recall': 0.8887823585810163, 'f1-score': 0.863932898415657, 'support': 1043.0}
- I: {'precision': 0.9420745397395599, 'recall': 0.9673775216138328, 'f1-score': 0.954558380253654, 'support': 17350.0}
- O: {'precision': 0.9364367816091954, 'recall': 0.8830479080858443, 'f1-score': 0.9089590538882071, 'support': 9226.0}
- Accuracy: 0.9362
- Macro avg: {'precision': 0.9063154993797754, 'recall': 0.9130692627602311, 'f1-score': 0.9091501108525061, 'support': 27619.0}
- Weighted avg: {'precision': 0.9363529780585962, 'recall': 0.9362395452405953, 'f1-score': 0.9359037670307043, 'support': 27619.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
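
The hyperparameters above map directly onto `transformers.TrainingArguments`; a rough sketch of an equivalent Trainer setup is given below. The tokenized train/eval splits are assumed to exist and are not shown in this card, so they are passed in as arguments.

```python
# Sketch only: reproduces the listed hyperparameters with the Trainer API.
from transformers import (
    AutoModelForTokenClassification,
    Trainer,
    TrainingArguments,
)


def build_trainer(tokenized_train, tokenized_eval):
    """Assemble a Trainer matching the hyperparameters listed above."""
    model = AutoModelForTokenClassification.from_pretrained(
        "allenai/longformer-base-4096",
        num_labels=3,  # B / I / O span tags
    )
    args = TrainingArguments(
        output_dir="longformer-spans",
        learning_rate=2e-5,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        seed=42,
        lr_scheduler_type="linear",
        num_train_epochs=8,
        evaluation_strategy="epoch",  # the card reports validation metrics once per epoch
    )
    return Trainer(
        model=model,
        args=args,
        train_dataset=tokenized_train,
        eval_dataset=tokenized_eval,
    )


# trainer = build_trainer(tokenized_train, tokenized_eval)
# trainer.train()
```

The Adam settings listed above (betas 0.9/0.999, epsilon 1e-08) correspond to the Trainer's default optimizer configuration, so they do not need to be passed explicitly.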

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 41 | 0.2955 | {'precision': 0.7986798679867987, 'recall': 0.46404602109300097, 'f1-score': 0.5870224378411159, 'support': 1043.0} | {'precision': 0.8854450261780105, 'recall': 0.9747550432276657, 'f1-score': 0.9279561042524005, 'support': 17350.0} | {'precision': 0.9346644761784405, 'recall': 0.8016475178842402, 'f1-score': 0.8630608553591224, 'support': 9226.0} | 0.8976 | {'precision': 0.8729297901144165, 'recall': 0.7468161940683024, 'f1-score': 0.7926797991508797, 'support': 27619.0} | {'precision': 0.8986099700829504, 'recall': 0.8976429269705637, 'f1-score': 0.893403174010308, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2031 | {'precision': 0.784197111299915, 'recall': 0.8849472674976031, 'f1-score': 0.8315315315315316, 'support': 1043.0} | {'precision': 0.9307149161518093, 'recall': 0.9724495677233429, 'f1-score': 0.9511246406223575, 'support': 17350.0} | {'precision': 0.9504450324753428, 'recall': 0.8564925211359202, 'f1-score': 0.9010262257696694, 'support': 9226.0} | 0.9304 | {'precision': 0.8884523533090224, 'recall': 0.9046297854522888, 'f1-score': 0.8945607993078529, 'support': 27619.0} | {'precision': 0.9317725932125427, 'recall': 0.9304102248452153, 'f1-score': 0.9298731982018269, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1754 | {'precision': 0.8527204502814258, 'recall': 0.8715244487056567, 'f1-score': 0.8620199146514935, 'support': 1043.0} | {'precision': 0.9616262064931267, 'recall': 0.947492795389049, 'f1-score': 0.9545071853679779, 'support': 17350.0} | {'precision': 0.9036794248255445, 'recall': 0.9264036418816388, 'f1-score': 0.9149004495825305, 'support': 9226.0} | 0.9376 | {'precision': 0.906008693866699, 'recall': 0.9151402953254482, 'f1-score': 0.9104758498673339, 'support': 27619.0} | {'precision': 0.9381566488916958, 'recall': 0.9375792027227633, 'f1-score': 0.9377840611522629, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.2248 | {'precision': 0.8219800181653043, 'recall': 0.8676893576222435, 'f1-score': 0.8442164179104478, 'support': 1043.0} | {'precision': 0.9191395059726502, 'recall': 0.9801152737752161, 'f1-score': 0.9486485732615547, 'support': 17350.0} | {'precision': 0.9589622053137083, 'recall': 0.8332972035551701, 'f1-score': 0.8917241779272748, 'support': 9226.0} | 0.9268 | {'precision': 0.9000272431505542, 'recall': 0.8937006116508766, 'f1-score': 0.8948630563664257, 'support': 27619.0} | {'precision': 0.9287729785218931, 'recall': 0.9268257359064412, 'f1-score': 0.9256894795439954, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.1931 | {'precision': 0.848987108655617, 'recall': 0.8839884947267498, 'f1-score': 0.8661343353687178, 'support': 1043.0} | {'precision': 0.9373124374791597, 'recall': 0.9721037463976945, 'f1-score': 0.9543911272068809, 'support': 17350.0} | {'precision': 0.9444899871179295, 'recall': 0.8741599826577064, 'f1-score': 0.9079650999155643, 'support': 9226.0} | 0.9361 | {'precision': 0.910263177750902, 'recall': 0.9100840745940503, 'f1-score': 0.909496854163721, 'support': 27619.0} | {'precision': 0.9363745597502171, 'recall': 0.9360585104457076, 'f1-score': 0.935549809212859, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1742 | {'precision': 0.8382222222222222, 'recall': 0.9041227229146692, 'f1-score': 0.8699261992619925, 'support': 1043.0} | {'precision': 0.9481431159420289, 'recall': 0.9653025936599423, 'f1-score': 0.956645913063346, 'support': 17350.0} | {'precision': 0.9353340883352208, 'recall': 0.8951875135486668, 'f1-score': 0.9148205582631811, 'support': 9226.0} | 0.9396 | {'precision': 0.9072331421664908, 'recall': 0.9215376100410927, 'f1-score': 0.9137975568628399, 'support': 27619.0} | {'precision': 0.9397132821011885, 'recall': 0.9395705854665267, 'f1-score': 0.9393994745651696, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1985 | {'precision': 0.8421052631578947, 'recall': 0.8897411313518696, 'f1-score': 0.8652680652680652, 'support': 1043.0} | {'precision': 0.9399821009061416, 'recall': 0.9685878962536023, 'f1-score': 0.9540706256386964, 'support': 17350.0} | {'precision': 0.9385345526102559, 'recall': 0.8788207240407544, 'f1-score': 0.9076966134900645, 'support': 9226.0} | 0.9356 | {'precision': 0.9068739722247642, 'recall': 0.9123832505487423, 'f1-score': 0.9090117681322755, 'support': 27619.0} | {'precision': 0.935802347028403, 'recall': 0.9356240269379775, 'f1-score': 0.9352260727385245, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.1974 | {'precision': 0.8404351767905711, 'recall': 0.8887823585810163, 'f1-score': 0.863932898415657, 'support': 1043.0} | {'precision': 0.9420745397395599, 'recall': 0.9673775216138328, 'f1-score': 0.954558380253654, 'support': 17350.0} | {'precision': 0.9364367816091954, 'recall': 0.8830479080858443, 'f1-score': 0.9089590538882071, 'support': 9226.0} | 0.9362 | {'precision': 0.9063154993797754, 'recall': 0.9130692627602311, 'f1-score': 0.9091501108525061, 'support': 27619.0} | {'precision': 0.9363529780585962, 'recall': 0.9362395452405953, 'f1-score': 0.9359037670307043, 'support': 27619.0} |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2

meta_data/README_s42_e9.md
ADDED
@@ -0,0 +1,88 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9389550671639089
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2009
- B: {'precision': 0.8471337579617835, 'recall': 0.8926174496644296, 'f1-score': 0.8692810457516339, 'support': 1043.0}
- I: {'precision': 0.9459794744558475, 'recall': 0.9669164265129683, 'f1-score': 0.9563333713373617, 'support': 17350.0}
- O: {'precision': 0.9362622353744594, 'recall': 0.8916106655105137, 'f1-score': 0.9133910726182546, 'support': 9226.0}
- Accuracy: 0.9390
- Macro avg: {'precision': 0.9097918225973635, 'recall': 0.9170481805626371, 'f1-score': 0.9130018299024169, 'support': 27619.0}
- Weighted avg: {'precision': 0.9390006797830427, 'recall': 0.9389550671639089, 'f1-score': 0.9387012621528006, 'support': 27619.0}
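
The per-class dictionaries above have the shape of scikit-learn's `classification_report`, so numbers in this format can be reproduced from flattened token-level B/I/O labels roughly as follows. This is only a sketch under that assumption: the tiny label lists are illustrative, and special or padded tokens are assumed to be filtered out beforehand.

```python
# Sketch: build B/I/O precision/recall/F1 dicts in the same format as this card.
from sklearn.metrics import classification_report

true_labels = ["O", "B", "I", "I", "O", "B", "I"]  # illustrative ground truth
pred_labels = ["O", "B", "I", "O", "O", "B", "I"]  # illustrative predictions

report = classification_report(
    true_labels,
    pred_labels,
    labels=["B", "I", "O"],
    output_dict=True,  # yields {"B": ..., "I": ..., "O": ..., "accuracy": ..., "macro avg": ..., "weighted avg": ...}
    zero_division=0,
)
print(report["B"])
print(report["accuracy"], report["macro avg"]["f1-score"], report["weighted avg"]["f1-score"])
```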

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9

### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 41 | 0.2937 | {'precision': 0.8082191780821918, 'recall': 0.3959731543624161, 'f1-score': 0.5315315315315315, 'support': 1043.0} | {'precision': 0.8851326600031398, 'recall': 0.9748703170028818, 'f1-score': 0.9278367481280343, 'support': 17350.0} | {'precision': 0.932616577072134, 'recall': 0.8085844352915673, 'f1-score': 0.8661828737300435, 'support': 9226.0} | 0.8975 | {'precision': 0.8753228050524885, 'recall': 0.7264759688856217, 'f1-score': 0.7751837177965365, 'support': 27619.0} | {'precision': 0.8980898944155006, 'recall': 0.8974618921756762, 'f1-score': 0.8922755407669418, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.2221 | {'precision': 0.7776852622814321, 'recall': 0.8954937679769894, 'f1-score': 0.8324420677361853, 'support': 1043.0} | {'precision': 0.9200997398091935, 'recall': 0.978328530259366, 'f1-score': 0.948321135258953, 'support': 17350.0} | {'precision': 0.9626097867001254, 'recall': 0.8315629742033384, 'f1-score': 0.8923005350081413, 'support': 9226.0} | 0.9262 | {'precision': 0.8867982629302503, 'recall': 0.9017950908132312, 'f1-score': 0.8910212460010932, 'support': 27619.0} | {'precision': 0.9289219054398928, 'recall': 0.926174010644846, 'f1-score': 0.9252316705665227, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.1732 | {'precision': 0.8459409594095941, 'recall': 0.8791946308724832, 'f1-score': 0.8622472966619651, 'support': 1043.0} | {'precision': 0.963898493817031, 'recall': 0.9479538904899135, 'f1-score': 0.9558597041815592, 'support': 17350.0} | {'precision': 0.9060388513513513, 'recall': 0.9301972685887708, 'f1-score': 0.9179591400149748, 'support': 9226.0} | 0.9394 | {'precision': 0.9052927681926587, 'recall': 0.9191152633170558, 'f1-score': 0.9120220469528331, 'support': 27619.0} | {'precision': 0.9401162145970984, 'recall': 0.9394257576306166, 'f1-score': 0.9396640292460493, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.1893 | {'precision': 0.8392523364485981, 'recall': 0.8609779482262704, 'f1-score': 0.8499763369616659, 'support': 1043.0} | {'precision': 0.9343029364596582, 'recall': 0.9737752161383285, 'f1-score': 0.9536307961504812, 'support': 17350.0} | {'precision': 0.946491849751949, 'recall': 0.8685237372642532, 'f1-score': 0.9058331449242596, 'support': 9226.0} | 0.9344 | {'precision': 0.9066823742200686, 'recall': 0.9010923005429508, 'f1-score': 0.9031467593454688, 'support': 27619.0} | {'precision': 0.9347851095370013, 'recall': 0.9343567833737645, 'f1-score': 0.9337498181589879, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.1928 | {'precision': 0.8462946020128088, 'recall': 0.8868648130393096, 'f1-score': 0.8661048689138576, 'support': 1043.0} | {'precision': 0.9407601426660722, 'recall': 0.9729682997118155, 'f1-score': 0.9565931886439621, 'support': 17350.0} | {'precision': 0.9475646702400373, 'recall': 0.8814220680685021, 'f1-score': 0.91329739442947, 'support': 9226.0} | 0.9391 | {'precision': 0.9115398049729727, 'recall': 0.9137517269398758, 'f1-score': 0.9119984839957632, 'support': 27619.0} | {'precision': 0.939465780542029, 'recall': 0.9391361019587965, 'f1-score': 0.9387132395183093, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.1784 | {'precision': 0.8283712784588442, 'recall': 0.9069990412272292, 'f1-score': 0.8659038901601832, 'support': 1043.0} | {'precision': 0.9433644229688729, 'recall': 0.9677233429394813, 'f1-score': 0.9553886423125071, 'support': 17350.0} | {'precision': 0.9398548219840995, 'recall': 0.8841318014307392, 'f1-score': 0.9111421390672997, 'support': 9226.0} | 0.9375 | {'precision': 0.9038635078039389, 'recall': 0.9196180618658166, 'f1-score': 0.9108115571799966, 'support': 27619.0} | {'precision': 0.9378494720868902, 'recall': 0.9375067888048083, 'f1-score': 0.9372290117887677, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.1897 | {'precision': 0.8537037037037037, 'recall': 0.8839884947267498, 'f1-score': 0.8685821950070655, 'support': 1043.0} | {'precision': 0.9477176070314715, 'recall': 0.96328530259366, 'f1-score': 0.9554380448763755, 'support': 17350.0} | {'precision': 0.9293575920934412, 'recall': 0.8969217429004986, 'f1-score': 0.9128516271373415, 'support': 9226.0} | 0.9381 | {'precision': 0.9102596342762054, 'recall': 0.9147318467403028, 'f1-score': 0.9122906223402608, 'support': 27619.0} | {'precision': 0.9380342007173714, 'recall': 0.9381223071074261, 'f1-score': 0.9379322357785074, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.1994 | {'precision': 0.8458029197080292, 'recall': 0.8887823585810163, 'f1-score': 0.8667601683029453, 'support': 1043.0} | {'precision': 0.941661062542031, 'recall': 0.9684726224783862, 'f1-score': 0.9548786725009946, 'support': 17350.0} | {'precision': 0.938241732918539, 'recall': 0.8826143507478864, 'f1-score': 0.9095783300753979, 'support': 9226.0} | 0.9368 | {'precision': 0.9085685717228663, 'recall': 0.9132897772690963, 'f1-score': 0.9104057236264459, 'support': 27619.0} | {'precision': 0.9368988778835639, 'recall': 0.9367826496252579, 'f1-score': 0.9364186066370198, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.2009 | {'precision': 0.8471337579617835, 'recall': 0.8926174496644296, 'f1-score': 0.8692810457516339, 'support': 1043.0} | {'precision': 0.9459794744558475, 'recall': 0.9669164265129683, 'f1-score': 0.9563333713373617, 'support': 17350.0} | {'precision': 0.9362622353744594, 'recall': 0.8916106655105137, 'f1-score': 0.9133910726182546, 'support': 9226.0} | 0.9390 | {'precision': 0.9097918225973635, 'recall': 0.9170481805626371, 'f1-score': 0.9130018299024169, 'support': 27619.0} | {'precision': 0.9390006797830427, 'recall': 0.9389550671639089, 'f1-score': 0.9387012621528006, 'support': 27619.0} |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2

meta_data/meta_s42_e10_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8308702791461412, "recall": 0.8932038834951457, "f1-score": 0.8609102509570395, "support": 1133.0}, "I": {"precision": 0.9321623895666807, "recall": 0.9668903070964927, "f1-score": 0.9492088141583443, "support": 18333.0}, "O": {"precision": 0.9339560439560439, "recall": 0.8612687474665586, "f1-score": 0.8961408688317166, "support": 9868.0}, "accuracy": 0.928512988341174, "macro avg": {"precision": 0.8989962375562887, "recall": 0.9071209793527323, "f1-score": 0.9020866446490335, "support": 29334.0}, "weighted avg": {"precision": 0.9288534586471937, "recall": 0.928512988341174, "f1-score": 0.9279462261515863, "support": 29334.0}}

meta_data/meta_s42_e10_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.83, "recall": 0.9159592529711376, "f1-score": 0.8708635996771591, "support": 1178.0}, "I": {"precision": 0.9410241806037384, "recall": 0.961638181914387, "f1-score": 0.951219512195122, "support": 18899.0}, "O": {"precision": 0.9303193695562008, "recall": 0.881335952848723, "f1-score": 0.9051654560129136, "support": 10180.0}, "accuracy": 0.93284198697822, "macro avg": {"precision": 0.900447850053313, "recall": 0.9196444625780825, "f1-score": 0.9090828559617316, "support": 30257.0}, "weighted avg": {"precision": 0.9331000155769632, "recall": 0.93284198697822, "f1-score": 0.9325960678060207, "support": 30257.0}}

meta_data/meta_s42_e10_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8678414096916299, "recall": 0.9127413127413128, "f1-score": 0.8897252540459164, "support": 1295.0}, "I": {"precision": 0.9573654953345245, "recall": 0.9613256915026165, "f1-score": 0.9593415064780046, "support": 20065.0}, "O": {"precision": 0.90925459128556, "recall": 0.8931729748850371, "f1-score": 0.9011420413990007, "support": 8481.0}, "accuracy": 0.9398478603263966, "macro avg": {"precision": 0.9114871654372382, "recall": 0.9224133263763221, "f1-score": 0.9167362673076406, "support": 29841.0}, "weighted avg": {"precision": 0.9398070265115355, "recall": 0.9398478603263966, "f1-score": 0.9397797387679886, "support": 29841.0}}

meta_data/meta_s42_e10_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8634564643799473, "recall": 0.9090277777777778, "f1-score": 0.8856562922868743, "support": 1440.0}, "I": {"precision": 0.9539470666483189, "recall": 0.9634039004956687, "f1-score": 0.9586521618880797, "support": 21587.0}, "O": {"precision": 0.9242855739958755, "recall": 0.8986918743435501, "f1-score": 0.9113090627420605, "support": 10473.0}, "accuracy": 0.9408358208955224, "macro avg": {"precision": 0.9138963683413804, "recall": 0.9237078508723323, "f1-score": 0.9185391723056715, "support": 33500.0}, "weighted avg": {"precision": 0.9407843418777071, "recall": 0.9408358208955224, "f1-score": 0.9407137042886172, "support": 33500.0}}

meta_data/meta_s42_e10_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8636788048552755, "recall": 0.8868648130393096, "f1-score": 0.8751182592242194, "support": 1043.0}, "I": {"precision": 0.948943661971831, "recall": 0.9630547550432277, "f1-score": 0.9559471365638768, "support": 17350.0}, "O": {"precision": 0.9289709172259508, "recall": 0.9001734229351832, "f1-score": 0.9143454805680943, "support": 9226.0}, "accuracy": 0.939172308917774, "macro avg": {"precision": 0.9138644613510191, "recall": 0.9166976636725735, "f1-score": 0.9151369587853968, "support": 27619.0}, "weighted avg": {"precision": 0.9390519284189125, "recall": 0.939172308917774, "f1-score": 0.9389978843359775, "support": 27619.0}}

meta_data/meta_s42_e11_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8348547717842324, "recall": 0.8879082082965578, "f1-score": 0.8605645851154834, "support": 1133.0}, "I": {"precision": 0.9347297154342536, "recall": 0.9639447989963454, "f1-score": 0.9491124895942425, "support": 18333.0}, "O": {"precision": 0.9285481947305649, "recall": 0.8678556951763275, "f1-score": 0.8971766801110472, "support": 9868.0}, "accuracy": 0.9286834390127497, "macro avg": {"precision": 0.8993775606496838, "recall": 0.9065695674897437, "f1-score": 0.9022845849402578, "support": 29334.0}, "weighted avg": {"precision": 0.9287926609084654, "recall": 0.9286834390127497, "f1-score": 0.9282211231336641, "support": 29334.0}}

meta_data/meta_s42_e11_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8334618350038551, "recall": 0.9176570458404074, "f1-score": 0.8735353535353536, "support": 1178.0}, "I": {"precision": 0.9419608248488294, "recall": 0.9643896502460447, "f1-score": 0.9530432963815102, "support": 18899.0}, "O": {"precision": 0.9350743939236291, "recall": 0.8828094302554027, "f1-score": 0.9081905916830882, "support": 10180.0}, "accuracy": 0.9351224510030737, "macro avg": {"precision": 0.9034990179254377, "recall": 0.9216187087806182, "f1-score": 0.911589747199984, "support": 30257.0}, "weighted avg": {"precision": 0.9354196715006481, "recall": 0.9351224510030737, "f1-score": 0.9348570621050549, "support": 30257.0}}

meta_data/meta_s42_e11_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8724681170292573, "recall": 0.8980694980694981, "f1-score": 0.8850837138508371, "support": 1295.0}, "I": {"precision": 0.9613723126381354, "recall": 0.9538499875404934, "f1-score": 0.9575963775548495, "support": 20065.0}, "O": {"precision": 0.892093023255814, "recall": 0.9046103053885155, "f1-score": 0.8983080615889, "support": 8481.0}, "accuracy": 0.937435072551188, "macro avg": {"precision": 0.9086444843077356, "recall": 0.9188432636661691, "f1-score": 0.9136627176648622, "support": 29841.0}, "weighted avg": {"precision": 0.9378245566458775, "recall": 0.937435072551188, "f1-score": 0.9375994569689471, "support": 29841.0}}

meta_data/meta_s42_e11_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8663563829787234, "recall": 0.9048611111111111, "f1-score": 0.8851902173913044, "support": 1440.0}, "I": {"precision": 0.9561740095233693, "recall": 0.9581229443646639, "f1-score": 0.9571474848442778, "support": 21587.0}, "O": {"precision": 0.914616497829233, "recall": 0.9051847608135205, "f1-score": 0.9098761877339476, "support": 10473.0}, "accuracy": 0.9392835820895522, "macro avg": {"precision": 0.9123822967771086, "recall": 0.9227229387630985, "f1-score": 0.9174046299898433, "support": 33500.0}, "weighted avg": {"precision": 0.9393211975174894, "recall": 0.9392835820895522, "f1-score": 0.9392761188810309, "support": 33500.0}}

meta_data/meta_s42_e11_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8579285059578369, "recall": 0.8974113135186961, "f1-score": 0.8772258669165884, "support": 1043.0}, "I": {"precision": 0.9460510739049551, "recall": 0.9672622478386167, "f1-score": 0.9565390863233492, "support": 17350.0}, "O": {"precision": 0.9369666628740471, "recall": 0.8925861695209192, "f1-score": 0.9142381348875936, "support": 9226.0}, "accuracy": 0.9396792063434593, "macro avg": {"precision": 0.9136487475789464, "recall": 0.9190865769594107, "f1-score": 0.9160010293758437, "support": 27619.0}, "weighted avg": {"precision": 0.9396886199949656, "recall": 0.9396792063434593, "f1-score": 0.9394134747592979, "support": 27619.0}}

meta_data/meta_s42_e12_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.834717607973422, "recall": 0.8870255957634599, "f1-score": 0.8600770218228497, "support": 1133.0}, "I": {"precision": 0.9348962458419136, "recall": 0.9657993781705122, "f1-score": 0.950096587250483, "support": 18333.0}, "O": {"precision": 0.9317810901969318, "recall": 0.8678556951763275, "f1-score": 0.898683036885461, "support": 9868.0}, "accuracy": 0.929808413445149, "macro avg": {"precision": 0.9004649813374224, "recall": 0.9068935563700999, "f1-score": 0.902952215319598, "support": 29334.0}, "weighted avg": {"precision": 0.9299789910314655, "recall": 0.929808413445149, "f1-score": 0.9293240678998475, "support": 29334.0}}

meta_data/meta_s42_e12_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8391881342701015, "recall": 0.9125636672325976, "f1-score": 0.8743391622610818, "support": 1178.0}, "I": {"precision": 0.9472886053443498, "recall": 0.958516323615006, "f1-score": 0.9528693914049761, "support": 18899.0}, "O": {"precision": 0.9241855272505836, "recall": 0.8944990176817289, "f1-score": 0.9090999850247093, "support": 10180.0}, "accuracy": 0.9351885514095911, "macro avg": {"precision": 0.9035540889550117, "recall": 0.9218596695097775, "f1-score": 0.9121028462302557, "support": 30257.0}, "weighted avg": {"precision": 0.9353068593047554, "recall": 0.9351885514095911, "f1-score": 0.9350856994698, "support": 30257.0}}

meta_data/meta_s42_e12_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8656387665198237, "recall": 0.9104247104247104, "f1-score": 0.8874670681219419, "support": 1295.0}, "I": {"precision": 0.9585091071517197, "recall": 0.9625218041365562, "f1-score": 0.9605112647336748, "support": 20065.0}, "O": {"precision": 0.9114045618247298, "recall": 0.8951774554887395, "f1-score": 0.9032181309856641, "support": 8481.0}, "accuracy": 0.9411212760966455, "macro avg": {"precision": 0.9118508118320912, "recall": 0.9227079900166687, "f1-score": 0.9170654879470935, "support": 29841.0}, "weighted avg": {"precision": 0.9410914354906994, "recall": 0.9411212760966455, "f1-score": 0.9410583207328345, "support": 29841.0}}

meta_data/meta_s42_e12_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8683510638297872, "recall": 0.9069444444444444, "f1-score": 0.8872282608695652, "support": 1440.0}, "I": {"precision": 0.9570773718572024, "recall": 0.957520730069023, "f1-score": 0.9572989996294924, "support": 21587.0}, "O": {"precision": 0.9130685642850274, "recall": 0.9066170151818963, "f1-score": 0.9098313530088157, "support": 10473.0}, "accuracy": 0.9394328358208955, "macro avg": {"precision": 0.9128323333240056, "recall": 0.9236940632317879, "f1-score": 0.9181195378359578, "support": 33500.0}, "weighted avg": {"precision": 0.9395051293120422, "recall": 0.9394328358208955, "f1-score": 0.939447342110906, "support": 33500.0}}

meta_data/meta_s42_e12_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8633828996282528, "recall": 0.8906999041227229, "f1-score": 0.8768286927796131, "support": 1043.0}, "I": {"precision": 0.9488209014307527, "recall": 0.9670317002881844, "f1-score": 0.9578397510918277, "support": 17350.0}, "O": {"precision": 0.9363431151241535, "recall": 0.8991979189247779, "f1-score": 0.917394669910428, "support": 9226.0}, "accuracy": 0.9414895542923349, "macro avg": {"precision": 0.9161823053943863, "recall": 0.9189765077785618, "f1-score": 0.9173543712606228, "support": 27619.0}, "weighted avg": {"precision": 0.941426285682728, "recall": 0.9414895542923349, "f1-score": 0.9412699675080908, "support": 27619.0}}

meta_data/meta_s42_e13_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8406040268456376, "recall": 0.884377758164166, "f1-score": 0.8619354838709679, "support": 1133.0}, "I": {"precision": 0.9360021208907742, "recall": 0.9629084165166639, "f1-score": 0.94926464657328, "support": 18333.0}, "O": {"precision": 0.9264167205343676, "recall": 0.8714025131738954, "f1-score": 0.8980678851174935, "support": 9868.0}, "accuracy": 0.9290925206245313, "macro avg": {"precision": 0.9010076227569265, "recall": 0.9062295626182418, "f1-score": 0.9030893385205804, "support": 29334.0}, "weighted avg": {"precision": 0.9290929107158863, "recall": 0.9290925206245313, "f1-score": 0.9286689697686362, "support": 29334.0}}

meta_data/meta_s42_e13_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8426435877261998, "recall": 0.9091680814940577, "f1-score": 0.8746427113107391, "support": 1178.0}, "I": {"precision": 0.9454498044328553, "recall": 0.9592571035504524, "f1-score": 0.9523034091506014, "support": 18899.0}, "O": {"precision": 0.9252879421057996, "recall": 0.8917485265225933, "f1-score": 0.9082086939122604, "support": 10180.0}, "accuracy": 0.9345936477509337, "macro avg": {"precision": 0.9044604447549517, "recall": 0.9200579038557012, "f1-score": 0.911718271457867, "support": 30257.0}, "weighted avg": {"precision": 0.9346637555261605, "recall": 0.9345936477509337, "f1-score": 0.9344441202858207, "support": 30257.0}}

meta_data/meta_s42_e13_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8680913780397936, "recall": 0.9096525096525097, "f1-score": 0.8883861236802413, "support": 1295.0}, "I": {"precision": 0.9606048547552726, "recall": 0.9624719661101421, "f1-score": 0.9615375040454082, "support": 20065.0}, "O": {"precision": 0.9119331742243437, "recall": 0.9010729866760995, "f1-score": 0.9064705533479627, "support": 8481.0}, "accuracy": 0.942729801280118, "macro avg": {"precision": 0.9135431356731366, "recall": 0.9243991541462505, "f1-score": 0.9187980603578708, "support": 29841.0}, "weighted avg": {"precision": 0.9427572801120182, "recall": 0.942729801280118, "f1-score": 0.942712603859827, "support": 29841.0}}

meta_data/meta_s42_e13_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8624094799210007, "recall": 0.9097222222222222, "f1-score": 0.8854342683338965, "support": 1440.0}, "I": {"precision": 0.955426624303541, "recall": 0.9611803400194562, "f1-score": 0.9582948457417328, "support": 21587.0}, "O": {"precision": 0.9203039750584567, "recall": 0.9019383175785353, "f1-score": 0.9110285962289627, "support": 10473.0}, "accuracy": 0.9404477611940298, "macro avg": {"precision": 0.9127133597609994, "recall": 0.9242802932734046, "f1-score": 0.9182525701015307, "support": 33500.0}, "weighted avg": {"precision": 0.9404479916631043, "recall": 0.9404477611940298, "f1-score": 0.9403862289472695, "support": 33500.0}}

meta_data/meta_s42_e13_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8656716417910447, "recall": 0.8897411313518696, "f1-score": 0.8775413711583925, "support": 1043.0}, "I": {"precision": 0.9471240942028986, "recall": 0.9642651296829972, "f1-score": 0.9556177528988404, "support": 17350.0}, "O": {"precision": 0.9312169312169312, "recall": 0.8965965748970302, "f1-score": 0.9135788834281297, "support": 9226.0}, "accuracy": 0.9388464462869763, "macro avg": {"precision": 0.9146708890702916, "recall": 0.9168676119772989, "f1-score": 0.9155793358284542, "support": 27619.0}, "weighted avg": {"precision": 0.9387344206602614, "recall": 0.9388464462869763, "f1-score": 0.9386263963728234, "support": 27619.0}}

meta_data/meta_s42_e14_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.85, "recall": 0.8852603706972639, "f1-score": 0.8672719412019023, "support": 1133.0}, "I": {"precision": 0.936269320347237, "recall": 0.9648175421371298, "f1-score": 0.950329079919409, "support": 18333.0}, "O": {"precision": 0.9291729648024185, "recall": 0.872111876773409, "f1-score": 0.8997386304234187, "support": 9868.0}, "accuracy": 0.9305583964000819, "macro avg": {"precision": 0.9051474283832185, "recall": 0.9073965965359342, "f1-score": 0.9057798838482434, "support": 29334.0}, "weighted avg": {"precision": 0.9305500193153392, "recall": 0.9305583964000819, "f1-score": 0.9301023705107581, "support": 29334.0}}

meta_data/meta_s42_e14_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8456692913385827, "recall": 0.9117147707979627, "f1-score": 0.877450980392157, "support": 1178.0}, "I": {"precision": 0.9425555038037572, "recall": 0.9637017831631304, "f1-score": 0.9530113547171786, "support": 18899.0}, "O": {"precision": 0.9324296357615894, "recall": 0.8851669941060903, "f1-score": 0.9081838339044548, "support": 10180.0}, "accuracy": 0.9352546518161087, "macro avg": {"precision": 0.9068848103013099, "recall": 0.9201945160223944, "f1-score": 0.9128820563379302, "support": 30257.0}, "weighted avg": {"precision": 0.9353765602550496, "recall": 0.9352546518161087, "f1-score": 0.93498728482167, "support": 30257.0}}

meta_data/meta_s42_e14_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.866861741038771, "recall": 0.915057915057915, "f1-score": 0.8903080390683696, "support": 1295.0}, "I": {"precision": 0.9592089611419509, "recall": 0.9645153251931223, "f1-score": 0.9618548246812951, "support": 20065.0}, "O": {"precision": 0.9166064111834177, "recall": 0.8968282042212004, "f1-score": 0.906609452291555, "support": 8481.0}, "accuracy": 0.943131932575986, "macro avg": {"precision": 0.9142257044547132, "recall": 0.9254671481574125, "f1-score": 0.91959077201374, "support": 29841.0}, "weighted avg": {"precision": 0.9430934865857382, "recall": 0.943131932575986, "f1-score": 0.943048849995255, "support": 29841.0}}

meta_data/meta_s42_e14_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8621372031662269, "recall": 0.9076388888888889, "f1-score": 0.8843031123139377, "support": 1440.0}, "I": {"precision": 0.9552574762276632, "recall": 0.9633112521424931, "f1-score": 0.9592674600977951, "support": 21587.0}, "O": {"precision": 0.9242290748898678, "recall": 0.9014608994557434, "f1-score": 0.9127030162412993, "support": 10473.0}, "accuracy": 0.9415820895522388, "macro avg": {"precision": 0.9138745847612526, "recall": 0.9241370134957085, "f1-score": 0.9187578628843441, "support": 33500.0}, "weighted avg": {"precision": 0.9415543824838065, "recall": 0.9415820895522388, "f1-score": 0.9414878158793523, "support": 33500.0}}

meta_data/meta_s42_e14_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8667287977632805, "recall": 0.8916586768935763, "f1-score": 0.8790170132325141, "support": 1043.0}, "I": {"precision": 0.9495959767192179, "recall": 0.9685878962536023, "f1-score": 0.9589979170827745, "support": 17350.0}, "O": {"precision": 0.939315176856142, "recall": 0.9009321482766096, "f1-score": 0.9197233748271093, "support": 9226.0}, "accuracy": 0.9430826604873457, "macro avg": {"precision": 0.9185466504462134, "recall": 0.9203929071412628, "f1-score": 0.9192461017141326, "support": 27619.0}, "weighted avg": {"precision": 0.9430323383837321, "recall": 0.9430826604873457, "f1-score": 0.9428580492538672, "support": 27619.0}}

meta_data/meta_s42_e15_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8387635756056808, "recall": 0.8861429832303619, "f1-score": 0.8618025751072962, "support": 1133.0}, "I": {"precision": 0.9379758292072619, "recall": 0.9609992908961981, "f1-score": 0.9493479900851384, "support": 18333.0}, "O": {"precision": 0.9232413940560188, "recall": 0.8751520064856101, "f1-score": 0.8985537405056706, "support": 9868.0}, "accuracy": 0.9292288811617918, "macro avg": {"precision": 0.8999935996229872, "recall": 0.9074314268707234, "f1-score": 0.9032347685660351, "support": 29334.0}, "weighted avg": {"precision": 0.9291871577201459, "recall": 0.9292288811617918, "f1-score": 0.9288793663031761, "support": 29334.0}}

meta_data/meta_s42_e15_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8361934477379095, "recall": 0.9100169779286927, "f1-score": 0.8715447154471545, "support": 1178.0}, "I": {"precision": 0.9427443237907206, "recall": 0.9601037091909624, "f1-score": 0.9513448330100142, "support": 18899.0}, "O": {"precision": 0.9266036184210527, "recall": 0.8854616895874263, "f1-score": 0.9055656017681334, "support": 10180.0}, "accuracy": 0.9330402881977724, "macro avg": {"precision": 0.9018471299832275, "recall": 0.9185274589023605, "f1-score": 0.9094850500751006, "support": 30257.0}, "weighted avg": {"precision": 0.9331654060971809, "recall": 0.9330402881977724, "f1-score": 0.9328354926084081, "support": 30257.0}}

meta_data/meta_s42_e15_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8699485672299779, "recall": 0.9142857142857143, "f1-score": 0.891566265060241, "support": 1295.0}, "I": {"precision": 0.9596853977047883, "recall": 0.9669075504610017, "f1-score": 0.9632829373650107, "support": 20065.0}, "O": {"precision": 0.9218296224588577, "recall": 0.8982431317061668, "f1-score": 0.9098835473275605, "support": 8481.0}, "accuracy": 0.9451090781140042, "macro avg": {"precision": 0.9171545291312079, "recall": 0.9264787988176275, "f1-score": 0.9215775832509374, "support": 29841.0}, "weighted avg": {"precision": 0.9450322686097308, "recall": 0.9451090781140042, "f1-score": 0.9449942299643777, "support": 29841.0}}

meta_data/meta_s42_e15_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8691588785046729, "recall": 0.9041666666666667, "f1-score": 0.8863172226004085, "support": 1440.0}, "I": {"precision": 0.9548375252432532, "recall": 0.963728169731783, "f1-score": 0.9592622478386166, "support": 21587.0}, "O": {"precision": 0.9238300372038378, "recall": 0.9009834813329514, "f1-score": 0.9122637405133658, "support": 10473.0}, "accuracy": 0.9415522388059702, "macro avg": {"precision": 0.9159421469839213, "recall": 0.9229594392438004, "f1-score": 0.9192810703174636, "support": 33500.0}, "weighted avg": {"precision": 0.9414608484211531, "recall": 0.9415522388059702, "f1-score": 0.9414337044487546, "support": 33500.0}}

meta_data/meta_s42_e15_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8616822429906542, "recall": 0.8839884947267498, "f1-score": 0.872692853762423, "support": 1043.0}, "I": {"precision": 0.9506446299767138, "recall": 0.9647262247838617, "f1-score": 0.957633664216037, "support": 17350.0}, "O": {"precision": 0.9322299261910088, "recall": 0.9035334923043572, "f1-score": 0.9176574196389256, "support": 9226.0}, "accuracy": 0.9412361055794923, "macro avg": {"precision": 0.9148522663861257, "recall": 0.9174160706049896, "f1-score": 0.9159946458724618, "support": 27619.0}, "weighted avg": {"precision": 0.9411337198513156, "recall": 0.9412361055794923, "f1-score": 0.9410720907422853, "support": 27619.0}}

meta_data/meta_s42_e16_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8111111111111111, "recall": 0.9020300088261254, "f1-score": 0.854157960718763, "support": 1133.0}, "I": {"precision": 0.9180539091893006, "recall": 0.9716358479245077, "f1-score": 0.9440852236591055, "support": 18333.0}, "O": {"precision": 0.9426825049013955, "recall": 0.8283340089177138, "f1-score": 0.8818167107179459, "support": 9868.0}, "accuracy": 0.9207404377173246, "macro avg": {"precision": 0.8906158417339357, "recall": 0.9006666218894489, "f1-score": 0.8933532983652714, "support": 29334.0}, "weighted avg": {"precision": 0.9222084326864153, "recall": 0.9207404377173246, "f1-score": 0.9196646443104053, "support": 29334.0}}

meta_data/meta_s42_e4_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.7972865123703112, "recall": 0.881729920564872, "f1-score": 0.8373847443419951, "support": 1133.0}, "I": {"precision": 0.9204350314825415, "recall": 0.9648175421371298, "f1-score": 0.9421038615179761, "support": 18333.0}, "O": {"precision": 0.9300541516245487, "recall": 0.8354276449128496, "f1-score": 0.880204996796925, "support": 9868.0}, "accuracy": 0.9180814072407445, "macro avg": {"precision": 0.8825918984924671, "recall": 0.8939917025382838, "f1-score": 0.8865645342189654, "support": 29334.0}, "weighted avg": {"precision": 0.9189144139536388, "recall": 0.9180814072407445, "f1-score": 0.9172363099795661, "support": 29334.0}}

meta_data/meta_s42_e4_cvi1.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.7851361295069904, "recall": 0.9057724957555179, "f1-score": 0.8411509657075286, "support": 1178.0}, "I": {"precision": 0.9438143521664141, "recall": 0.9555002910206889, "f1-score": 0.9496213714766512, "support": 18899.0}, "O": {"precision": 0.9237071172555044, "recall": 0.8860510805500982, "f1-score": 0.90448734018551, "support": 10180.0}, "accuracy": 0.9301979707175199, "macro avg": {"precision": 0.8842191996429696, "recall": 0.9157746224421017, "f1-score": 0.8984198924565633, "support": 30257.0}, "weighted avg": {"precision": 0.9308714101138028, "recall": 0.9301979707175199, "f1-score": 0.9302128849598172, "support": 30257.0}}

meta_data/meta_s42_e4_cvi2.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8251214434420542, "recall": 0.9181467181467181, "f1-score": 0.8691520467836259, "support": 1295.0}, "I": {"precision": 0.9484454939000394, "recall": 0.9608771492648891, "f1-score": 0.9546208501473029, "support": 20065.0}, "O": {"precision": 0.9119177403369673, "recall": 0.8679401014031364, "f1-score": 0.88938560985924, "support": 8481.0}, "accuracy": 0.9326094970007708, "macro avg": {"precision": 0.8951615592263535, "recall": 0.9156546562715812, "f1-score": 0.9043861689300563, "support": 29841.0}, "weighted avg": {"precision": 0.932712223456304, "recall": 0.9326094970007708, "f1-score": 0.9323715229384619, "support": 29841.0}}

meta_data/meta_s42_e4_cvi3.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8184693232131562, "recall": 0.8986111111111111, "f1-score": 0.8566699768288646, "support": 1440.0}, "I": {"precision": 0.9491798746774788, "recall": 0.9543243618844675, "f1-score": 0.9517451664318217, "support": 21587.0}, "O": {"precision": 0.9104258443465492, "recall": 0.8879977083930106, "f1-score": 0.8990719257540604, "support": 10473.0}, "accuracy": 0.9311940298507463, "macro avg": {"precision": 0.892691680745728, "recall": 0.9136443937961966, "f1-score": 0.9024956896715822, "support": 33500.0}, "weighted avg": {"precision": 0.9314457208337638, "recall": 0.9311940298507463, "f1-score": 0.9311912821737187, "support": 33500.0}}

meta_data/meta_s42_e4_cvi4.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8005115089514067, "recall": 0.900287631831256, "f1-score": 0.8474729241877257, "support": 1043.0}, "I": {"precision": 0.9321724709784411, "recall": 0.9719308357348703, "f1-score": 0.9516365688487585, "support": 17350.0}, "O": {"precision": 0.947941598851125, "recall": 0.8585519184912205, "f1-score": 0.9010351495848026, "support": 9226.0}, "accuracy": 0.9313516057786306, "macro avg": {"precision": 0.8935418595936575, "recall": 0.9102567953524489, "f1-score": 0.9000482142070956, "support": 27619.0}, "weighted avg": {"precision": 0.9324680497596853, "recall": 0.9313516057786306, "f1-score": 0.9307997762237281, "support": 27619.0}}

meta_data/meta_s42_e5_cvi0.json
ADDED
@@ -0,0 +1 @@
{"B": {"precision": 0.8073836276083467, "recall": 0.8879082082965578, "f1-score": 0.8457335014712064, "support": 1133.0}, "I": {"precision": 0.9263819753733299, "recall": 0.9643811705667376, "f1-score": 0.9449997327489444, "support": 18333.0}, "O": {"precision": 0.9293568810396534, "recall": 0.847892176732874, "f1-score": 0.8867574585342589, "support": 9868.0}, "accuracy": 0.9222404036271903, "macro avg": {"precision": 0.8877074946737767, "recall": 0.9000605185320564, "f1-score": 0.8924968975848032, "support": 29334.0}, "weighted avg": {"precision": 0.9227865312162955, "recall": 0.9222404036271903, "f1-score": 0.9215728764733532, "support": 29334.0}}