almaghrabima committed
Commit 86f4cc1
1 Parent(s): 7fc84c3

Model save

Files changed (2)
  1. README.md +30 -41
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: mit
+base_model: dslim/bert-base-NER
 tags:
 - generated_from_trainer
 metrics:
@@ -10,18 +11,6 @@ metrics:
 model-index:
 - name: ner_column_bert-base-NER
   results: []
-language:
-- en
-widget:
-- >-
-  india 0S0308Z8 trudeau 3000 Ravensburger Hamnoy, Lofoten of gold
-  bestseller 620463000001
-- >-
-  other china lc waikiki mağazacilik hi̇zmetleri̇ ti̇c aş 630140000000 hilti
-  6204699090_BD 55L Toaster Oven with Double Glass
-- >-
-  611020000001 italy Apparel other games 9W1964Z8 debenhams guangzhou hec
-  fashion leather co ltd
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,11 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1872
-- Precision: 0.7623
-- Recall: 0.7753
-- F1: 0.7688
-- Accuracy: 0.9023
+- Loss: 0.1855
+- Precision: 0.7651
+- Recall: 0.7786
+- F1: 0.7718
+- Accuracy: 0.9026
 
 ## Model description
 
@@ -66,31 +55,31 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log | 1.0 | 702 | 0.6427 | 0.3025 | 0.2180 | 0.2534 | 0.7415 |
-| 0.9329 | 2.0 | 1404 | 0.4771 | 0.4343 | 0.3587 | 0.3929 | 0.7955 |
-| 0.546 | 3.0 | 2106 | 0.3983 | 0.5157 | 0.4530 | 0.4823 | 0.8242 |
-| 0.546 | 4.0 | 2808 | 0.3748 | 0.5089 | 0.4758 | 0.4918 | 0.8305 |
-| 0.4339 | 5.0 | 3510 | 0.2947 | 0.6362 | 0.6146 | 0.6252 | 0.8656 |
-| 0.3658 | 6.0 | 4212 | 0.2818 | 0.6421 | 0.6231 | 0.6325 | 0.8664 |
-| 0.3658 | 7.0 | 4914 | 0.2459 | 0.7108 | 0.6983 | 0.7045 | 0.8834 |
-| 0.3221 | 8.0 | 5616 | 0.2665 | 0.6586 | 0.6404 | 0.6494 | 0.8701 |
-| 0.2914 | 9.0 | 6318 | 0.2449 | 0.6880 | 0.6768 | 0.6823 | 0.8793 |
-| 0.2657 | 10.0 | 7020 | 0.2411 | 0.7014 | 0.6862 | 0.6937 | 0.8824 |
-| 0.2657 | 11.0 | 7722 | 0.2179 | 0.7261 | 0.7228 | 0.7244 | 0.8902 |
-| 0.2453 | 12.0 | 8424 | 0.2301 | 0.6922 | 0.6919 | 0.6920 | 0.8858 |
-| 0.2295 | 13.0 | 9126 | 0.2352 | 0.6768 | 0.6836 | 0.6802 | 0.8832 |
-| 0.2295 | 14.0 | 9828 | 0.2020 | 0.7545 | 0.7499 | 0.7522 | 0.8970 |
-| 0.2155 | 15.0 | 10530 | 0.2012 | 0.7449 | 0.7508 | 0.7478 | 0.8974 |
-| 0.2064 | 16.0 | 11232 | 0.2036 | 0.7282 | 0.7402 | 0.7341 | 0.8960 |
-| 0.2064 | 17.0 | 11934 | 0.1976 | 0.7390 | 0.7496 | 0.7443 | 0.8974 |
-| 0.1978 | 18.0 | 12636 | 0.1859 | 0.7688 | 0.7828 | 0.7757 | 0.9040 |
-| 0.1895 | 19.0 | 13338 | 0.1917 | 0.7574 | 0.7691 | 0.7632 | 0.9014 |
-| 0.186 | 20.0 | 14040 | 0.1872 | 0.7623 | 0.7753 | 0.7688 | 0.9023 |
+| No log | 1.0 | 702 | 0.7382 | 0.2576 | 0.1887 | 0.2178 | 0.7127 |
+| 0.9356 | 2.0 | 1404 | 0.4405 | 0.5139 | 0.4331 | 0.4700 | 0.8157 |
+| 0.5445 | 3.0 | 2106 | 0.3608 | 0.5712 | 0.5143 | 0.5413 | 0.8404 |
+| 0.5445 | 4.0 | 2808 | 0.3226 | 0.6188 | 0.5840 | 0.6009 | 0.8550 |
+| 0.4316 | 5.0 | 3510 | 0.2757 | 0.6788 | 0.6569 | 0.6676 | 0.8728 |
+| 0.3605 | 6.0 | 4212 | 0.2828 | 0.6584 | 0.6346 | 0.6463 | 0.8697 |
+| 0.3605 | 7.0 | 4914 | 0.2456 | 0.7108 | 0.6926 | 0.7015 | 0.8820 |
+| 0.3153 | 8.0 | 5616 | 0.2385 | 0.7055 | 0.6986 | 0.7021 | 0.8855 |
+| 0.282 | 9.0 | 6318 | 0.2345 | 0.7044 | 0.6961 | 0.7002 | 0.8853 |
+| 0.2587 | 10.0 | 7020 | 0.2313 | 0.7081 | 0.7049 | 0.7065 | 0.8862 |
+| 0.2587 | 11.0 | 7722 | 0.2026 | 0.7734 | 0.7537 | 0.7634 | 0.8968 |
+| 0.239 | 12.0 | 8424 | 0.1980 | 0.7651 | 0.7687 | 0.7669 | 0.8991 |
+| 0.2241 | 13.0 | 9126 | 0.2091 | 0.7368 | 0.7423 | 0.7395 | 0.8936 |
+| 0.2241 | 14.0 | 9828 | 0.1954 | 0.7693 | 0.7684 | 0.7689 | 0.8987 |
+| 0.2124 | 15.0 | 10530 | 0.1916 | 0.7668 | 0.7749 | 0.7708 | 0.9008 |
+| 0.2025 | 16.0 | 11232 | 0.1841 | 0.7699 | 0.7794 | 0.7746 | 0.9024 |
+| 0.2025 | 17.0 | 11934 | 0.1938 | 0.7527 | 0.7626 | 0.7576 | 0.8992 |
+| 0.193 | 18.0 | 12636 | 0.1849 | 0.7705 | 0.7841 | 0.7772 | 0.9040 |
+| 0.1877 | 19.0 | 13338 | 0.1927 | 0.7510 | 0.7649 | 0.7579 | 0.9005 |
+| 0.1821 | 20.0 | 14040 | 0.1855 | 0.7651 | 0.7786 | 0.7718 | 0.9026 |
 
 
 ### Framework versions
 
-- Transformers 4.30.2
-- Pytorch 1.13.1+cu116
-- Datasets 2.13.2
-- Tokenizers 0.13.3
+- Transformers 4.33.2
+- Pytorch 2.0.1+cu117
+- Datasets 2.14.5
+- Tokenizers 0.13.3
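The card itself carries no usage snippet, so here is a minimal inference sketch using the `transformers` token-classification pipeline. The repo id `almaghrabima/ner_column_bert-base-NER` is an assumption pieced together from the commit author and the `model-index` name, and the sample text is one of the widget examples from the previous revision of the card.

```python
from transformers import pipeline

# Assumed repo id (commit author + model-index name); adjust if the checkpoint
# lives under a different namespace.
MODEL_ID = "almaghrabima/ner_column_bert-base-NER"

# aggregation_strategy="simple" merges sub-word pieces into whole entity spans.
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

# Sample input taken from the widget examples of the earlier card revision.
text = (
    "india 0S0308Z8 trudeau 3000 Ravensburger Hamnoy, Lofoten of gold "
    "bestseller 620463000001"
)

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```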
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f6d1202cf9f363c6ad3019fb510826b598a08334230ad861f13620991d08b27
+oid sha256:d7ab94fb2abfc6fc1cbb4a21b9246df56c92a5adf94be72639ac12dc9b5c69ba
 size 430967977
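The weights file is stored as a Git LFS pointer, so only the SHA-256 oid changes in this commit while the byte size stays the same. A small sketch for checking that a locally downloaded copy of `pytorch_model.bin` matches the new pointer (the local path is an assumption):

```python
import hashlib
import os

# Digest and size taken from the updated LFS pointer in this commit.
EXPECTED_OID = "d7ab94fb2abfc6fc1cbb4a21b9246df56c92a5adf94be72639ac12dc9b5c69ba"
EXPECTED_SIZE = 430967977

path = "pytorch_model.bin"  # local copy of the downloaded weights

# Hash the file in 1 MiB chunks to avoid loading ~430 MB into memory at once.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("size matches:", os.path.getsize(path) == EXPECTED_SIZE)
print("oid matches :", sha256.hexdigest() == EXPECTED_OID)
```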