tomaarsen committed on
Commit
4d20fc5
1 Parent(s): 1ceb026

Upload model

Files changed (5)
  1. README.md +343 -59
  2. config.json +3 -4
  3. pytorch_model.bin +2 -2
  4. tokenizer.json +2 -2
  5. tokenizer_config.json +3 -1
README.md CHANGED
@@ -1,5 +1,7 @@
1
-
2
  ---
 
 
 
3
  license: cc-by-sa-4.0
4
  library_name: span-marker
5
  tags:
@@ -7,71 +9,231 @@ tags:
7
  - token-classification
8
  - ner
9
  - named-entity-recognition
10
- pipeline_tag: token-classification
11
- widget:
12
- - text: "Amelia Earthart voló su Lockheed Vega 5B monomotor a través del Océano Atlántico hasta París ."
13
- example_title: "Spanish"
14
- - text: "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris ."
15
- example_title: "English"
16
- - text: "Amelia Earthart a fait voler son monomoteur Lockheed Vega 5B à travers l'ocean Atlantique jusqu'à Paris ."
17
- example_title: "French"
18
- - text: "Amelia Earthart flog mit ihrer einmotorigen Lockheed Vega 5B über den Atlantik nach Paris ."
19
- example_title: "German"
20
- - text: "Амелия Эртхарт перелетела на своем одномоторном самолете Lockheed Vega 5B через Атлантический океан в Париж ."
21
- example_title: "Russian"
22
- - text: "Amelia Earthart vloog met haar één-motorige Lockheed Vega 5B over de Atlantische Oceaan naar Parijs ."
23
- example_title: "Dutch"
24
- - text: "Amelia Earthart przeleciała swoim jednosilnikowym samolotem Lockheed Vega 5B przez Ocean Atlantycki do Paryża ."
25
- example_title: "Polish"
26
- - text: "Amelia Earthart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafið til Parísar ."
27
- example_title: "Icelandic"
28
- - text: "Η Amelia Earthart πέταξε το μονοκινητήριο Lockheed Vega 5B της πέρα ​​από τον Ατλαντικό Ωκεανό στο Παρίσι ."
29
- example_title: "Greek"
30
- model-index:
31
- - name: SpanMarker w. roberta-base on finegrained, supervised FewNERD by Tom Aarsen
32
- results:
33
- - task:
34
- type: token-classification
35
- name: Named Entity Recognition
36
- dataset:
37
- type: DFKI-SLT/few-nerd
38
- name: finegrained, supervised FewNERD
39
- config: supervised
40
- split: test
41
- revision: 2e3e727c63604fbfa2ff4cc5055359c84fe5ef2c
42
- metrics:
43
- - type: f1
44
- value: 0.6860
45
- name: F1
46
- - type: precision
47
- value: 0.6847
48
- name: Precision
49
- - type: recall
50
- value: 0.6873
51
- name: Recall
52
  datasets:
53
- - DFKI-SLT/few-nerd
54
- language:
55
- - multilingual
56
  metrics:
57
- - f1
58
- - recall
59
- - precision
60
  ---
61
 
62
- # SpanMarker for Named Entity Recognition
63
 
64
- This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. In particular, this SpanMarker model uses [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the underlying encoder.
65
 
66
- ## Usage
67
 
68
- To use this model for inference, first install the `span_marker` library:
69
 
70
- ```bash
71
- pip install span_marker
72
- ```
73
 
74
- You can then run inference with this model like so:
75
 
76
  ```python
77
  from span_marker import SpanMarkerModel
@@ -79,7 +241,129 @@ from span_marker import SpanMarkerModel
79
  # Download from the 🤗 Hub
80
  model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super")
81
  # Run inference
82
- entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris .")
83
  ```
84
 
85
- See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library.
1
  ---
2
+ language:
3
+ - en
4
+ - multilingual
5
  license: cc-by-sa-4.0
6
  library_name: span-marker
7
  tags:
 
9
  - token-classification
10
  - ner
11
  - named-entity-recognition
12
+ - generated_from_span_marker_trainer
13
  datasets:
14
+ - DFKI-SLT/few-nerd
 
 
15
  metrics:
16
+ - precision
17
+ - recall
18
+ - f1
19
+ widget:
20
+ - text: The WPC led the international peace movement in the decade after the Second
21
+ World War, but its failure to speak out against the Soviet suppression of the
22
+ 1956 Hungarian uprising and the resumption of Soviet nuclear tests in 1961 marginalised
23
+ it, and in the 1960s it was eclipsed by the newer, non-aligned peace organizations
24
+ like the Campaign for Nuclear Disarmament.
25
+ - text: Most of the Steven Seagal movie "Under Siege "(co-starring Tommy Lee Jones)
26
+ was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and
27
+ open to the public.
28
+ - text: 'The Central African CFA franc (French: "franc CFA "or simply "franc ", ISO
29
+ 4217 code: XAF) is the currency of six independent states in Central Africa: Cameroon,
30
+ Central African Republic, Chad, Republic of the Congo, Equatorial Guinea and Gabon.'
31
+ - text: Brenner conducted post-doctoral research at Brandeis University with Gregory
32
+ Petsko and then took his first academic position at Thomas Jefferson University
33
+ in 1996, moving to Dartmouth Medical School in 2003, where he served as Associate
34
+ Director for Basic Sciences at Norris Cotton Cancer Center.
35
+ - text: On Friday, October 27, 2017, the Senate of Spain (Senado) voted 214 to 47
36
+ to invoke Article 155 of the Spanish Constitution over Catalonia after the Catalan
37
+ Parliament declared the independence.
38
+ pipeline_tag: token-classification
39
+ co2_eq_emissions:
40
+ emissions: 452.84872035276965
41
+ source: codecarbon
42
+ training_type: fine-tuning
43
+ on_cloud: false
44
+ cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
45
+ ram_total_size: 31.777088165283203
46
+ hours_used: 3.118
47
+ hardware_used: 1 x NVIDIA GeForce RTX 3090
48
+ base_model: xlm-roberta-base
49
+ model-index:
50
+ - name: SpanMarker with xlm-roberta-base on FewNERD
51
+ results:
52
+ - task:
53
+ type: token-classification
54
+ name: Named Entity Recognition
55
+ dataset:
56
+ name: FewNERD
57
+ type: DFKI-SLT/few-nerd
58
+ split: test
59
+ metrics:
60
+ - type: f1
61
+ value: 0.6884821229658107
62
+ name: F1
63
+ - type: precision
64
+ value: 0.6890426017339362
65
+ name: Precision
66
+ - type: recall
67
+ value: 0.6879225552622042
68
+ name: Recall
69
  ---
70
 
71
+ # SpanMarker with xlm-roberta-base on FewNERD
72
 
73
+ This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the underlying encoder.
74
 
75
+ ## Model Details
76
 
77
+ ### Model Description
78
+ - **Model Type:** SpanMarker
79
+ - **Encoder:** [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
80
+ - **Maximum Sequence Length:** 256 tokens
81
+ - **Maximum Entity Length:** 8 words
82
+ - **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
83
+ - **Languages:** en, multilingual
84
+ - **License:** cc-by-sa-4.0
85
 
86
+ ### Model Sources
87
+
88
+ - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
89
+ - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
90
+
91
+ ### Model Labels
92
+ | Label | Examples |
93
+ |:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
94
+ | art-broadcastprogram | "The Gale Storm Show : Oh , Susanna", "Corazones", "Street Cents" |
95
+ | art-film | "L'Atlantide", "Shawshank Redemption", "Bosch" |
96
+ | art-music | "Hollywood Studio Symphony", "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Champion Lover" |
97
+ | art-other | "Venus de Milo", "Aphrodite of Milos", "The Today Show" |
98
+ | art-painting | "Cofiwch Dryweryn", "Production/Reproduction", "Touit" |
99
+ | art-writtenart | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi" |
100
+ | building-airport | "Newark Liberty International Airport", "Luton Airport", "Sheremetyevo International Airport" |
101
+ | building-hospital | "Hokkaido University Hospital", "Yeungnam University Hospital", "Memorial Sloan-Kettering Cancer Center" |
102
+ | building-hotel | "Radisson Blu Sea Plaza Hotel", "The Standard Hotel", "Flamingo Hotel" |
103
+ | building-library | "British Library", "Berlin State Library", "Bayerische Staatsbibliothek" |
104
+ | building-other | "Communiplex", "Henry Ford Museum", "Alpha Recording Studios" |
105
+ | building-restaurant | "Fatburger", "Carnegie Deli", "Trumbull" |
106
+ | building-sportsfacility | "Boston Garden", "Glenn Warner Soccer Facility", "Sports Center" |
107
+ | building-theater | "Pittsburgh Civic Light Opera", "National Paris Opera", "Sanders Theatre" |
108
+ | event-attack/battle/war/militaryconflict | "Jurist", "Easter Offensive", "Vietnam War" |
109
+ | event-disaster | "1693 Sicily earthquake", "1990s North Korean famine", "the 1912 North Mount Lyell Disaster" |
110
+ | event-election | "March 1898 elections", "Elections to the European Parliament", "1982 Mitcham and Morden by-election" |
111
+ | event-other | "Eastwood Scoring Stage", "Union for a Popular Movement", "Masaryk Democratic Movement" |
112
+ | event-protest | "Russian Revolution", "French Revolution", "Iranian Constitutional Revolution" |
113
+ | event-sportsevent | "World Cup", "Stanley Cup", "National Champions" |
114
+ | location-GPE | "Mediterranean Basin", "Croatian", "the Republic of Croatia" |
115
+ | location-bodiesofwater | "Norfolk coast", "Atatürk Dam Lake", "Arthur Kill" |
116
+ | location-island | "Laccadives", "Staten Island", "new Samsat district" |
117
+ | location-mountain | "Ruweisat Ridge", "Miteirya Ridge", "Salamander Glacier" |
118
+ | location-other | "Victoria line", "Northern City Line", "Cartuther" |
119
+ | location-park | "Painted Desert Community Complex Historic District", "Shenandoah National Park", "Gramercy Park" |
120
+ | location-road/railway/highway/transit | "Newark-Elizabeth Rail Link", "NJT", "Friern Barnet Road" |
121
+ | organization-company | "Church 's Chicken", "Texas Chicken", "Dixy Chicken" |
122
+ | organization-education | "MIT", "Belfast Royal Academy and the Ulster College of Physical Education", "Barnard College" |
123
+ | organization-government/governmentagency | "Congregazione dei Nobili", "Diet", "Supreme Court" |
124
+ | organization-media/newspaper | "TimeOut Melbourne", "Al Jazeera", "Clash" |
125
+ | organization-other | "IAEA", "4th Army", "Defence Sector C" |
126
+ | organization-politicalparty | "Al Wafa ' Islamic", "Shimpotō", "Kenseitō" |
127
+ | organization-religion | "UPCUSA", "Jewish", "Christian" |
128
+ | organization-showorganization | "Bochumer Symphoniker", "Mr. Mister", "Lizzy" |
129
+ | organization-sportsleague | "First Division", "NHL", "China League One" |
130
+ | organization-sportsteam | "Tottenham", "Arsenal", "Luc Alphand Aventures" |
131
+ | other-astronomything | "Algol", "Zodiac", "`` Caput Larvae ''" |
132
+ | other-award | "Grand Commander of the Order of the Niger", "Order of the Republic of Guinea and Nigeria", "GCON" |
133
+ | other-biologything | "Amphiphysin", "BAR", "N-terminal lipid" |
134
+ | other-chemicalthing | "carbon dioxide", "sulfur", "uranium" |
135
+ | other-currency | "$", "lac crore", "Travancore Rupee" |
136
+ | other-disease | "hypothyroidism", "bladder cancer", "French Dysentery Epidemic of 1779" |
137
+ | other-educationaldegree | "Master", "Bachelor", "BSc ( Hons ) in physics" |
138
+ | other-god | "El", "Fujin", "Raijin" |
139
+ | other-language | "Breton-speaking", "Latin", "English" |
140
+ | other-law | "United States Freedom Support Act", "Thirty Years ' Peace", "Leahy–Smith America Invents Act ( AIA" |
141
+ | other-livingthing | "insects", "patchouli", "monkeys" |
142
+ | other-medical | "amitriptyline", "pediatrician", "Pediatrics" |
143
+ | person-actor | "Tchéky Karyo", "Edmund Payne", "Ellaline Terriss" |
144
+ | person-artist/author | "George Axelrod", "Hicks", "Gaetano Donizett" |
145
+ | person-athlete | "Jaguar", "Neville", "Tozawa" |
146
+ | person-director | "Richard Quine", "Frank Darabont", "Bob Swaim" |
147
+ | person-other | "Campbell", "Richard Benson", "Holden" |
148
+ | person-politician | "Rivière", "Emeric", "William" |
149
+ | person-scholar | "Stedman", "Wurdack", "Stalmine" |
150
+ | person-soldier | "Joachim Ziegler", "Krukenberg", "Helmuth Weidling" |
151
+ | product-airplane | "EC135T2 CPDS", "Spey-equipped FGR.2s", "Luton" |
152
+ | product-car | "Phantom", "Corvettes - GT1 C6R", "100EX" |
153
+ | product-food | "V. labrusca", "red grape", "yakiniku" |
154
+ | product-game | "Hardcore RPG", "Airforce Delta", "Splinter Cell" |
155
+ | product-other | "PDP-1", "Fairbottom Bobs", "X11" |
156
+ | product-ship | "Essex", "Congress", "HMS `` Chinkara ''" |
157
+ | product-software | "Wikipedia", "Apdf", "AmiPDF" |
158
+ | product-train | "55022", "Royal Scots Grey", "High Speed Trains" |
159
+ | product-weapon | "AR-15 's", "ZU-23-2MR Wróbel II", "ZU-23-2M Wróbel" |
160
 
161
+ ## Evaluation
162
+
163
+ ### Metrics
164
+ | Label | Precision | Recall | F1 |
165
+ |:-----------------------------------------|:----------|:-------|:-------|
166
+ | **all** | 0.6890 | 0.6879 | 0.6885 |
167
+ | art-broadcastprogram | 0.6 | 0.5771 | 0.5883 |
168
+ | art-film | 0.7384 | 0.7453 | 0.7419 |
169
+ | art-music | 0.7930 | 0.7221 | 0.7558 |
170
+ | art-other | 0.4245 | 0.2900 | 0.3446 |
171
+ | art-painting | 0.5476 | 0.4035 | 0.4646 |
172
+ | art-writtenart | 0.6400 | 0.6539 | 0.6469 |
173
+ | building-airport | 0.8219 | 0.8242 | 0.8230 |
174
+ | building-hospital | 0.7024 | 0.8104 | 0.7526 |
175
+ | building-hotel | 0.7175 | 0.7283 | 0.7228 |
176
+ | building-library | 0.74 | 0.7296 | 0.7348 |
177
+ | building-other | 0.5828 | 0.5910 | 0.5869 |
178
+ | building-restaurant | 0.5525 | 0.5216 | 0.5366 |
179
+ | building-sportsfacility | 0.6187 | 0.7881 | 0.6932 |
180
+ | building-theater | 0.7067 | 0.7626 | 0.7336 |
181
+ | event-attack/battle/war/militaryconflict | 0.7544 | 0.7468 | 0.7506 |
182
+ | event-disaster | 0.5882 | 0.5314 | 0.5584 |
183
+ | event-election | 0.4167 | 0.2198 | 0.2878 |
184
+ | event-other | 0.4902 | 0.4042 | 0.4430 |
185
+ | event-protest | 0.3643 | 0.2831 | 0.3186 |
186
+ | event-sportsevent | 0.6125 | 0.6239 | 0.6182 |
187
+ | location-GPE | 0.8102 | 0.8553 | 0.8321 |
188
+ | location-bodiesofwater | 0.6888 | 0.7725 | 0.7282 |
189
+ | location-island | 0.7285 | 0.6440 | 0.6836 |
190
+ | location-mountain | 0.7129 | 0.7327 | 0.7227 |
191
+ | location-other | 0.4376 | 0.2560 | 0.3231 |
192
+ | location-park | 0.6991 | 0.6900 | 0.6945 |
193
+ | location-road/railway/highway/transit | 0.6936 | 0.7259 | 0.7094 |
194
+ | organization-company | 0.6921 | 0.6912 | 0.6917 |
195
+ | organization-education | 0.7838 | 0.7963 | 0.7900 |
196
+ | organization-government/governmentagency | 0.5363 | 0.4394 | 0.4831 |
197
+ | organization-media/newspaper | 0.6215 | 0.6705 | 0.6451 |
198
+ | organization-other | 0.5766 | 0.5157 | 0.5444 |
199
+ | organization-politicalparty | 0.6449 | 0.7324 | 0.6859 |
200
+ | organization-religion | 0.5139 | 0.6057 | 0.5560 |
201
+ | organization-showorganization | 0.5620 | 0.5657 | 0.5638 |
202
+ | organization-sportsleague | 0.6348 | 0.6542 | 0.6443 |
203
+ | organization-sportsteam | 0.7138 | 0.7566 | 0.7346 |
204
+ | other-astronomything | 0.7418 | 0.7625 | 0.752 |
205
+ | other-award | 0.7291 | 0.6736 | 0.7002 |
206
+ | other-biologything | 0.6735 | 0.6275 | 0.6497 |
207
+ | other-chemicalthing | 0.6025 | 0.5651 | 0.5832 |
208
+ | other-currency | 0.6843 | 0.8411 | 0.7546 |
209
+ | other-disease | 0.6284 | 0.7089 | 0.6662 |
210
+ | other-educationaldegree | 0.5856 | 0.6033 | 0.5943 |
211
+ | other-god | 0.6089 | 0.6913 | 0.6475 |
212
+ | other-language | 0.6608 | 0.7968 | 0.7225 |
213
+ | other-law | 0.6693 | 0.7246 | 0.6958 |
214
+ | other-livingthing | 0.6070 | 0.6014 | 0.6042 |
215
+ | other-medical | 0.5062 | 0.5113 | 0.5088 |
216
+ | person-actor | 0.8274 | 0.7673 | 0.7962 |
217
+ | person-artist/author | 0.6761 | 0.7294 | 0.7018 |
218
+ | person-athlete | 0.8132 | 0.8347 | 0.8238 |
219
+ | person-director | 0.675 | 0.6823 | 0.6786 |
220
+ | person-other | 0.6472 | 0.6388 | 0.6429 |
221
+ | person-politician | 0.6621 | 0.6593 | 0.6607 |
222
+ | person-scholar | 0.5181 | 0.5007 | 0.5092 |
223
+ | person-soldier | 0.4750 | 0.5131 | 0.4933 |
224
+ | product-airplane | 0.6230 | 0.6717 | 0.6464 |
225
+ | product-car | 0.7293 | 0.7176 | 0.7234 |
226
+ | product-food | 0.5758 | 0.5185 | 0.5457 |
227
+ | product-game | 0.7049 | 0.6734 | 0.6888 |
228
+ | product-other | 0.5477 | 0.4067 | 0.4668 |
229
+ | product-ship | 0.6247 | 0.6395 | 0.6320 |
230
+ | product-software | 0.6497 | 0.6760 | 0.6626 |
231
+ | product-train | 0.5505 | 0.5732 | 0.5616 |
232
+ | product-weapon | 0.6004 | 0.4744 | 0.5300 |
233
+
234
+ ## Uses
235
+
236
+ ### Direct Use for Inference
237
 
238
  ```python
239
  from span_marker import SpanMarkerModel
 
241
  # Download from the 🤗 Hub
242
  model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super")
243
  # Run inference
244
+ entities = model.predict("Most of the Steven Seagal movie \"Under Siege \"(co-starring Tommy Lee Jones) was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and open to the public.")
245
  ```
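+ 
+ `model.predict` returns one dictionary per recognized entity. A minimal sketch of inspecting the output (the `span`, `label` and `score` keys are assumed from the SpanMarker library's default prediction format):
+ 
+ ```python
+ for entity in entities:
+     # prints the entity text, its fine-grained label, and a confidence score
+     print(f"{entity['span']} -> {entity['label']} ({entity['score']:.2f})")
+ ```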
246
 
247
+ ### Downstream Use
248
+ You can finetune this model on your own dataset.
249
+
250
+ <details><summary>Click to expand</summary>
251
+
252
+ ```python
253
+ from datasets import load_dataset
+ from span_marker import SpanMarkerModel, Trainer
254
+
255
+ # Download from the 🤗 Hub
256
+ model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super")
257
+
258
+ # Specify a Dataset with "tokens" and "ner_tags" columns
259
+ dataset = load_dataset("conll2003") # For example CoNLL2003
260
+
261
+ # Initialize a Trainer using the pretrained model & dataset
262
+ trainer = Trainer(
263
+ model=model,
264
+ train_dataset=dataset["train"],
265
+ eval_dataset=dataset["validation"],
266
+ )
267
+ trainer.train()
268
+ trainer.save_model("tomaarsen/span-marker-xlm-roberta-base-fewnerd-fine-super-finetuned")
269
+ ```
270
+ </details>
271
+
272
+ <!--
273
+ ### Out-of-Scope Use
274
+
275
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
276
+ -->
277
+
278
+ <!--
279
+ ## Bias, Risks and Limitations
280
+
281
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
282
+ -->
283
+
284
+ <!--
285
+ ### Recommendations
286
+
287
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
288
+ -->
289
+
290
+ ## Training Details
291
+
292
+ ### Training Set Metrics
293
+ | Training set | Min | Median | Max |
294
+ |:----------------------|:----|:--------|:----|
295
+ | Sentence length | 1 | 24.4945 | 267 |
296
+ | Entities per sentence | 0 | 2.5832 | 88 |
297
+
298
+ ### Training Hyperparameters
299
+ - learning_rate: 1e-05
300
+ - train_batch_size: 16
301
+ - eval_batch_size: 16
302
+ - seed: 42
303
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
304
+ - lr_scheduler_type: linear
305
+ - lr_scheduler_warmup_ratio: 0.1
306
+ - num_epochs: 3
307
+
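+ These are standard `transformers` training arguments. A minimal sketch of wiring the hyperparameters listed above into a SpanMarker training run follows; the exact training script is not part of this card, so the dataset preprocessing and column names below are assumptions for illustration:
+ 
+ ```python
+ from datasets import load_dataset
+ from span_marker import SpanMarkerModel, Trainer
+ from transformers import TrainingArguments
+ 
+ # Assumed preprocessing: use the fine-grained FewNERD tags as the "ner_tags" column
+ dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")
+ dataset = dataset.remove_columns("ner_tags").rename_column("fine_ner_tags", "ner_tags")
+ labels = dataset["train"].features["ner_tags"].feature.names
+ 
+ model = SpanMarkerModel.from_pretrained(
+     "xlm-roberta-base",
+     labels=labels,
+     model_max_length=256,  # maximum sequence length from the model description
+     entity_max_length=8,   # maximum entity length from the model description
+ )
+ 
+ args = TrainingArguments(
+     output_dir="span-marker-xlm-roberta-base-fewnerd-fine-super",
+     learning_rate=1e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     num_train_epochs=3,
+     warmup_ratio=0.1,
+     lr_scheduler_type="linear",
+     seed=42,
+ )
+ 
+ trainer = Trainer(
+     model=model,
+     args=args,
+     train_dataset=dataset["train"],
+     eval_dataset=dataset["validation"],
+ )
+ trainer.train()
+ ```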
308
+ ### Training Results
309
+ | Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
310
+ |:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
311
+ | 0.2947 | 3000 | 0.0318 | 0.6058 | 0.5990 | 0.6024 | 0.9020 |
312
+ | 0.5893 | 6000 | 0.0266 | 0.6556 | 0.6679 | 0.6617 | 0.9173 |
313
+ | 0.8840 | 9000 | 0.0250 | 0.6691 | 0.6804 | 0.6747 | 0.9206 |
314
+ | 1.1787 | 12000 | 0.0239 | 0.6865 | 0.6761 | 0.6813 | 0.9212 |
315
+ | 1.4733 | 15000 | 0.0234 | 0.6872 | 0.6812 | 0.6842 | 0.9226 |
316
+ | 1.7680 | 18000 | 0.0231 | 0.6919 | 0.6821 | 0.6870 | 0.9227 |
317
+ | 2.0627 | 21000 | 0.0231 | 0.6909 | 0.6871 | 0.6890 | 0.9233 |
318
+ | 2.3573 | 24000 | 0.0231 | 0.6903 | 0.6875 | 0.6889 | 0.9238 |
319
+ | 2.6520 | 27000 | 0.0229 | 0.6918 | 0.6926 | 0.6922 | 0.9242 |
320
+ | 2.9467 | 30000 | 0.0228 | 0.6927 | 0.6930 | 0.6928 | 0.9243 |
321
+
322
+ ### Environmental Impact
323
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
324
+ - **Carbon Emitted**: 0.453 kg of CO2
325
+ - **Hours Used**: 3.118 hours
326
+
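+ A minimal sketch of this kind of measurement with CodeCarbon's `EmissionsTracker` (the tracker configuration actually used for this run is not recorded here; `trainer` refers to the training sketch above):
+ 
+ ```python
+ from codecarbon import EmissionsTracker
+ 
+ tracker = EmissionsTracker()       # writes an emissions.csv report by default
+ tracker.start()
+ try:
+     trainer.train()                # the training run being measured
+ finally:
+     emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-eq
+ print(f"{emissions_kg:.3f} kg CO2-eq")
+ ```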
327
+ ### Training Hardware
328
+ - **On Cloud**: No
329
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
330
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
331
+ - **RAM Size**: 31.78 GB
332
+
333
+ ### Framework Versions
334
+ - Python: 3.9.16
335
+ - SpanMarker: 1.4.1.dev
336
+ - Transformers: 4.30.0
337
+ - PyTorch: 2.0.1+cu118
338
+ - Datasets: 2.14.0
339
+ - Tokenizers: 0.13.2
340
+
341
+ ## Citation
342
+
343
+ ### BibTeX
344
+ ```
345
+ @software{Aarsen_SpanMarker,
346
+ author = {Aarsen, Tom},
347
+ license = {Apache-2.0},
348
+ title = {{SpanMarker for Named Entity Recognition}},
349
+ url = {https://github.com/tomaarsen/SpanMarkerNER}
350
+ }
351
+ ```
352
+
353
+ <!--
354
+ ## Glossary
355
+
356
+ *Clearly define terms in order to be accessible across audiences.*
357
+ -->
358
+
359
+ <!--
360
+ ## Model Card Authors
361
+
362
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
363
+ -->
364
+
365
+ <!--
366
+ ## Model Card Contact
367
+
368
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
369
+ -->
config.json CHANGED
@@ -1,5 +1,4 @@
1
  {
2
- "_name_or_path": "models\\span_marker_xlm_roberta_base_fewnerd_fine_super_2\\checkpoint-final",
3
  "architectures": [
4
  "SpanMarkerModel"
5
  ],
@@ -208,7 +207,7 @@
208
  "top_p": 1.0,
209
  "torch_dtype": null,
210
  "torchscript": false,
211
- "transformers_version": "4.28.1",
212
  "type_vocab_size": 1,
213
  "typical_p": 1.0,
214
  "use_bfloat16": false,
@@ -222,9 +221,9 @@
222
  "model_max_length": 256,
223
  "model_max_length_default": 512,
224
  "model_type": "span-marker",
225
- "span_marker_version": "1.1.2.dev",
226
  "torch_dtype": "float32",
227
  "trained_with_document_context": false,
228
- "transformers_version": "4.28.1",
229
  "vocab_size": 250004
230
  }
 
1
  {
 
2
  "architectures": [
3
  "SpanMarkerModel"
4
  ],
 
207
  "top_p": 1.0,
208
  "torch_dtype": null,
209
  "torchscript": false,
210
+ "transformers_version": "4.30.0",
211
  "type_vocab_size": 1,
212
  "typical_p": 1.0,
213
  "use_bfloat16": false,
 
221
  "model_max_length": 256,
222
  "model_max_length_default": 512,
223
  "model_type": "span-marker",
224
+ "span_marker_version": "1.4.1.dev",
225
  "torch_dtype": "float32",
226
  "trained_with_document_context": false,
227
+ "transformers_version": "4.30.0",
228
  "vocab_size": 250004
229
  }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:60fe4f5f65948af6c809d5da6a3337475d071a4ea4320f7cd3843b1bf0989d1a
3
- size 1112663221
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c04d295a6caaac64b05a9d47a497eb9326de098916570b4f75a1d2c7007524a9
3
+ size 1112666037
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7becf84dcb8c923f94254aff98d6e2f21190c901a95f5f4ec1059aa70a1215d0
3
- size 17083026
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:990bd951e6640c385b5242633997f977328dd19bd69e889f42c537e21b25dbe9
3
+ size 17083483
tokenizer_config.json CHANGED
@@ -3,7 +3,9 @@
3
  "bos_token": "<s>",
4
  "clean_up_tokenization_spaces": true,
5
  "cls_token": "<s>",
 
6
  "eos_token": "</s>",
 
7
  "mask_token": {
8
  "__type": "AddedToken",
9
  "content": "<mask>",
@@ -12,7 +14,7 @@
12
  "rstrip": false,
13
  "single_word": false
14
  },
15
- "model_max_length": 512,
16
  "pad_token": "<pad>",
17
  "sep_token": "</s>",
18
  "tokenizer_class": "XLMRobertaTokenizer",
 
3
  "bos_token": "<s>",
4
  "clean_up_tokenization_spaces": true,
5
  "cls_token": "<s>",
6
+ "entity_max_length": 8,
7
  "eos_token": "</s>",
8
+ "marker_max_length": 128,
9
  "mask_token": {
10
  "__type": "AddedToken",
11
  "content": "<mask>",
 
14
  "rstrip": false,
15
  "single_word": false
16
  },
17
+ "model_max_length": 256,
18
  "pad_token": "<pad>",
19
  "sep_token": "</s>",
20
  "tokenizer_class": "XLMRobertaTokenizer",