Update README.md
README.md
CHANGED
@@ -28,7 +28,8 @@ probably proofread and complete it, then remove this comment. -->
 
 # i-be-snek/distilbert-base-uncased-finetuned-ner-exp_B
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the English subset of the NER [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd) dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the English subset of the NER [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd) dataset.
+
 It achieves the following results on the evaluation set:
 - Train Loss: 0.0084
 - Validation Loss: 0.0587
@@ -42,11 +43,10 @@ All scripts for training can be found in this [GitHub repository](https://github
 
 ## Model description
 
-
-
-
-
-More information needed
+[distilbert-base-uncased-finetuned-ner-exp_B](https://huggingface.co/i-be-snek/distilbert-base-uncased-finetuned-ner-exp_B) is a Named Entity Recognition model fine-tuned from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
+This model is uncased, so it makes no distinction between "sarah" and "Sarah".
+The dataset it was fine-tuned on was modified so that only five entity types are considered: Person (PER), Animal (ANIM), Organization (ORG), Location (LOC), and Disease (DIS).
+All other named entities not in this list were replaced with the '0' label ID, and token IDs were also re-indexed.
 
 ## Training and evaluation data
 
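The label filtering and re-indexing described in the Model description lines is not spelled out in the diff itself; below is a minimal sketch of how such a step could look with the `datasets` library, assuming the multinerd rows expose `lang` and `ner_tags` columns and that "re-indexed" refers to the NER tag IDs attached to each token. The tag inventory and the `build_remap` helper are illustrative assumptions, not code from the linked GitHub repository.

```python
from datasets import load_dataset

# Entity types kept in the modified dataset; every other tag collapses to 0 ("O").
KEEP_TYPES = {"PER", "ANIM", "ORG", "LOC", "DIS"}

def build_remap(old_names):
    """Map old tag IDs to contiguously re-indexed IDs, sending dropped types to 0."""
    new_names, remap = ["O"], {}
    for old_id, name in enumerate(old_names):
        if name != "O" and name.split("-")[-1] in KEEP_TYPES:
            if name not in new_names:
                new_names.append(name)
            remap[old_id] = new_names.index(name)
        else:
            remap[old_id] = 0  # "O" and dropped entity types both map to 0
    return remap, new_names

# Illustrative tag inventory; the real multinerd scheme defines more classes.
old_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC",
             "B-ANIM", "I-ANIM", "B-DIS", "I-DIS", "B-MEDIA", "I-MEDIA"]
remap, new_names = build_remap(old_names)

# Keep only English rows, then rewrite the tag sequence of every example.
dataset = load_dataset("Babelscape/multinerd", split="train")
english = dataset.filter(lambda ex: ex["lang"] == "en")
english = english.map(lambda ex: {"ner_tags": [remap[t] for t in ex["ner_tags"]]})
```

Collapsing the dropped entity types to 0 keeps them aligned with the "O" (outside) class, so the token sequence and its alignment with the tags stay unchanged.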