bhadresh-savani
committed
Commit 967a3d1 • 1 Parent(s): 1999af3
Update README.md
README.md CHANGED
@@ -15,7 +15,9 @@ metrics:
 # Distilbert-base-uncased-emotion
 
 ## Model description:
-
+[Distilbert](https://arxiv.org/abs/1910.01108) is created with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language understanding. It is smaller and faster than BERT and other BERT-based models.
+
+[Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) finetuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
 ```
 learning rate 2e-5,
 batch size 64,
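The diff records only the hyperparameters, not the training script itself. As a rough, unofficial sketch, a run like the one described could look as follows with the HuggingFace Trainer. Only the learning rate (2e-5) and batch size (64) come from the README; the `emotion` dataset id, the six-label head, the epoch count, and the output directory are assumptions for illustration.

```python
# Minimal sketch of the fine-tuning run described in the README diff.
# learning_rate and batch size are from the README; everything else is assumed.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed dataset id; the README only says "the emotion dataset".
dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

dataset = dataset.map(tokenize, batched=True)

# The emotion dataset has six classes; num_labels=6 is inferred, not stated in the diff.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-emotion",  # assumed output path
    learning_rate=2e-5,                  # from the README
    per_device_train_batch_size=64,      # from the README
    per_device_eval_batch_size=64,
    num_train_epochs=3,                  # assumed; not stated in the diff
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```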