Update README.md
README.md CHANGED
@@ -80,7 +80,9 @@ We trained various models with binary-cross entropy loss and evaluated them on t
 | Flan-T5-base | 96.85 | 71.10 | 75.71 | 67.07 |
 
 ### Recommended reading:
-[Finally, a decent multi-label classification benchmark is created: a prominent zero-shot dataset.](https://medium.com/p/4d90c9e1c718)
+- Check the general overview of the dataset on Medium - [Finally, a decent multi-label classification benchmark is created: a prominent zero-shot dataset.](https://medium.com/p/4d90c9e1c718)
+
+- Try to train your own model on the dataset - [Multi-Label Classification Model From Scratch: Step-by-Step Tutorial](https://huggingface.co/blog/Valerii-Knowledgator/multi-label-classification)
 
 ### Feedback
 We value your input! Share your feedback and suggestions to help us improve our models and datasets.