Update README.md
README.md CHANGED
@@ -232,21 +232,21 @@ All models trained with max length 512 and batch size 8, using the CoNLL 2002 da
 
 
 ## PAWS-X
-All models trained with max length 512 and batch size 8.
+All models trained with max length 512 and batch size 8. These numbers are surprising both for the repeated instances of 0.5765 accuracy and for the large differences in performance. However, experiments have been repeated several times and the results are consistent.
 
 <figure>
 
 | Model                                              | Accuracy |
 |----------------------------------------------------|----------|
 | bert-base-multilingual-cased                       | 0.5765   |
-| dccuchile/bert-base-spanish-wwm-cased              | 0.
+| dccuchile/bert-base-spanish-wwm-cased              | 0.8720   |
 | BSC-TeMU/roberta-base-bne                          | 0.5765   |
-| bertin-project/bertin-roberta-base-spanish         | 0.
-| bertin-project/bertin-base-random                  | 0.
-| bertin-project/bertin-base-stepwise                | 0.
-| bertin-project/bertin-base-gaussian                |
-| bertin-project/bertin-base-random-exp-512seqlen    | 0.
-| bertin-project/bertin-base-gaussian-exp-512seqlen  | **0.
+| bertin-project/bertin-roberta-base-spanish         | 0.5765   |
+| bertin-project/bertin-base-random                  | 0.8800   |
+| bertin-project/bertin-base-stepwise                | 0.8825   |
+| bertin-project/bertin-base-gaussian                | 0.8875   |
+| bertin-project/bertin-base-random-exp-512seqlen    | 0.6735   |
+| bertin-project/bertin-base-gaussian-exp-512seqlen  | **0.8965** |
 
 
 <caption>Table 5. Results for PAWS-X.</caption>
@@ -254,7 +254,6 @@ All models trained with max length 512 and batch size 8. Even though this model
 
 
 ## XNLI
-All models trained with max length 256 and batch size 32. (A set of runs with max length 512 is in progress.)
 
 <figure>
 
@@ -270,7 +269,25 @@ All models trained with max length 256 and batch size 32. (A set of runs with ma
 | bertin-project/bertin-base-gaussian-exp-512seqlen  | 0.7878   |
 
 
-<caption>Table 6. Results for XNLI.</caption>
+<caption>Table 6. Results for XNLI with sequence length 256 and batch size 32.</caption>
+</figure>
+
+
+<figure>
+
+| Model                                              | Accuracy |
+|----------------------------------------------------|----------|
+| bert-base-multilingual-cased                       | WIP      |
+| dccuchile/bert-base-spanish-wwm-cased              | WIP      |
+| BSC-TeMU/roberta-base-bne                          | WIP      |
+| bertin-project/bertin-base-random                  | WIP      |
+| bertin-project/bertin-base-stepwise                | WIP      |
+| bertin-project/bertin-base-gaussian                | WIP      |
+| bertin-project/bertin-base-random-exp-512seqlen    | 0.7799   |
+| bertin-project/bertin-base-gaussian-exp-512seqlen  | 0.7843   |
+
+
+<caption>Table 7. Results for XNLI with sequence length 512 and batch size 16.</caption>
 </figure>
 
 # Conclusions
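For context, the runs summarized in these tables are standard sequence-classification fine-tuning jobs. The sketch below shows one way to set up the PAWS-X configuration (max length 512, batch size 8) with the Hugging Face Trainer; it is illustrative only and is not taken from the project's own training scripts. The checkpoint choice, learning rate, epoch count, and output path are assumptions. The same pattern applies to XNLI by loading `load_dataset("xnli", "es")` with `num_labels=3` and the max length / batch size pairs given in Tables 6 and 7.

```python
# Hedged sketch: fine-tune a pretrained checkpoint on Spanish PAWS-X
# with max length 512 and batch size 8, as reported in Table 5.
# Learning rate, epochs, and output_dir are illustrative assumptions.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bertin-project/bertin-base-gaussian-exp-512seqlen"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# PAWS-X (Spanish) provides sentence1 / sentence2 / label columns.
dataset = load_dataset("paws-x", "es")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

args = TrainingArguments(
    output_dir="paws-x-es",            # illustrative
    per_device_train_batch_size=8,     # batch size from the table caption
    per_device_eval_batch_size=8,
    num_train_epochs=3,                # illustrative
    learning_rate=2e-5,                # illustrative
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,               # enables dynamic padding via the default collator
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate(dataset["test"]))
```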