Translation Errors Significantly Impact Low-Resource Languages in Cross-Lingual Learning
Abstract
Popular benchmarks used to evaluate cross-lingual language understanding, such as XNLI, consist of parallel versions of English evaluation sets created in multiple target languages with the help of professional translators. When creating such parallel data, it is critical to ensure high-quality translations for all target languages so that cross-lingual transfer is characterized accurately. In this work, we find that translation inconsistencies do exist and, interestingly, that they disproportionately impact low-resource languages in XNLI. To identify such inconsistencies, we propose measuring the gap in zero-shot performance between human-translated and machine-translated target text across multiple target languages; relatively large gaps are indicative of translation errors. We corroborate that translation errors exist for two target languages, Hindi and Urdu, by manually reannotating human-translated test instances in these languages and finding poor agreement with the original English labels the instances were supposed to inherit.
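The diagnostic described in the abstract can be sketched as follows: compare a model's zero-shot accuracy on the human-translated (HT) and machine-translated (MT) versions of each target-language test set, and flag languages where the gap is relatively large. This is a minimal sketch assuming per-language accuracy scores are already computed; the numbers and the flagging threshold below are hypothetical placeholders, not results from the paper.

```python
# Hypothetical zero-shot accuracies per XNLI target language.
ht_accuracy = {"fr": 0.79, "de": 0.78, "hi": 0.66, "ur": 0.62}  # human-translated test sets
mt_accuracy = {"fr": 0.78, "de": 0.77, "hi": 0.71, "ur": 0.68}  # machine-translated test sets

def translation_gap(ht, mt):
    """Per-language gap between MT and HT zero-shot accuracy.

    A large positive gap (the model does better on machine-translated
    text) suggests the human translations may contain errors or
    label-altering shifts, as the paper finds for Hindi and Urdu."""
    return {lang: mt[lang] - ht[lang] for lang in ht}

def flag_suspect_languages(gaps, threshold=0.03):
    """Languages whose MT-over-HT gap exceeds a chosen threshold.

    The 0.03 cutoff is an illustrative assumption, not the paper's."""
    return sorted(lang for lang, gap in gaps.items() if gap > threshold)

gaps = translation_gap(ht_accuracy, mt_accuracy)
print(flag_suspect_languages(gaps))  # → ['hi', 'ur'] with these placeholder numbers
```

With these made-up numbers, Hindi and Urdu are flagged because the model scores noticeably higher on machine-translated text than on the professionally translated test sets, mirroring the pattern the paper reports.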
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- xCoT: Cross-lingual Instruction Tuning for Cross-lingual Chain-of-Thought Reasoning (2024)
- Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models (2024)
- Constrained Decoding for Cross-lingual Label Projection (2024)
- Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages (2024)
- Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages (2024)