LLMs-in-the-loop Part-1: Expert Small AI Models for Bio-Medical Text Translation
Abstract
Machine translation is indispensable in healthcare for enabling the global dissemination of medical knowledge across languages. However, complex medical terminology poses unique challenges to achieving adequate translation quality and accuracy. This study introduces a novel "LLMs-in-the-loop" approach to develop supervised neural machine translation models optimized specifically for medical texts. While large language models (LLMs) have demonstrated powerful capabilities, this research shows that small, specialized models trained on high-quality in-domain (mostly synthetic) data can outperform even vastly larger LLMs. Custom parallel corpora in six languages were compiled from scientific articles, synthetically generated clinical documents, and medical texts. Our LLMs-in-the-loop methodology employs synthetic data generation, rigorous evaluation, and agent orchestration to enhance performance. We developed small medical translation models using the MarianMT base model. We introduce a new medical translation test dataset to standardize evaluation in this domain. Assessed using BLEU, METEOR, ROUGE, and BERT scores on this test set, our MarianMT-based models outperform Google Translate, DeepL, and GPT-4-Turbo. Results demonstrate that our LLMs-in-the-loop approach, combined with fine-tuning on high-quality, domain-specific data, enables specialized models to outperform general-purpose and some larger systems. This research, part of a broader series on expert small models, paves the way for future healthcare-related AI developments, including deidentification and bio-medical entity extraction models. Our study underscores the potential of tailored neural translation models and the LLMs-in-the-loop methodology to advance the field through improved data generation, evaluation, agent orchestration, and modeling techniques.
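To make the described pipeline concrete, below is a minimal sketch of translating a medical sentence with a MarianMT checkpoint and scoring the output with two of the metrics reported in the abstract (BLEU and BERTScore), using the Hugging Face `transformers` and `evaluate` libraries. The checkpoint name, language pair, and example sentences are illustrative assumptions, not the paper's released models or test data.

```python
# Sketch only: the checkpoint and sentences are placeholders, not the paper's artifacts.
from transformers import MarianMTModel, MarianTokenizer
import evaluate

model_name = "Helsinki-NLP/opus-mt-en-de"  # swap in a fine-tuned medical checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

source = ["The patient presented with acute myocardial infarction."]
reference = [["Der Patient stellte sich mit einem akuten Myokardinfarkt vor."]]

# Translate the source sentence
batch = tokenizer(source, return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=128)
hypothesis = tokenizer.batch_decode(generated, skip_special_tokens=True)

# Score the hypothesis against the reference
bleu = evaluate.load("sacrebleu").compute(predictions=hypothesis, references=reference)
bert = evaluate.load("bertscore").compute(
    predictions=hypothesis, references=[r[0] for r in reference], lang="de"
)
print(hypothesis[0])
print(f"BLEU: {bleu['score']:.1f}  BERTScore F1: {bert['f1'][0]:.3f}")
```

The same scoring loop can be run over a full medical test set to compare a fine-tuned small model against general-purpose systems, which is the kind of evaluation the abstract describes.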
Community
This research, carried out in healthcare, a very important field, is very exciting and promising. Good job.
Yes, exactly. Thanks for the feedback.
This study is a significant leap in healthcare, showcasing the essential role of machine translation in sharing medical knowledge globally. By using a novel "LLMs-in-the-loop" approach, the research shows how smaller, specialized models can outperform larger ones in translating complex medical texts. The creation of high-quality, domain-specific datasets and rigorous evaluation metrics underscores the potential of tailored neural translation models to advance AI in healthcare. Great work!
Expert small AI models solve complex problems with the help of LLMs. Thanks!