LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models
Abstract
Efficient fine-tuning is vital for adapting large language models (LLMs) to downstream tasks, but implementing these methods across different models requires non-trivial effort. We present LlamaFactory, a unified framework that integrates a suite of cutting-edge efficient training methods. It allows users to flexibly customize the fine-tuning of 100+ LLMs without writing code, through the built-in web UI, LlamaBoard. We empirically validate the efficiency and effectiveness of our framework on language modeling and text generation tasks. The framework has been released at https://github.com/hiyouga/LLaMA-Factory and has already received over 13,000 stars and 1,600 forks.
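For readers who want a concrete starting point, below is a minimal sketch of a LoRA supervised fine-tuning run with the framework's training script. Flag names and example values follow the paper-era README of the repository; they may differ across versions, so treat this as illustrative rather than definitive.

```bash
# Sketch of a LoRA SFT run (flags follow the LLaMA-Factory README of this
# paper's era; exact names and defaults may vary between versions).
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir saves/llama2-7b-lora-sft \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16

# LlamaBoard, the no-code web UI mentioned in the abstract:
CUDA_VISIBLE_DEVICES=0 python src/train_web.py
```

The same run can be configured entirely from LlamaBoard without touching the command line, which is the no-code path the abstract refers to.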
Community
Impressive work 🔥 The demo is user-friendly and supports Chinese, English, and Russian: https://huggingface.co/spaces/hiyouga/LLaMA-Board
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models (2024)
- Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning (2024)
- Fine-tuning Large Language Models for Domain-specific Machine Translation (2024)
- BitDelta: Your Fine-Tune May Only Be Worth One Bit (2024)
- Airavata: Introducing Hindi Instruction-tuned LLM (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
Models citing this paper: 10
Datasets citing this paper: 0