Introduction
In Chapter 2 we explored how to use tokenizers and pretrained models to make predictions. But what if you want to fine-tune a pretrained model for your own dataset? That’s the topic of this chapter! You will learn:
- How to prepare a large dataset from the Hub
- How to use the high-level `Trainer` API to fine-tune a model
- How to use a custom training loop
- How to leverage the 🤗 Accelerate library to easily run that custom training loop on any distributed setup
To upload your trained checkpoints to the Hugging Face Hub, you will need a huggingface.co account: if you don't already have one, create an account before starting.