Introduction

In Chapter 2 we explored how to use tokenizers and pretrained models to make predictions. But what if you want to fine-tune a pretrained model for your own dataset? That’s the topic of this chapter! You will learn:

  • How to prepare a large dataset from the Hub
  • How to use the high-level Trainer API to fine-tune a model (see the first sketch after this list)
  • How to use a custom training loop
  • How to leverage the 🤗 Accelerate library to easily run that custom training loop on any distributed setup (see the second sketch after this list)
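
To give a feel for the first two points, here is a minimal sketch of loading a dataset from the Hub and fine-tuning with the Trainer API. It assumes the GLUE MRPC dataset and the bert-base-uncased checkpoint purely as stand-ins; the chapter may use different ones.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Download a dataset from the Hub (MRPC is an assumption for illustration)
raw_datasets = load_dataset("glue", "mrpc")

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)


def tokenize_function(example):
    # Tokenize each sentence pair; padding is applied per batch later
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)


tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Trainer handles batching, optimization, and the training loop for us
training_args = TrainingArguments("test-trainer")
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```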
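And for the last two points, here is a hedged sketch of a custom training loop run through 🤗 Accelerate. It continues from the previous sketch (reusing model, tokenizer, and tokenized_datasets); the chapter walks through each of these steps in detail.

```python
from torch.optim import AdamW
from torch.utils.data import DataLoader
from accelerate import Accelerator
from transformers import DataCollatorWithPadding

# Put the dataset into a format PyTorch dataloaders can consume
# (column names below match the MRPC stand-in from the previous sketch)
tokenized_datasets = tokenized_datasets.remove_columns(["sentence1", "sentence2", "idx"])
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets.set_format("torch")

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
train_dataloader = DataLoader(
    tokenized_datasets["train"], shuffle=True, batch_size=8, collate_fn=data_collator
)

accelerator = Accelerator()
optimizer = AdamW(model.parameters(), lr=5e-5)

# prepare() handles device placement and distributed wrapping, so the same
# loop runs unchanged on one GPU, several GPUs, or TPUs
model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

model.train()
for epoch in range(3):
    for batch in train_dataloader:
        outputs = model(**batch)
        accelerator.backward(outputs.loss)  # replaces outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```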

In order to upload your trained checkpoints to the Hugging Face Hub, you will need a huggingface.co account: create an account if you don't already have one.