Save the model weights as an h5 file with model.save_weights and then reload the model with [~TFPreTrainedModel.from_pretrained]:
from transformers import TFPreTrainedModel
from tensorflow import keras
model.save_weights("some_folder/tf_model.h5")
model = TFPreTrainedModel.from_pretrained("some_folder")
Save the model with [~TFPreTrainedModel.save_pretrained] and load it again with [~TFPreTrainedModel.from_pretrained]:
from transformers import TFPreTrainedModel
model.save_pretrained("path_to/model")
model = TFPreTrainedModel.from_pretrained("path_to/model")
ImportError
Another common error you may encounter, especially if it is a newly released model, is ImportError:
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
For these error types, check to make sure you have the latest version of 🤗 Transformers installed to access the most recent models:
pip install transformers --upgrade
CUDA error: device-side assert triggered
Sometimes you may run into a generic CUDA error about an error in the device code.
RuntimeError: CUDA error: device-side assert triggered
You should try to run the code on a CPU first to get a more descriptive error message. Add the following environment variable to the beginning of your code to switch to a CPU:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
Another option is to get a better traceback from the GPU. Add the following environment variable to the beginning of your code to get the traceback to point to the source of the error:
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
Incorrect output when padding tokens aren't masked
In some cases, the output hidden_state may be incorrect if the input_ids include padding tokens. To demonstrate, load a model and tokenizer. You can access a model's pad_token_id to see its value. The pad_token_id may be None for some models, but you can always manually set it.
from transformers import AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
model.config.pad_token_id
0
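If a checkpoint has no padding token at all (common for decoder-only models like GPT-2), a frequent workaround, shown here as a hedged sketch rather than a required step, is to reuse the end-of-sequence token as the padding token:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Hedged sketch: GPT-2 ships without a padding token, so reuse its EOS token for padding
gpt2_tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
gpt2_model = AutoModelForSequenceClassification.from_pretrained("openai-community/gpt2")
if gpt2_tokenizer.pad_token is None:
    gpt2_tokenizer.pad_token = gpt2_tokenizer.eos_token
    gpt2_model.config.pad_token_id = gpt2_tokenizer.pad_token_id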
The following example shows the output without masking the padding tokens:
input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]])
output = model(input_ids)
print(output.logits)
tensor([[ 0.0082, -0.2307],
        [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>)
Here is the actual output of the second sequence:
input_ids = torch.tensor([[7592]])
output = model(input_ids)
print(output.logits)
tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
Most of the time, you should provide an attention_mask to your model to ignore the padding tokens to avoid this silent error. Now the output of the second sequence matches its actual output:
By default, the tokenizer creates an attention_mask for you based on your specific tokenizer's defaults.
attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]])
output = model(input_ids, attention_mask=attention_mask)
print(output.logits)
tensor([[ 0.0082, -0.2307],
        [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
🤗 Transformers doesn't automatically create an attention_mask to mask a padding token if it is provided because:
Some models don't have a padding token.
For some use-cases, users want a model to attend to a padding token.
ValueError: Unrecognized configuration class XYZ for this kind of AutoModel
Generally, we recommend using the [AutoModel] class to load pretrained instances of models. This class
can automatically infer and load the correct architecture from a given checkpoint based on the configuration. If you see
this ValueError when loading a model from a checkpoint, this means the Auto class couldn't find a mapping from
the configuration in the given checkpoint to the kind of model you are trying to load. Most commonly, this happens when a
checkpoint doesn't support a given task.
For instance, you'll see this error in the following example because there is no GPT2 for question answering:
from transformers import AutoProcessor, AutoModelForQuestionAnswering
processor = AutoProcessor.from_pretrained("openai-community/gpt2-medium")
model = AutoModelForQuestionAnswering.from_pretrained("openai-community/gpt2-medium")
ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig,
Export to ONNX
Deploying π€ Transformers models in production environments often requires, or can benefit from exporting the models into
a serialized format that can be loaded and executed on specialized runtimes and hardware.
🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats
such as ONNX and TFLite through its exporters module. 🤗 Optimum also provides a set of performance optimization tools to train
and run models on targeted hardware with maximum efficiency.
This guide demonstrates how you can export 🤗 Transformers models to ONNX with 🤗 Optimum. For the guide on exporting models to TFLite,
please refer to the Export to TFLite page.
Export to ONNX
ONNX (Open Neural Network eXchange) is an open standard that defines a common set of operators and a
common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an intermediate representation) which
represents the flow of data through the neural network.
By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorFlow (and vice versa).
Once exported to ONNX format, a model can be:
- optimized for inference via techniques such as graph optimization and quantization.
- run with ONNX Runtime via ORTModelForXXX classes, which follow the same AutoModel API as the one you are used to in 🤗 Transformers.
- run with optimized inference pipelines, which have the same API as the [pipeline] function in 🤗 Transformers (a short sketch follows this list).
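As a quick illustration of the last two points, here is a hedged sketch (the checkpoint name is only an example) that exports a model on the fly with an ORTModelForXXX class and runs it through the familiar pipeline API:
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
# Export the checkpoint to ONNX on the fly and run it with ONNX Runtime
checkpoint = "distilbert/distilbert-base-uncased-finetuned-sst-2-english"
ort_model = ORTModelForSequenceClassification.from_pretrained(checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
classifier = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(classifier("ONNX Runtime makes inference easy!"))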
🤗 Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come
ready-made for a number of model architectures, and are designed to be easily extendable to other architectures.
For the list of ready-made configurations, please refer to the 🤗 Optimum documentation.
There are two ways to export a 🤗 Transformers model to ONNX; here we show both:
export with 🤗 Optimum via the CLI.
export with 🤗 Optimum via optimum.onnxruntime.
Exporting a 🤗 Transformers model to ONNX with CLI
To export a 🤗 Transformers model to ONNX, first install an extra dependency:
pip install optimum[exporters]
To check out all available arguments, refer to the 🤗 Optimum docs,
or view help in command line:
optimum-cli export onnx --help
To export a model's checkpoint from the 🤗 Hub, for example, distilbert/distilbert-base-uncased-distilled-squad, run the following command:
optimum-cli export onnx --model distilbert/distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/
You should see the logs indicating progress and showing where the resulting model.onnx is saved, like this:
Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx
-[✓] ONNX model output names match reference model (start_logits, end_logits)
- Validating ONNX Model output "start_logits":
-[✓] (2, 16) matches (2, 16)
-[✓] all values close (atol: 0.0001)
- Validating ONNX Model output "end_logits":
-[✓] (2, 16) matches (2, 16)
-[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you
saved both the model's weights and tokenizer files in the same directory (local_path). When using the CLI, pass the
local_path to the model argument instead of the checkpoint name on the 🤗 Hub and provide the --task argument.
You can review the list of supported tasks in the 🤗 Optimum documentation.
If the task argument is not provided, it will default to the model architecture without any task-specific head.
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
The resulting model.onnx file can then be run on one of the many
accelerators that support the ONNX
standard. For example, we can load and run the model with ONNX
Runtime as follows:
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
outputs = model(**inputs)
The process is identical for TensorFlow checkpoints on the Hub. For instance, here's how you would
export a pure TensorFlow checkpoint from the Keras organization:
optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/
Exporting a 🤗 Transformers model to ONNX with optimum.onnxruntime
As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so:
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer
model_checkpoint = "distilbert_base_uncased_squad"
save_directory = "onnx/"
# Load a model from transformers and export it to ONNX
ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# Save the onnx model and tokenizer
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
Exporting a model for an unsupported architecture
If you wish to contribute by adding support for a model that cannot be currently exported, you should first check if it is
supported in optimum.exporters.onnx,
and if it is not, contribute to 🤗 Optimum
directly.
Exporting a model with transformers.onnx
transformers.onnx is no longer maintained; please export models with 🤗 Optimum as described above. This section will be removed in future versions.
To export a 🤗 Transformers model to ONNX with transformers.onnx, install the extra dependencies:
pip install transformers[onnx]
Use the transformers.onnx package as a Python module to export a checkpoint using a ready-made configuration:
python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/
This exports an ONNX graph of the checkpoint defined by the --model argument. Pass any checkpoint on the π€ Hub or one that's stored locally.
The resulting model.onnx file can then be run on one of the many accelerators that support the ONNX standard. For example,
load and run the model with ONNX Runtime as follows:
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
session = InferenceSession("onnx/model.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
The required output names (like ["last_hidden_state"]) can be obtained by taking a look at the ONNX configuration of
each model. For example, for DistilBERT we have:
from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig
config = DistilBertConfig()
onnx_config = DistilBertOnnxConfig(config)
print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so:
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
To export a model that's stored locally, save the model's weights and tokenizer files in the same directory (e.g. local-pt-checkpoint),
then export it to ONNX by pointing the --model argument of the transformers.onnx package to the desired directory:
python -m transformers.onnx --model=local-pt-checkpoint onnx/
Fine-tune a pretrained model
[[open-in-colab]]
There are significant benefits to using a pretrained model. It reduces computation costs and your carbon footprint, and allows you to use state-of-the-art models without having to train one from scratch. 🤗 Transformers provides access to thousands of pretrained models for a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice:
Fine-tune a pretrained model with 🤗 Transformers [Trainer].
Fine-tune a pretrained model in TensorFlow with Keras.
Fine-tune a pretrained model in native PyTorch.
Prepare a dataset
Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test!
Begin by loading the Yelp Reviews dataset:
from datasets import load_dataset
dataset = load_dataset("yelp_review_full")
dataset["train"][100]
{'label': 0,
'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularlythat takes something special!\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \"serving off their orders\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
As you now know, you need a tokenizer to process the text and include a padding and truncation strategy to handle any variable sequence lengths. To process your dataset in one step, use the 🤗 Datasets map method to apply a preprocessing function over the entire dataset:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
If you like, you can create a smaller subset of the full dataset to fine-tune on to reduce the time it takes:
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
Train
At this point, you should follow the section corresponding to the framework you want to use. You can use the links
in the right sidebar to jump to the one you want - and if you want to hide all of the content for a given framework,
just use the button at the top-right of that framework's block!
Train with PyTorch Trainer
🤗 Transformers provides a [Trainer] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [Trainer] API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision.
Start by loading your model and specifying the number of expected labels. From the Yelp Review dataset card, you know there are five labels:
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5)
You will see a warning about some of the pretrained weights not being used and some weights being randomly
initialized. Don't worry, this is completely normal! The pretrained head of the BERT model is discarded, and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it.
Training hyperparameters
Next, create a [TrainingArguments] class which contains all the hyperparameters you can tune as well as flags for activating different training options. For this tutorial you can start with the default training hyperparameters, but feel free to experiment with these to find your optimal settings.
Specify where to save the checkpoints from your training:
from transformers import TrainingArguments
training_args = TrainingArguments(output_dir="test_trainer")
Evaluate
[Trainer] does not automatically evaluate model performance during training. You'll need to pass [Trainer] a function to compute and report metrics. The 🤗 Evaluate library provides a simple accuracy function you can load with the [evaluate.load] function (see this quicktour for more information):
import numpy as np
import evaluate
metric = evaluate.load("accuracy")
Call [~evaluate.compute] on metric to calculate the accuracy of your predictions. Before passing your predictions to compute, you need to convert the logits to predictions (remember all 🤗 Transformers models return logits):
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
If you'd like to monitor your evaluation metrics during fine-tuning, specify the evaluation_strategy parameter in your training arguments to report the evaluation metric at the end of each epoch:
from transformers import TrainingArguments, Trainer
training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
Trainer
Create a [Trainer] object with your model, training arguments, training and test datasets, and evaluation function:
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
Then fine-tune your model by calling [~transformers.Trainer.train]:
trainer.train()
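After training, you can also run a standalone evaluation pass; as a small hedged follow-up (it assumes the Trainer defined above), [~transformers.Trainer.evaluate] returns the evaluation loss together with the metrics from your compute_metrics function:
# Evaluate on small_eval_dataset and print the metrics dict (e.g. eval_loss, eval_accuracy)
metrics = trainer.evaluate()
print(metrics)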
Train a TensorFlow model with Keras
You can also train 🤗 Transformers models in TensorFlow with the Keras API!
Loading data for Keras
When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that
Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras.
Let's try that first before we do anything more complicated.
First, load a dataset. We'll use the CoLA dataset from the GLUE benchmark,
since it's a simple binary text classification task, and just take the training split for now.
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # Just take the training split for now
Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0s and 1s,
so we can just convert that directly to a NumPy array without tokenization!
from transformers import AutoTokenizer
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)
labels = np.array(dataset["label"])  # Label is already an array of 0s and 1s
Finally, load, compile, and fit the model. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))  # No loss argument!
model.fit(tokenized_data, labels)
You don't have to pass a loss argument to your models when you compile() them! Hugging Face models automatically
choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always
override this by specifying a loss yourself if you want to!
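For example, here is a hedged sketch of overriding the default loss; since the model outputs logits, pick a loss with from_logits=True:
import tensorflow as tf
# Override the built-in loss explicitly; the model returns logits, so set from_logits=True
model.compile(
    optimizer=Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)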
This approach works great for smaller datasets, but for larger datasets, you might find it starts to become a problem. Why?
Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn't handle
"jagged" arrays, so every tokenized sample would have to be padded to the length of the longest sample in the whole
dataset. That's going to make your array even bigger, and all those padding tokens will slow down training too!
Loading data as a tf.data.Dataset
If you want to avoid slowing down training, you can load your data as a tf.data.Dataset instead. Although you can write your own
tf.data pipeline if you want, we have two convenience methods for doing this:
[~TFPreTrainedModel.prepare_tf_dataset]: This is the method we recommend in most cases. Because it is a method
on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and
discard the others to make a simpler, more performant dataset.
[~datasets.Dataset.to_tf_dataset]: This method is more low-level, and is useful when you want to exactly control how
your dataset is created, by specifying exactly which columns and label_cols to include (a short sketch follows this list).
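For reference, a hedged sketch of the lower-level to_tf_dataset route could look like the following; it assumes the tokenized columns have already been added (as described next) and uses example column names and a padding collator:
from transformers import DataCollatorWithPadding
# Hedged sketch: build the tf.data.Dataset directly, choosing columns and labels yourself
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=data_collator,
)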
Before you can use [~TFPreTrainedModel.prepare_tf_dataset], you will need to add the tokenizer outputs to your dataset as columns, as shown in
the following code sample:
def tokenize_dataset(data):
    # Keys of the returned dictionary will be added to the dataset as columns
    return tokenizer(data["text"])
dataset = dataset.map(tokenize_dataset)
Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage! Once the
columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly
reduces the number of padding tokens compared to padding the entire dataset.
tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
Note that in the code sample above, you need to pass the tokenizer to prepare_tf_dataset so it can correctly pad batches as they're loaded.
If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument.
If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language
modelling), you can use the collate_fn argument instead to pass a function that will be called to transform the
list of samples into a batch and apply any preprocessing you want. See our
examples or
notebooks to see this approach in action.
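As a hedged sketch of the collate_fn route (it assumes a masked language modelling setup with a masked-LM model rather than the classification model used above):
from transformers import DataCollatorForLanguageModeling
# Hedged sketch: randomly mask tokens in each batch for masked language modelling
mlm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="np")
tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, collate_fn=mlm_collator)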
Once you've created a tf.data.Dataset, you can compile and fit the model as before:
model.compile(optimizer=Adam(3e-5)) # No loss argument!
model.fit(tf_dataset)
Train in native PyTorch
[Trainer] takes care of the training loop and allows you to fine-tune a model in a single line of code. For users who prefer to write their own training loop, you can also fine-tune a 🤗 Transformers model in native PyTorch.
At this point, you may need to restart your notebook or execute the following code to free some memory:
del model
del trainer
torch.cuda.empty_cache()
Next, manually postprocess tokenized_datasets to prepare it for training.
Remove the text column because the model does not accept raw text as an input:
tokenized_datasets = tokenized_datasets.remove_columns(["text"])
Rename the label column to labels because the model expects the argument to be named labels:
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
Set the format of the dataset to return PyTorch tensors instead of lists:
tokenized_datasets.set_format("torch")
Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning:
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
DataLoader
Create a DataLoader for your training and test datasets so you can iterate over batches of data:
from torch.utils.data import DataLoader
train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
Load your model with the number of expected labels:
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5)
Optimizer and learning rate scheduler
Create an optimizer and learning rate scheduler to fine-tune the model. Let's use the AdamW optimizer from PyTorch:
from torch.optim import AdamW
optimizer = AdamW(model.parameters(), lr=5e-5)
Create the default learning rate scheduler from [Trainer]:
from transformers import get_scheduler
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
Lastly, specify device to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes.
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
Get free access to a cloud GPU if you don't have one with a hosted notebook like Colaboratory or SageMaker StudioLab.
Great, now you are ready to train! 🥳
Training loop
To keep track of your training progress, use the tqdm library to add a progress bar over the number of training steps:
from tqdm.auto import tqdm
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
Evaluate
Just like how you added an evaluation function to [Trainer], you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you'll accumulate all the batches with [~evaluate.add_batch] and calculate the metric at the very end.
import evaluate
metric = evaluate.load("accuracy")
model.eval()
for batch in eval_dataloader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        outputs = model(**batch)
    logits = outputs.logits
    predictions = torch.argmax(logits, dim=-1)
    metric.add_batch(predictions=predictions, references=batch["labels"])
metric.compute()
Additional resources
For more fine-tuning examples, refer to:
🤗 Transformers Examples includes scripts
to train common NLP tasks in PyTorch and TensorFlow.
🤗 Transformers Notebooks contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow.
How 🤗 Transformers solve tasks
In What 🤗 Transformers can do, you learned about natural language processing (NLP), speech and audio, computer vision tasks, and some important applications of them. This page will look closely at how models solve these tasks and explain what's happening under the hood. There are many ways to solve a given task; some models may implement certain techniques or even approach the task from a new angle, but for Transformer models, the general idea is the same. Owing to its flexible architecture, most models are a variant of an encoder, decoder, or encoder-decoder structure. In addition to Transformer models, our library also has several convolutional neural networks (CNNs), which are still used today for computer vision tasks. We'll also explain how a modern CNN works.
To explain how tasks are solved, we'll walk through what goes on inside the model to output useful predictions.
Wav2Vec2 for audio classification and automatic speech recognition (ASR)
Vision Transformer (ViT) and ConvNeXT for image classification
DETR for object detection
Mask2Former for image segmentation
GLPN for depth estimation
BERT for NLP tasks like text classification, token classification and question answering that use an encoder
GPT2 for NLP tasks like text generation that use a decoder
BART for NLP tasks like summarization and translation that use an encoder-decoder
Before you go further, it is good to have some basic knowledge of the original Transformer architecture. Knowing how encoders, decoders, and attention work will aid you in understanding how different Transformer models work. If you're just getting started or need a refresher, check out our course for more information!
Speech and audio
Wav2Vec2 is a self-supervised model pretrained on unlabeled speech data and finetuned on labeled data for audio classification and automatic speech recognition.
This model has four main components:
A feature encoder takes the raw audio waveform, normalizes it to zero mean and unit variance, and converts it into a sequence of feature vectors that are each 20ms long.
Waveforms are continuous by nature, so they can't be divided into separate units like a sequence of text can be split into words. That's why the feature vectors are passed to a quantization module, which aims to learn discrete speech units. The speech unit is chosen from a collection of codewords, known as a codebook (you can think of this as the vocabulary). From the codebook, the vector, or speech unit, that best represents the continuous audio input is chosen and forwarded through the model.
About half of the feature vectors are randomly masked, and the masked feature vector is fed to a context network, which is a Transformer encoder that also adds relative positional embeddings.
The pretraining objective of the context network is a contrastive task. The model has to predict the true quantized speech representation of the masked prediction from a set of false ones, encouraging the model to find the most similar context vector and quantized speech unit (the target label).
Now that wav2vec2 is pretrained, you can finetune it on your data for audio classification or automatic speech recognition!
Audio classification
To use the pretrained model for audio classification, add a sequence classification head on top of the base Wav2Vec2 model. The classification head is a linear layer that accepts the encoder's hidden states. The hidden states represent the learned features from each audio frame which can have varying lengths. To create one vector of fixed-length, the hidden states are pooled first and then transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and target to find the most likely class.
Ready to try your hand at audio classification? Check out our complete audio classification guide to learn how to finetune Wav2Vec2 and use it for inference!
Automatic speech recognition
To use the pretrained model for automatic speech recognition, add a language modeling head on top of the base Wav2Vec2 model for connectionist temporal classification (CTC). The language modeling head is a linear layer that accepts the encoder's hidden states and transforms them into logits. Each logit represents a token class (the number of tokens comes from the task vocabulary). The CTC loss is calculated between the logits and targets to find the most likely sequence of tokens, which are then decoded into a transcription.
Ready to try your hand at automatic speech recognition? Check out our complete automatic speech recognition guide to learn how to finetune Wav2Vec2 and use it for inference!
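If you just want to see a pretrained checkpoint in action before finetuning, a hedged sketch with the pipeline API (the checkpoint name and audio path are only examples) could look like this:
from transformers import pipeline
# Hedged sketch: transcribe an audio file with a pretrained Wav2Vec2 CTC checkpoint
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
print(asr("path/to/audio.wav")["text"])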
Computer vision
There are two ways to approach computer vision tasks:
Split an image into a sequence of patches and process them in parallel with a Transformer.
Use a modern CNN, like ConvNeXT, which relies on convolutional layers but adopts modern network designs.
A third approach mixes Transformers with convolutions (for example, Convolutional Vision Transformer or LeViT). We won't discuss those because they just combine the two approaches we examine here.
ViT and ConvNeXT are commonly used for image classification, but for other vision tasks like object detection, segmentation, and depth estimation, we'll look at DETR, Mask2Former and GLPN, respectively; these models are better suited for those tasks.
Image classification
ViT and ConvNeXT can both be used for image classification; the main difference is that ViT uses an attention mechanism while ConvNeXT uses convolutions.
Transformer
ViT replaces convolutions entirely with a pure Transformer architecture. If you're familiar with the original Transformer, then you're already most of the way toward understanding ViT.
The main change ViT introduced was in how images are fed to a Transformer:
An image is split into square non-overlapping patches, each of which gets turned into a vector or patch embedding. The patch embeddings are generated from a convolutional 2D layer which creates the proper input dimensions (which for a base Transformer is 768 values for each patch embedding). If you had a 224x224 pixel image, you could split it into 196 16x16 image patches. Just like how text is tokenized into words, an image is "tokenized" into a sequence of patches (a rough sketch of this step follows this list).
A learnable embedding - a special [CLS] token - is added to the beginning of the patch embeddings just like BERT. The final hidden state of the [CLS] token is used as the input to the attached classification head; other outputs are ignored. This token helps the model learn how to encode a representation of the image.
The last thing to add to the patch and learnable embeddings are the position embeddings because the model doesn't know how the image patches are ordered. The position embeddings are also learnable and have the same size as the patch embeddings. Finally, all of the embeddings are passed to the Transformer encoder.
The output, specifically only the output with the [CLS] token, is passed to a multilayer perceptron head (MLP). ViT's pretraining objective is simply classification. Like other classification heads, the MLP head converts the output into logits over the class labels and calculates the cross-entropy loss to find the most likely class.
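As a rough, hedged sketch (not the actual 🤗 Transformers implementation), the patch embedding step described above can be written as a single strided convolution:
import torch
from torch import nn
# A 16x16 convolution with stride 16 turns a 224x224 RGB image into 14x14 = 196 patch embeddings of size 768
patch_embed = nn.Conv2d(in_channels=3, out_channels=768, kernel_size=16, stride=16)
pixel_values = torch.randn(1, 3, 224, 224)             # (batch, channels, height, width)
patches = patch_embed(pixel_values)                    # (1, 768, 14, 14)
patch_embeddings = patches.flatten(2).transpose(1, 2)  # (1, 196, 768)
print(patch_embeddings.shape)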
Ready to try your hand at image classification? Check out our complete image classification guide to learn how to finetune ViT and use it for inference!
CNN
This section briefly explains convolutions, but it'd be helpful to have a prior understanding of how they change an image's shape and size. If you're unfamiliar with convolutions, check out the Convolution Neural Networks chapter from the fastai book!
ConvNeXT is a CNN architecture that adopts new and modern network designs to improve performance. However, convolutions are still at the core of the model. From a high-level perspective, a convolution is an operation where a smaller matrix (kernel) is multiplied by a small window of the image pixels. It computes some features from it, such as a particular texture or curvature of a line. Then it slides over to the next window of pixels; the distance the convolution travels is known as the stride.
A basic convolution without padding or stride, taken from A guide to convolution arithmetic for deep learning.
You can feed this output to another convolutional layer, and with each successive layer, the network learns more complex and abstract things like hotdogs or rockets. Between convolutional layers, it is common to add a pooling layer to reduce dimensionality and make the model more robust to variations of a feature's position.
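As a minimal, hedged illustration of kernel size and stride (the sizes here are arbitrary):
import torch
from torch import nn
# A 3x3 kernel sliding with stride 1 over a 5x5 single-channel "image" produces a 3x3 feature map (no padding)
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1)
image = torch.randn(1, 1, 5, 5)  # (batch, channels, height, width)
print(conv(image).shape)         # torch.Size([1, 1, 3, 3])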
ConvNeXT modernizes a CNN in five ways:
Change the number of blocks in each stage and "patchify" an image with a larger stride and corresponding kernel size. The non-overlapping sliding window makes this patchifying strategy similar to how ViT splits an image into patches.
A bottleneck layer shrinks the number of channels and then restores it because it is faster to do a 1x1 convolution, and you can increase the depth. An inverted bottleneck does the opposite by expanding the number of channels and shrinking them, which is more memory efficient.
Replace the typical 3x3 convolutional layer in the bottleneck layer with depthwise convolution, which applies a convolution to each input channel separately and then stacks them back together at the end. This widens the network width for improved performance.
ViT has a global receptive field which means it can see more of an image at once thanks to its attention mechanism. ConvNeXT attempts to replicate this effect by increasing the kernel size to 7x7.
ConvNeXT also makes several layer design changes that imitate Transformer models. There are fewer activation and normalization layers, the activation function is switched to GELU instead of ReLU, and it uses LayerNorm instead of BatchNorm.
The output from the convolution blocks is passed to a classification head which converts the outputs into logits and calculates the cross-entropy loss to find the most likely label.
Object detection
DETR, DEtection TRansformer, is an end-to-end object detection model that combines a CNN with a Transformer encoder-decoder.
A pretrained CNN backbone takes an image, represented by its pixel values, and creates a low-resolution feature map of it. A 1x1 convolution is applied to the feature map to reduce dimensionality and it creates a new feature map with a high-level image representation. Since the Transformer is a sequential model, the feature map is flattened into a sequence of feature vectors that are combined with positional embeddings.
The feature vectors are passed to the encoder, which learns the image representations using its attention layers. Next, the encoder hidden states are combined with object queries in the decoder. Object queries are learned embeddings that focus on the different regions of an image, and they're updated as they progress through each attention layer. The decoder hidden states are passed to a feedforward network that predicts the bounding box coordinates and class label for each object query, or no object if there isn't one.
DETR decodes each object query in parallel to output N final predictions, where N is the number of queries. Unlike a typical autoregressive model that predicts one element at a time, object detection is a set prediction task (bounding box, class label) that makes N predictions in a single pass.
DETR uses a bipartite matching loss during training to compare a fixed number of predictions with a fixed set of ground truth labels. If there are fewer ground truth labels in the set of N labels, then they're padded with a no object class. This loss function encourages DETR to find a one-to-one assignment between the predictions and ground truth labels. If either the bounding boxes or class labels aren't correct, a loss is incurred. Likewise, if DETR predicts an object that doesn't exist, it is penalized. This encourages DETR to find other objects in an image instead of focusing on one really prominent object.
An object detection head is added on top of DETR to find the class label and the coordinates of the bounding box. There are two components to the object detection head: a linear layer to transform the decoder hidden states into logits over the class labels, and an MLP to predict the bounding box.
Ready to try your hand at object detection? Check out our complete object detection guide to learn how to finetune DETR and use it for inference!
Image segmentation
Mask2Former is a universal architecture for solving all types of image segmentation tasks. Traditional segmentation models are typically tailored towards a particular subtask of image segmentation, like instance, semantic or panoptic segmentation. Mask2Former frames each of those tasks as a mask classification problem. Mask classification groups pixels into N segments, and predicts N masks and their corresponding class label for a given image. We'll explain how Mask2Former works in this section, and then you can try finetuning SegFormer at the end.
There are three main components to Mask2Former:
A Swin backbone accepts an image and creates a low-resolution image feature map from 3 consecutive 3x3 convolutions.
The feature map is passed to a pixel decoder which gradually upsamples the low-resolution features into high-resolution per-pixel embeddings. The pixel decoder actually generates multi-scale features (contains both low- and high-resolution features) with resolutions 1/32, 1/16, and 1/8th of the original image.
Each of these feature maps of differing scales is fed successively to one Transformer decoder layer at a time in order to capture small objects from the high-resolution features. The key to Mask2Former is the masked attention mechanism in the decoder. Unlike cross-attention which can attend to the entire image, masked attention only focuses on a certain area of the image. This is faster and leads to better performance because the local features of an image are enough for the model to learn from.
Like DETR, Mask2Former also uses learned object queries and combines them with the image features from the pixel decoder to make a set prediction (class label, mask prediction). The decoder hidden states are passed into a linear layer and transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and class label to find the most likely one.
The mask predictions are generated by combining the pixel-embeddings with the final decoder hidden states. The sigmoid cross-entropy and dice loss is calculated between the logits and the ground truth mask to find the most likely mask.
Ready to try your hand at image segmentation? Check out our complete image segmentation guide to learn how to finetune SegFormer and use it for inference!
Depth estimation
GLPN, Global-Local Path Network, is a Transformer for depth estimation that combines a SegFormer encoder with a lightweight decoder.
Like ViT, an image is split into a sequence of patches, except these image patches are smaller. This is better for dense prediction tasks like segmentation or depth estimation. The image patches are transformed into patch embeddings (see the image classification section for more details about how patch embeddings are created), which are fed to the encoder.
The encoder accepts the patch embeddings, and passes them through several encoder blocks. Each block consists of attention and Mix-FFN layers. The purpose of the latter is to provide positional information. At the end of each encoder block is a patch merging layer for creating hierarchical representations. The features of each group of neighboring patches are concatenated, and a linear layer is applied to the concatenated features to reduce the number of patches to a resolution of 1/4. This becomes the input to the next encoder block, where this whole process is repeated until you have image features with resolutions of 1/8, 1/16, and 1/32.