Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library comprises tokenizers for all the models. Most of the tokenizers are available in two flavors: a full python implementation and a “Fast” implementation based on the Rust library tokenizers. The “Fast” implementations allow (1) a significant speed-up, in particular when doing batched tokenization, and (2) additional methods to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token). Currently no “Fast” implementation is available for the SentencePiece-based tokenizers (for the T5, ALBERT, CamemBERT, XLMRoBERTa and XLNet models).
The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs in model inputs (see below) and for instantiating/saving python and “Fast” tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository).
PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers:
tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e. tokenizing + converting to integers),
adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…),
managing special tokens like mask, beginning-of-sentence, etc.: adding them, assigning them to attributes in the tokenizer for easy access, and making sure they are not split during tokenization.
BatchEncoding holds the output of the tokenizer’s encoding methods (encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure python tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask…). When the tokenizer is a “Fast” tokenizer (i.e. backed by the HuggingFace tokenizers library), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token).
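Example (a minimal sketch of dictionary-style access and the fast-only offsets; the checkpoint name is illustrative, and the offset_mapping key assumes a library version where fast tokenizers support return_offsets_mapping):

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
encoding = tokenizer.encode_plus("Hello world", return_offsets_mapping=True)
print(encoding['input_ids'])       # dict-style access works for both flavors
print(encoding['offset_mapping'])  # fast-only: (char_start, char_end) per token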
PreTrainedTokenizer
- class transformers.PreTrainedTokenizer(model_max_length=None, **kwargs)
Base class for all tokenizers.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers, so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).
Class attributes (overridden by derived classes):
vocab_files_names: a python dict with, as keys, the __init__ keyword name of each vocabulary file required by the model, and, as associated values, the filename (string) for saving the associated file.
pretrained_vocab_files_map: a python dict of dict, the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level keys being the short-cut-names (string) of the pretrained models, with, as associated values, the url (string) to the associated pretrained vocabulary file.
max_model_input_sizes: a python dict with, as keys, the short-cut-names (string) of the pretrained models, and, as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.
pretrained_init_configuration: a python dict with, as keys, the short-cut-names (string) of the pretrained models, and, as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.
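For illustration (a hedged sketch; the exact values depend on the library version), these class attributes look like this on a derived class such as BertTokenizer:

from transformers import BertTokenizer

print(BertTokenizer.vocab_files_names)                           # e.g. {'vocab_file': 'vocab.txt'}
print(BertTokenizer.max_model_input_sizes['bert-base-uncased'])  # e.g. 512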
- Parameters
model_max_length – (Optional) int: the maximum length in number of tokens for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained, this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided and no associated max_length can be found in max_model_input_sizes, will default to VERY_LARGE_INTEGER (int(1e30)).
padding_side – (Optional) string: the side on which the model should have padding applied. Should be selected between [‘right’, ‘left’].
model_input_names – (Optional) List[string]: the list of the forward pass inputs accepted by the model (“token_type_ids”, “attention_mask”…).
bos_token – (Optional) string: a beginning-of-sentence token. Will be associated to self.bos_token and self.bos_token_id.
eos_token – (Optional) string: an end-of-sentence token. Will be associated to self.eos_token and self.eos_token_id.
unk_token – (Optional) string: an unknown token. Will be associated to self.unk_token and self.unk_token_id.
sep_token – (Optional) string: a separation token (e.g. to separate context and query in an input sequence). Will be associated to self.sep_token and self.sep_token_id.
pad_token – (Optional) string: a padding token. Will be associated to self.pad_token and self.pad_token_id.
cls_token – (Optional) string: a classification token (e.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model). Will be associated to self.cls_token and self.cls_token_id.
mask_token – (Optional) string: a masking token (e.g. when training a model with masked-language modeling). Will be associated to self.mask_token and self.mask_token_id.
additional_special_tokens – (Optional) list: a list of additional special tokens. Adding all special tokens here ensures they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.
- add_special_tokens(special_tokens_dict)
Add a dictionary of special tokens (eos, pad, cls…) to the encoder and link them to class attributes. If the special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
Using add_special_tokens will ensure your special tokens can be used in several ways:
special tokens are carefully handled by the tokenizer (they are never split)
you can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (e.g. BertTokenizer’s cls_token is already registered to be ‘[CLS]’ and XLM’s is registered to be ‘</s>’).
- Parameters
special_tokens_dict – dict of string. Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens]. Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).
- Returns
Number of tokens added to the vocabulary.
Examples:
# Let's see how to add a new classification token to GPT-2
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

special_tokens_dict = {'cls_token': '<CLS>'}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects the full size of the new vocabulary,
# i.e. the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == '<CLS>'
- add_tokens(new_tokens: Union[str, List[str]]) → int
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary.
- Parameters
new_tokens – string or list of strings. Each string is a token to add. Tokens are only added if they are not already in the vocabulary.
- Returns
Number of tokens added to the vocabulary.
Examples:
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects the full size of the new vocabulary,
# i.e. the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
- batch_encode_plus(batch_text_or_text_pairs: Union[List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], List[List[int]], List[Tuple[List[int], List[int]]]], add_special_tokens: bool = True, max_length: Optional[int] = None, stride: int = 0, truncation_strategy: str = 'longest_first', pad_to_max_length: bool = False, is_pretokenized: bool = False, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_masks: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_masks: bool = False, return_offsets_mapping: bool = False, return_lengths: bool = False, **kwargs) → BatchEncoding
Returns a dictionary containing the encoded sequence or sequence pair and additional information: the mask for sequence classification and the overflowing elements if a max_length is specified.
- Parameters
batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], and for not-fast tokenizers also List[List[int]], List[Tuple[List[int], List[int]]]) – Batch of sequences or pairs of sequences to be encoded. This can be a list of string/string-sequences/int-sequences or a list of pairs of string/string-sequences/int-sequences (see details in encode_plus).
add_special_tokens (bool, optional, defaults to True) – If set to True, the sequences will be encoded with the special tokens relative to their model.
max_length (int, optional, defaults to None) – If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
truncation_strategy (str, optional, defaults to ‘longest_first’) – String selected in the following options:
‘longest_first’ (default): iteratively reduce the inputs sequence until the input is under max_length, starting from the longest one at each token (when there is a pair of input sequences)
‘only_first’: only truncate the first sequence
‘only_second’: only truncate the second sequence
‘do_not_truncate’: do not truncate (raise an error if the input sequence is longer than max_length)
pad_to_max_length (bool, optional, defaults to False) – If set to True, the returned sequences will be padded according to the model’s padding side and padding index, up to their max length. If no max length is specified, the padding is done up to the model’s max length. The tokenizer padding sides are handled by the class attribute padding_side which can be set to the following strings:
‘left’: pads on the left of the sequences
‘right’: pads on the right of the sequences
Defaults to False: no padding.
is_pretokenized (bool, defaults to False) – Set to True to indicate the input is already tokenized.
return_tensors (str, optional, defaults to None) – Can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.
return_token_type_ids (bool, optional, defaults to None) – Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_attention_masks (bool, optional, defaults to None) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_overflowing_tokens (bool, optional, defaults to False) – Set to True to return overflowing token information.
return_special_tokens_masks (bool, optional, defaults to False) – Set to True to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) – Set to True to return (char_start, char_end) for each token. If using Python’s tokenizer, this method will raise NotImplementedError; this is only available on Rust-based tokenizers inheriting from PreTrainedTokenizerFast.
return_lengths (bool, optional, defaults to False) – If set to True, the resulting dictionary will include the length of each encoded input.
**kwargs – passed to the self.tokenize() method
- Returns
A dictionary of shape:

{
    input_ids: list[List[int]],
    token_type_ids: list[List[int]] if return_token_type_ids is True (default)
    attention_mask: list[List[int]] if return_attention_mask is True (default)
    overflowing_tokens: list[List[int]] if a max_length is specified and return_overflowing_tokens is True
    num_truncated_tokens: List[int] if a max_length is specified and return_overflowing_tokens is True
    special_tokens_mask: list[List[int]] if add_special_tokens is set to True and return_special_tokens_mask is True
}

With the fields:
input_ids: list of token ids to be fed to a model
token_type_ids: list of token type ids to be fed to a model
attention_mask: list of indices specifying which tokens should be attended to by the model
overflowing_tokens: list of overflowing tokens if a max length is specified
num_truncated_tokens: number of overflowing tokens if a max_length is specified
special_tokens_mask: if adding special tokens, this is a list of [0, 1], with 0 specifying special added tokens and 1 specifying sequence tokens
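Example (a minimal usage sketch; the checkpoint name and sentences are illustrative):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
batch = tokenizer.batch_encode_plus(
    ["Hello world", "A slightly longer second sentence"],
    max_length=10,
    pad_to_max_length=True,
)
print(batch['input_ids'])       # one list of token ids per input sentence
print(batch['attention_mask'])  # 0 marks padding positions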
- build_inputs_with_special_tokens(token_ids_0: List, token_ids_1: Optional[List] = None) → List
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. This implementation (of the base class) does not add special tokens.
- static clean_up_tokenization(out_string: str) → str
Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
- convert_ids_to_tokens(ids: Union[int, List[int]], skip_special_tokens: bool = False) → Union[str, List[str]]
Converts a single index (integer) into a token (str), or a sequence of indices into a sequence of tokens, using the vocabulary and added tokens.
- Parameters
skip_special_tokens – Don’t decode special tokens (self.all_special_tokens). Default: False
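Example (an illustrative round trip between tokens and ids; the checkpoint name is illustrative):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(['hello', 'world'])
assert tokenizer.convert_ids_to_tokens(ids) == ['hello', 'world']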
- convert_tokens_to_ids(tokens)
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
- convert_tokens_to_string(tokens: List[str]) → str
Converts a sequence of tokens (strings) into a single string. The simplest way to do this is ' '.join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.
- decode(token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True) → str
Converts a sequence of ids (integers) into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces. Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
- Parameters
token_ids – list of tokenized input ids. Can be obtained using the encode or encode_plus methods.
skip_special_tokens – if set to True, will remove special tokens from the output.
clean_up_tokenization_spaces – if set to True, will clean up the tokenization spaces.
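Example (an illustrative sketch of decoding with and without special tokens):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello world")
print(tokenizer.decode(ids))                            # includes special tokens, e.g. [CLS] ... [SEP]
print(tokenizer.decode(ids, skip_special_tokens=True))  # the plain text only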
- encode(text: Union[str, List[str], List[int]], text_pair: Optional[Union[str, List[str], List[int]]] = None, add_special_tokens: bool = True, max_length: Optional[int] = None, stride: int = 0, truncation_strategy: str = 'longest_first', pad_to_max_length: bool = False, return_tensors: Optional[str] = None, **kwargs)
Converts a string into a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
- Parameters
text (str, List[str] or List[int]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional, defaults to None) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) – If set to True, the sequences will be encoded with the special tokens relative to their model.
max_length (int, optional, defaults to None) – If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary. You can set it to the maximal input size of the model with max_length = tokenizer.model_max_length.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
truncation_strategy (str, optional, defaults to ‘longest_first’) – String selected in the following options:
‘longest_first’ (default): iteratively reduce the inputs sequence until the input is under max_length, starting from the longest one at each token (when there is a pair of input sequences)
‘only_first’: only truncate the first sequence
‘only_second’: only truncate the second sequence
‘do_not_truncate’: do not truncate (raise an error if the input sequence is longer than max_length)
pad_to_max_length (bool, optional, defaults to False) – If set to True, the returned sequences will be padded according to the model’s padding side and padding index, up to their max length. If no max length is specified, the padding is done up to the model’s max length. The tokenizer padding sides are handled by the class attribute padding_side which can be set to the following strings:
‘left’: pads on the left of the sequences
‘right’: pads on the right of the sequences
Defaults to False: no padding.
return_tensors (str, optional, defaults to None) – Can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.
**kwargs – passed to the self.tokenize() method
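Example (a minimal sketch; the checkpoint name and sentences are illustrative):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello world")                       # [CLS] and [SEP] ids wrap the sentence tokens
pair_ids = tokenizer.encode("Hello world", "How are you?")  # both sequences, joined with special tokens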
- encode_plus(text: Union[str, List[str], List[int]], text_pair: Optional[Union[str, List[str], List[int]]] = None, add_special_tokens: bool = True, max_length: Optional[int] = None, stride: int = 0, truncation_strategy: str = 'longest_first', pad_to_max_length: bool = False, is_pretokenized: bool = False, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, **kwargs) → BatchEncoding
Returns a dictionary containing the encoded sequence or sequence pair and additional information: the mask for sequence classification and the overflowing elements if a max_length is specified.
- Parameters
text (str, List[str] or List[int], the latter only for not-fast tokenizers) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional, defaults to None) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) – If set to True, the sequences will be encoded with the special tokens relative to their model.
max_length (int, optional, defaults to None) – If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary. You can set it to the maximal input size of the model with max_length = tokenizer.model_max_length.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
truncation_strategy (str, optional, defaults to ‘longest_first’) – String selected in the following options:
‘longest_first’ (default): iteratively reduce the inputs sequence until the input is under max_length, starting from the longest one at each token (when there is a pair of input sequences)
‘only_first’: only truncate the first sequence
‘only_second’: only truncate the second sequence
‘do_not_truncate’: do not truncate (raise an error if the input sequence is longer than max_length)
pad_to_max_length (bool, optional, defaults to False) – If set to True, the returned sequences will be padded according to the model’s padding side and padding index, up to their max length. If no max length is specified, the padding is done up to the model’s max length. The tokenizer padding sides are handled by the class attribute padding_side which can be set to the following strings:
‘left’: pads on the left of the sequences
‘right’: pads on the right of the sequences
Defaults to False: no padding.
is_pretokenized (bool, defaults to False) – Set to True to indicate the input is already tokenized.
return_tensors (str, optional, defaults to None) – Can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.
return_token_type_ids (bool, optional, defaults to None) – Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_attention_mask (bool, optional, defaults to None) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_overflowing_tokens (bool, optional, defaults to False) – Set to True to return overflowing token information.
return_special_tokens_mask (bool, optional, defaults to False) – Set to True to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) – Set to True to return (char_start, char_end) for each token. If using Python’s tokenizer, this method will raise NotImplementedError; this is only available on fast tokenizers inheriting from PreTrainedTokenizerFast.
**kwargs – passed to the self.tokenize() method
- Returns
A dictionary of shape:

{
    input_ids: list[int],
    token_type_ids: list[int] if return_token_type_ids is True (default)
    attention_mask: list[int] if return_attention_mask is True (default)
    overflowing_tokens: list[int] if a max_length is specified and return_overflowing_tokens is True
    num_truncated_tokens: int if a max_length is specified and return_overflowing_tokens is True
    special_tokens_mask: list[int] if add_special_tokens is set to True and return_special_tokens_mask is True
}

With the fields:
input_ids: list of token ids to be fed to a model
token_type_ids: list of token type ids to be fed to a model
attention_mask: list of indices specifying which tokens should be attended to by the model
overflowing_tokens: list of overflowing tokens if a max length is specified
num_truncated_tokens: number of overflowing tokens if a max_length is specified
special_tokens_mask: if adding special tokens, this is a list of [0, 1], with 0 specifying special added tokens and 1 specifying sequence tokens
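Example (an illustrative sketch encoding a sentence pair; checkpoint and sentences are illustrative):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
enc = tokenizer.encode_plus("Hello world", "How are you?", max_length=16, pad_to_max_length=True)
print(enc['input_ids'])
print(enc['token_type_ids'])  # 0 for the first segment, 1 for the second
print(enc['attention_mask'])  # 0 marks padding positions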
- classmethod from_pretrained(*inputs, **kwargs)
Instantiate a PreTrainedTokenizer (or a derived class) from a predefined tokenizer.
- Parameters
pretrained_model_name_or_path – either:
a string with the shortcut name of a predefined tokenizer to load from cache or download, e.g. bert-base-uncased.
a string with the identifier name of a predefined tokenizer that was user-uploaded to our S3, e.g. dbmdz/bert-base-german-cased.
a path to a directory containing the vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g. ./my_model_directory/.
(not applicable to all derived classes, deprecated) a path or url to a single saved vocabulary file, if and only if the tokenizer only requires a single vocabulary file (e.g. Bert, XLNet), e.g. ./my_model_directory/vocab.txt.
cache_dir – (optional) string: Path to a directory in which the downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.
force_download – (optional) boolean, default False: Force to (re-)download the vocabulary files and override the cached versions if they exist.
resume_download – (optional) boolean, default False: Do not delete an incompletely received file. Attempt to resume the download if such a file exists.
proxies – (optional) dict, default None: A dictionary of proxy servers to use by protocol or endpoint, e.g.: {‘http’: ‘foo.bar:3128’, ‘http://hostname’: ‘foo.bar:4012’}. The proxies are used on each request.
inputs – (optional) positional arguments: will be passed to the Tokenizer __init__ method.
kwargs – (optional) keyword arguments: will be passed to the Tokenizer __init__ method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the docstring of PreTrainedTokenizer for details.
Examples:
# We can't instantiate directly the base class `PreTrainedTokenizer` so let's show our examples on a derived class: BertTokenizer

# Download vocabulary from S3 and cache.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Download vocabulary from S3 (user-uploaded) and cache.
tokenizer = BertTokenizer.from_pretrained('dbmdz/bert-base-german-cased')

# If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`)
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/')

# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')

# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', unk_token='<unk>')
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == '<unk>'
- get_special_tokens_mask(token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False) → List[int]
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.
- Parameters
token_ids_0 – list of ids (must not contain special tokens)
token_ids_1 – Optional list of ids (must not contain special tokens), necessary when fetching sequence ids for sequence pairs
already_has_special_tokens – (default False) Set to True if the token list is already formatted with special tokens for the model
- Returns
1 for a special token, 0 for a sequence token.
- Return type
A list of integers in the range [0, 1]
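Example (an illustrative sketch; the exact mask depends on the tokenizer and input):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello world")  # includes [CLS] and [SEP]
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # e.g. [1, 0, 0, 1]: 1 marks special tokens, 0 marks sequence tokens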
- get_vocab()
Returns the vocabulary as a dict of {token: index} pairs. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.
- property max_len
Kept here for backward compatibility. Now renamed to model_max_length to avoid ambiguity.
- num_special_tokens_to_add(pair=False)
Returns the number of added tokens when encoding a sequence with special tokens.
Note
This encodes inputs and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
- Parameters
pair – Returns the number of added tokens in the case of a sequence pair if set to True, returns the number of added tokens in the case of a single sequence if set to False.
- Returns
Number of tokens added to sequences
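Example (an illustrative sketch; the counts shown assume a BERT-style tokenizer):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.num_special_tokens_to_add(pair=False))  # 2: [CLS] and [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))   # 3: [CLS], [SEP], [SEP]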
- prepare_for_model(ids: List[int], pair_ids: Optional[List[int]] = None, max_length: Optional[int] = None, add_special_tokens: bool = True, stride: int = 0, truncation_strategy: str = 'longest_first', pad_to_max_length: bool = False, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_lengths: bool = False) → BatchEncoding
Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and manages a moving window (with user-defined stride) for overflowing tokens.
- Parameters
ids – list of tokenized input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids – Optional second list of input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
max_length – maximum length of the returned list. Will truncate by taking into account the special tokens.
add_special_tokens – if set to True, the sequences will be encoded with the special tokens relative to their model.
stride – window stride for overflowing tokens. Can be useful to remove edge effects when using a sequential list of inputs. The overflowing tokens will contain a part of the previous window of tokens.
truncation_strategy – string selected in the following options:
‘longest_first’ (default): iteratively reduce the inputs sequence until the input is under max_length, starting from the longest one at each token (when there is a pair of input sequences)
‘only_first’: only truncate the first sequence
‘only_second’: only truncate the second sequence
‘do_not_truncate’: do not truncate (raise an error if the input sequence is longer than max_length)
pad_to_max_length – if set to True, the returned sequences will be padded according to the model’s padding side and padding index, up to their max length. If no max length is specified, the padding is done up to the model’s max length. The tokenizer padding sides are handled by the class attribute padding_side which can be set to the following strings:
‘left’: pads on the left of the sequences
‘right’: pads on the right of the sequences
Defaults to False: no padding.
return_tensors – (optional) can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.
return_token_type_ids – (optional) Set to False to avoid returning token_type_ids (default: set to model specifics).
return_attention_mask – (optional) Set to False to avoid returning attention mask (default: set to model specifics)
return_overflowing_tokens – (optional) Set to True to return overflowing token information (default False).
return_special_tokens_mask – (optional) Set to True to return special tokens mask information (default False).
return_lengths (bool, optional, defaults to False) – If set to True, the resulting dictionary will include the length of each encoded input.
- Returns
A dictionary of shape:

{
    input_ids: list[int],
    token_type_ids: list[int] if return_token_type_ids is True (default)
    overflowing_tokens: list[int] if a max_length is specified and return_overflowing_tokens is True
    num_truncated_tokens: int if a max_length is specified and return_overflowing_tokens is True
    special_tokens_mask: list[int] if add_special_tokens is set to True and return_special_tokens_mask is True
    length: int if return_lengths is True
}

With the fields:
input_ids: list of token ids to be fed to a model
token_type_ids: list of token type ids to be fed to a model
overflowing_tokens: list of overflowing tokens if a max length is specified
num_truncated_tokens: number of overflowing tokens if a max_length is specified
special_tokens_mask: if adding special tokens, this is a list of [0, 1], with 0 specifying special added tokens and 1 specifying sequence tokens
length: the length of input_ids
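Example (an illustrative sketch chaining tokenize, convert_tokens_to_ids and prepare_for_model):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
encoded = tokenizer.prepare_for_model(ids, max_length=8, pad_to_max_length=True)
print(encoded['input_ids'])  # special tokens added, then padded up to max_length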
- prepare_for_tokenization(text: str, **kwargs) → str
Performs any necessary transformations before tokenization.
- save_pretrained(save_directory)
Save the tokenizer vocabulary files together with:
added tokens,
the special-tokens-to-class-attributes mapping,
the tokenizer instantiation positional and keyword inputs (e.g. do_lower_case for Bert).
Warning: this won’t save modifications you may have applied to the tokenizer after instantiation (e.g. modifying tokenizer.do_lower_case after creation).
This method makes sure the full tokenizer can then be re-loaded using the from_pretrained() class method.
- save_vocabulary(save_directory) → Tuple[str]
Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings.
Please use save_pretrained() to save the full tokenizer state if you want to reload it using the from_pretrained() class method.
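Example (an illustrative save/reload round trip; the directory name is arbitrary):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['new_tok'])
tokenizer.save_pretrained('./my_tokenizer/')  # vocabulary + added tokens + instantiation kwargs

reloaded = BertTokenizer.from_pretrained('./my_tokenizer/')
assert reloaded.convert_tokens_to_ids('new_tok') == tokenizer.convert_tokens_to_ids('new_tok')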
- tokenize(text: str, **kwargs)
Converts a string into a sequence of tokens (strings), using the tokenizer. Splits into words for word-based vocabularies or sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece).
Takes care of added tokens.
- Parameters
text (string) – The sequence to be encoded.
**kwargs (dict) – Arguments passed to the model-specific prepare_for_tokenization preprocessing method.
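Example (an illustrative sketch; the exact sub-word pieces depend on the vocabulary):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.tokenize("Tokenizers are great"))  # e.g. word pieces such as ['token', '##izers', 'are', 'great']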
- truncate_sequences(ids: List[int], pair_ids: Optional[List[int]] = None, num_tokens_to_remove: int = 0, truncation_strategy: str = 'longest_first', stride: int = 0) → Tuple[List[int], List[int], List[int]]
Truncates a sequence pair in place to the maximum length.
- Parameters
ids – list of tokenized input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids – Optional second list of input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
num_tokens_to_remove (int, optional, defaults to 0) – number of tokens to remove using the truncation strategy
truncation_strategy – string selected in the following options:
‘longest_first’ (default): iteratively reduce the inputs sequence until the input is under max_length, starting from the longest one at each token (when there is a pair of input sequences). Overflowing tokens only contain overflow from the first sequence.
‘only_first’: only truncate the first sequence. Raise an error if the first sequence is shorter than or equal to num_tokens_to_remove.
‘only_second’: only truncate the second sequence
‘do_not_truncate’: do not truncate (raise an error if the input sequence is longer than max_length)
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
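Example (an illustrative sketch; the method returns the truncated ids, the truncated pair ids and the overflowing tokens):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("a fairly long first sequence of words"))
pair_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("short second"))
ids, pair_ids, overflowing = tokenizer.truncate_sequences(ids, pair_ids=pair_ids, num_tokens_to_remove=2)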
- property vocab_size
Size of the base vocabulary (without the added tokens).