Data Collator
A data collator is an object that forms a batch by taking a list of dataset elements as input. These elements are of the same type as the elements of train_dataset or eval_dataset. To be able to build batches, data collators may apply some processing (such as padding). Some of them (such as DataCollatorForLanguageModeling) also apply some random data augmentation (such as random masking) on the formed batch. Examples of use can be found in the example scripts or example notebooks.
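As a minimal, hedged sketch of how a collator is typically plugged in, the snippet below feeds a padding collator to a plain PyTorch DataLoader; the checkpoint name and toy sentences are only illustrative:

```python
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

# two dataset elements of different lengths
examples = [tokenizer("Hello world"), tokenizer("A noticeably longer example sentence")]
loader = DataLoader(examples, batch_size=2, collate_fn=collator)

batch = next(iter(loader))
print(batch["input_ids"].shape)  # both sequences padded to the same length
```

The same object can also be passed as the data_collator argument of a Trainer.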
Default data collator
transformers.default_data_collator
( features: typing.List[transformers.data.data_collator.InputDataClass] return_tensors = 'pt' )
Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:
- label: handles a single value (int or float) per object
- label_ids: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for an example of how it's useful.
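For illustration only, here is a small sketch of calling default_data_collator on already-tokenized features; the token ids and labels are made up:

```python
from transformers import default_data_collator

# already-tokenized features of equal length; "label" is collected into "labels"
features = [
    {"input_ids": [101, 2023, 102], "label": 0},
    {"input_ids": [101, 2009, 102], "label": 1},
]
batch = default_data_collator(features, return_tensors="pt")
print(batch["input_ids"].shape)  # torch.Size([2, 3])
print(batch["labels"])           # tensor([0, 1])
```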
DefaultDataCollator
class transformers.DefaultDataCollator
( return_tensors: str = 'pt' )
Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:
- label: handles a single value (int or float) per object
- label_ids: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for an example of how it's useful.
This is an object (like other data collators) rather than a pure function like default_data_collator. This can be helpful if you need to set a return_tensors value at initialization.
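A brief sketch of that use case, with return_tensors fixed at construction time; the feature values are made up:

```python
from transformers import DefaultDataCollator

collator = DefaultDataCollator(return_tensors="np")

features = [
    {"input_ids": [0, 1, 2], "label": 0},
    {"input_ids": [3, 4, 5], "label": 1},
]
batch = collator(features)
print(type(batch["input_ids"]))  # <class 'numpy.ndarray'>
```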
DataCollatorWithPadding
class transformers.DataCollatorWithPadding
( tokenizer: PreTrainedTokenizerBase padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None return_tensors: str = 'pt' )
Parameters
- tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data.
- padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
  - True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
- max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above).
- pad_to_multiple_of (int, optional) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
- return_tensors (str, optional, defaults to "pt") — The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator that will dynamically pad the inputs received.
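A minimal dynamic-padding sketch; the checkpoint name, the pad_to_multiple_of value and the inputs are illustrative choices, not requirements:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer, pad_to_multiple_of=8)

features = [tokenizer("short"), tokenizer("a noticeably longer input sentence")]
batch = collator(features)
print(batch["input_ids"].shape)    # longest sequence rounded up to a multiple of 8
print(batch["attention_mask"][0])  # trailing zeros mark the added padding
```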
DataCollatorForTokenClassification
class transformers.DataCollatorForTokenClassification
( tokenizer: PreTrainedTokenizerBase padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None label_pad_token_id: int = -100 return_tensors: str = 'pt' )
Parameters
- tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data.
- padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
  - True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
- max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above).
- pad_to_multiple_of (int, optional) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
- label_pad_token_id (int, optional, defaults to -100) — The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions).
- return_tensors (str, optional, defaults to "pt") — The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator that will dynamically pad the inputs received, as well as the labels.
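A small sketch of label padding; the checkpoint name, token ids and label values below are made up:

```python
from transformers import AutoTokenizer, DataCollatorForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

features = [
    {"input_ids": [101, 1996, 3899, 102], "labels": [-100, 1, 2, -100]},
    {"input_ids": [101, 2009, 102], "labels": [-100, 3, -100]},
]
batch = collator(features)
print(batch["input_ids"])  # second row padded with the tokenizer's pad token
print(batch["labels"])     # second row padded with -100 (label_pad_token_id)
```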
DataCollatorForSeq2Seq
class transformers.DataCollatorForSeq2Seq
( tokenizer: PreTrainedTokenizerBase model: typing.Optional[typing.Any] = None padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None label_pad_token_id: int = -100 return_tensors: str = 'pt' )
Parameters
- tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data.
- model (PreTrainedModel, optional) — The model that is being trained. If set and the model has a prepare_decoder_input_ids_from_labels method, it is used to prepare the decoder_input_ids. This is useful when using label_smoothing to avoid calculating loss twice.
- padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
  - True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
- max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above).
- pad_to_multiple_of (int, optional) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
- label_pad_token_id (int, optional, defaults to -100) — The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions).
- return_tensors (str, optional, defaults to "pt") — The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator that will dynamically pad the inputs received, as well as the labels.
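A minimal sketch; the t5-small checkpoint and the translation pairs are illustrative, and passing model is what lets the collator build decoder_input_ids from the labels:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)

features = [
    {"input_ids": tokenizer("translate English to German: Hello").input_ids,
     "labels": tokenizer("Hallo").input_ids},
    {"input_ids": tokenizer("translate English to German: How are you?").input_ids,
     "labels": tokenizer("Wie geht es dir?").input_ids},
]
batch = collator(features)
print(batch.keys())        # input_ids, attention_mask, labels, decoder_input_ids
print(batch["labels"][0])  # shorter label sequence padded with -100
```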
DataCollatorForLanguageModeling
class transformers.DataCollatorForLanguageModeling
( tokenizer: PreTrainedTokenizerBase mlm: bool = True mlm_probability: float = 0.15 pad_to_multiple_of: typing.Optional[int] = None tf_experimental_compile: bool = False return_tensors: str = 'pt' )
Parameters
- tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data.
- mlm (bool, optional, defaults to True) — Whether or not to use masked language modeling. If set to False, the labels are the same as the inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for non-masked tokens and the value to predict for the masked token.
- mlm_probability (float, optional, defaults to 0.15) — The probability with which to (randomly) mask tokens in the input, when mlm is set to True.
- pad_to_multiple_of (int, optional) — If set, will pad the sequence to a multiple of the provided value.
- return_tensors (str) — The type of Tensor to return. Allowable values are "np", "pt" and "tf".
Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length.
For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the "special_tokens_mask" key, as returned by a PreTrainedTokenizer or a PreTrainedTokenizerFast with the argument return_special_tokens_mask=True.
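A minimal masked-language-modeling sketch; the checkpoint name and sentences are illustrative, and return_special_tokens_mask=True follows the recommendation above:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

texts = ["The quick brown fox jumps over the lazy dog.", "Data collators build batches."]
features = [tokenizer(t, return_special_tokens_mask=True) for t in texts]
batch = collator(features)
print(batch["input_ids"][0])  # positions selected for masking are mostly replaced by tokenizer.mask_token_id
print(batch["labels"][0])     # -100 everywhere except the selected positions
```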
numpy_mask_tokens
( inputs: typing.Any special_tokens_mask: typing.Optional[typing.Any] = None )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
tf_mask_tokens
( inputs: typing.Any vocab_size mask_token_id special_tokens_mask: typing.Optional[typing.Any] = None )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
torch_mask_tokens
( inputs: typing.Any special_tokens_mask: typing.Optional[typing.Any] = None )
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
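The three helpers share the same masking scheme. As a rough illustration, here is a minimal PyTorch sketch of that 80%/10%/10% logic; it is a simplified stand-in, not the library's exact implementation (it ignores special_tokens_mask handling, for instance), and all names are local to this sketch:

```python
import torch

def mask_tokens_sketch(inputs: torch.Tensor, mask_token_id: int, vocab_size: int,
                       mlm_probability: float = 0.15):
    labels = inputs.clone()
    # select the positions whose original value must be predicted
    selected = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~selected] = -100  # loss is only computed on selected positions

    # 80% of the selected positions -> mask token
    masked = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    inputs[masked] = mask_token_id

    # 10% of the selected positions -> random token (half of the remaining 20%)
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & selected & ~masked
    random_words = torch.randint(vocab_size, labels.shape, dtype=torch.long)
    inputs[randomized] = random_words[randomized]

    # the remaining 10% keep the original token as-is
    return inputs, labels
```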
DataCollatorForWholeWordMask
class transformers.DataCollatorForWholeWordMask
( tokenizer: PreTrainedTokenizerBase mlm: bool = True mlm_probability: float = 0.15 pad_to_multiple_of: typing.Optional[int] = None tf_experimental_compile: bool = False return_tensors: str = 'pt' )
Data collator used for language modeling that masks entire words.
- collates batches of tensors, honoring their tokenizer’s pad_token
- preprocesses batches for masked language modeling
This collator relies on details of the implementation of subword tokenization by BertTokenizer, specifically that subword tokens are prefixed with ##. For tokenizers that do not adhere to this scheme, this collator will produce an output that is roughly equivalent to DataCollatorForLanguageModeling.
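A minimal sketch assuming a WordPiece tokenizer such as BertTokenizer, in line with the note above; the checkpoint name and sentence are illustrative:

```python
from transformers import BertTokenizer, DataCollatorForWholeWordMask

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

features = [tokenizer("Tokenization splits uncommon words into subwords.")]
batch = collator(features)
print(batch["input_ids"])  # all subword pieces of a masked word are masked together
print(batch["labels"])
```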
numpy_mask_tokens
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Setting 'mask_labels' means whole word masking (WWM) is used; indices are masked directly according to the provided reference.
tf_mask_tokens
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Setting 'mask_labels' means whole word masking (WWM) is used; indices are masked directly according to the provided reference.
torch_mask_tokens
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Setting 'mask_labels' means whole word masking (WWM) is used; indices are masked directly according to the provided reference.
DataCollatorForPermutationLanguageModeling
class transformers.DataCollatorForPermutationLanguageModeling
( tokenizer: PreTrainedTokenizerBase plm_probability: float = 0.16666666666666666 max_span_length: int = 5 return_tensors: str = 'pt' )
Data collator used for permutation language modeling.
- collates batches of tensors, honoring their tokenizer’s pad_token
- preprocesses batches for permutation language modeling with procedures specific to XLNet
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
0. Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far).
1. Sample a span_length from the interval [1, max_span_length] (length of span of tokens to be masked).
2. Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked.
3. Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length.
4. Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.
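A minimal sketch; the xlnet-base-cased checkpoint is illustrative, and the toy input_ids are given an even length on purpose because the collator requires even sequence lengths to build a leakage-free perm_mask:

```python
import torch
from transformers import AutoTokenizer, DataCollatorForPermutationLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
collator = DataCollatorForPermutationLanguageModeling(
    tokenizer=tokenizer, plm_probability=1 / 6, max_span_length=5
)

# arbitrary token ids with an even sequence length (a requirement of this collator)
features = [{"input_ids": torch.arange(10, 20)}, {"input_ids": torch.arange(20, 30)}]
batch = collator(features)
print(batch.keys())  # input_ids, perm_mask, target_mapping, labels
```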
DataCollatorWithFlattening
class transformers.DataCollatorWithFlattening
( *args return_position_ids = True separator_id = -100 **kwargs )
Data collator used for the padding-free approach. Does the following:
- concatenates the entire mini batch into a single long sequence of shape [1, total_tokens]
- uses separator_id to separate sequences within the concatenated labels, default value is -100
- no padding will be added, returns input_ids, labels and position_ids
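A minimal sketch showing the flattened output; the token id values are made up:

```python
from transformers import DataCollatorWithFlattening

collator = DataCollatorWithFlattening()

features = [
    {"input_ids": [1, 2, 3], "labels": [1, 2, 3]},
    {"input_ids": [4, 5], "labels": [4, 5]},
]
batch = collator(features)
print(batch["input_ids"])     # tensor([[1, 2, 3, 4, 5]])
print(batch["position_ids"])  # tensor([[0, 1, 2, 0, 1]])
print(batch["labels"])        # separator_id (-100) marks the first token of each example
```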