Text Generation
Transformers
PyTorch
Safetensors
French
pagnolxl
pagnol
custom_code
wissamantoun committed
Commit ea4fdbf
1 Parent(s): 08a5717

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,89 @@
---
license: mit
datasets:
- ccnet-fr
language:
- fr
tags:
- pagnol
---

# PAGnol: An Extra-Large French Generative Model

Paper: [ARXIV](https://arxiv.org/abs/2110.08554), [ACL ANTHOLOGY](https://aclanthology.org/2022.lrec-1.455/)

Code: [GITHUB](https://github.com/lightonai/lairgpt)

PAGnol is a collection of large French language models, geared towards free-form text generation, with up to 1.5 billion parameters. PAGnol is based on the [GPT](https://arxiv.org/abs/2005.14165) architecture, and is the first language model trained by [LightOn](https://lighton.ai/), in cooperation with the [ALMAnaCH team of Inria](http://almanach.inria.fr/index-en.html).

These models were trained in early 2021, following the [scaling laws](https://arxiv.org/abs/2001.08361) of the time and using the exact same training data as the [CamemBERT](https://camembert-model.fr/) model trained on [CCNet](https://github.com/facebookresearch/cc_net). We make them available for reproducibility and transparency purposes.
They do not constitute the current state of the art, nor do they aim to.

PAGnol was built by [Julien Launay](https://lolo.science/), E.L. Tommasone, [Baptiste Pannier](https://www.linkedin.com/in/baptiste-pannier-b30758154/), [François Boniface](https://www.linkedin.com/in/fran%c3%a7ois-boniface-26313610b/), [Amélie Chatelain](https://www.instagram.com/amelietabatta/), [Iacopo Poli](https://twitter.com/iacopo_poli), and [Djamé Seddah](http://pauillac.inria.fr/~seddah/). It is named after Marcel Pagnol (with PAG standing for pré-apprentissage génératif), and was trained on the IDRIS Jean Zay supercomputer thanks to a GENCI allocation.

The model was converted to the Hugging Face format by [Wissam Antoun](https://wissamantoun.com) (a PhD student at [ALMAnaCH](http://almanach.inria.fr/index-en.html), co-supervised by [Benoît Sagot](https://pauillac.inria.fr/~sagot/) and [Djamé Seddah](http://pauillac.inria.fr/~seddah/)).

# Usage

### Using PAGnol with Hugging Face
```python
from transformers import pipeline

generator = pipeline('text-generation', model='lightonai/pagnol-xl', trust_remote_code=True)

output = generator(
    "Salut PAGnol, comment ça va ?",
    max_length=50,
    do_sample=True,
    temperature=0.7,
)[0]["generated_text"]

print(output)
# >>> "Très bien! Les jours d’été sont là ! Bientôt les premiers festivals..."
```
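
Alternatively, the tokenizer and model can be loaded explicitly through the Auto classes. The snippet below is an illustrative sketch and not part of the original card: the sampling settings are arbitrary, and `trust_remote_code=True` is required because the architecture is defined by the `configuration_pagnolxl.py` and `modeling_pagnolxl.py` files shipped in this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightonai/pagnol-xl"

# tokenizer.json / tokenizer_config.json define a GPT2TokenizerFast with <EOS>, <PAD>, <UNK>, ...
tokenizer = AutoTokenizer.from_pretrained(model_id)

# trust_remote_code fetches the custom PagnolXlForCausalLM class from this repo
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

inputs = tokenizer("Salut PAGnol, comment ça va ?", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(
        **inputs,
        max_length=50,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```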

# License
PAGnol is made available under the MIT licence: by downloading the models available below, you agree with the terms of the MIT licence agreement. Under no circumstances will LightOn and/or Inria be held responsible or liable in any way for any claims, damages, losses, expenses, costs or liabilities whatsoever (including, without limitation, any direct or indirect damages for loss of profits, business interruption or loss of information) resulting or arising directly or indirectly from your use of or inability to use PAGnol.

# Available Models
- [`lightonai/pagnol-small`](https://huggingface.co/lightonai/pagnol-small): 125M parameters
- [`lightonai/pagnol-medium`](https://huggingface.co/lightonai/pagnol-medium): 355M parameters
- [`lightonai/pagnol-large`](https://huggingface.co/lightonai/pagnol-large): 773M parameters
- [`lightonai/pagnol-xl`](https://huggingface.co/lightonai/pagnol-xl): 1.5B parameters

# Citation
```
@inproceedings{launay-etal-2022-pagnol,
    title = "{PAG}nol: An Extra-Large {F}rench Generative Model",
    author = "Launay, Julien and
      Tommasone, E.l. and
      Pannier, Baptiste and
      Boniface, Fran{\c{c}}ois and
      Chatelain, Am{\'e}lie and
      Cappelli, Alessandro and
      Poli, Iacopo and
      Seddah, Djam{\'e}",
    editor = "Calzolari, Nicoletta and
      B{\'e}chet, Fr{\'e}d{\'e}ric and
      Blache, Philippe and
      Choukri, Khalid and
      Cieri, Christopher and
      Declerck, Thierry and
      Goggi, Sara and
      Isahara, Hitoshi and
      Maegaard, Bente and
      Mariani, Joseph and
      Mazo, H{\'e}l{\`e}ne and
      Odijk, Jan and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.455",
    pages = "4275--4284",
}
```
# Contact
For research enquiries: [email protected]
For business enquiries: [email protected]

config.json ADDED
@@ -0,0 +1,27 @@
{
  "_name_or_path": "PagnoXL",
  "activation_function": "gelu",
  "architectures": [
    "PagnolXlForCausalLM"
  ],
  "auto_map": {
    "AutoConfig": "configuration_pagnolxl.PagnolXlConfig",
    "AutoModelForCausalLM": "modeling_pagnolxl.PagnolXlForCausalLM",
    "AutoModel": "modeling_pagnolxl.PagnolXlModel"
  },
  "bos_token_id": 1,
  "d_feedforward": 6400,
  "d_model": 1600,
  "dropout": 0.1,
  "eos_token_id": 1,
  "layer_norm_epsilon": 1e-06,
  "max_seq_len": 2048,
  "model_type": "pagnolxl",
  "n_heads": 25,
  "n_layers": 48,
  "sigma": 0.02,
  "torch_dtype": "float32",
  "transformers_version": "4.38.2",
  "use_cache": true,
  "vocab_size": 50262
}
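
The hyperparameters above can also be inspected programmatically. A minimal sketch (not part of the upload; it assumes network access and the `lightonai/pagnol-xl` repo id from the model card):

```python
from transformers import AutoConfig

# trust_remote_code resolves the auto_map entry to configuration_pagnolxl.PagnolXlConfig
config = AutoConfig.from_pretrained("lightonai/pagnol-xl", trust_remote_code=True)

print(config.model_type)                      # "pagnolxl"
print(config.d_model, config.n_heads)         # 1600 25
print(config.head_dim)                        # 1600 // 25 = 64, per-head dim used by the rotary embeddings
print(config.n_layers, config.d_feedforward)  # 48 transformer blocks, 6400-wide MLP
```
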
configuration_pagnolxl.py ADDED
@@ -0,0 +1,109 @@
# coding=utf-8
# TODO: Add license
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PagnolXl configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)


class PagnolXlConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`PagnolXlModel`]. It is used to instantiate a
    PagnolXl model according to the specified arguments, defining the model architecture. Instantiating a
    configuration with the defaults will yield a similar configuration to that of the [PagnolXl]() architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 65024):
            Vocabulary size of the PagnolXl model. Defines the number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`PagnolXlModel`]
        d_model (`int`, *optional*, defaults to 4544):
            Dimension of the hidden representations.
        n_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        n_heads (`int`, *optional*, defaults to 71):
            Number of attention heads for each attention layer in the Transformer decoder.
        sigma (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether the model should return the last key/values attentions (not used by all models). Only relevant if
            `config.is_decoder=True`.
        layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
            The epsilon used by the layer normalization layers.
        dropout (`float`, *optional*, defaults to 0.0):
            The dropout probability for MLP layers.
        bos_token_id (`int`, *optional*, defaults to 11):
            The id of the "beginning-of-sequence" token.
        eos_token_id (`int`, *optional*, defaults to 11):
            The id of the "end-of-sequence" token.

    Example:

    ```python
    >>> from transformers import PagnolXlModel, PagnolXlConfig

    >>> # Initializing a small (2-layer) PagnolXl configuration
    >>> configuration = PagnolXlConfig(n_layers=2)

    >>> # Initializing a model from the small configuration
    >>> model = PagnolXlModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "pagnolxl"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=65024,
        activation_function="gelu",
        d_model=4544,
        d_feedforward=18176,
        n_heads=71,
        n_layers=32,
        layer_norm_epsilon=1e-5,
        sigma=0.02,
        use_cache=True,
        dropout=0.0,
        bos_token_id=11,
        eos_token_id=11,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        # Backward compatibility with n_embed kwarg
        n_embed = kwargs.pop("n_embed", None)
        self.activation_function = activation_function
        self.d_model = d_model if n_embed is None else n_embed
        self.d_feedforward = d_feedforward
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.layer_norm_epsilon = layer_norm_epsilon
        self.sigma = sigma
        self.use_cache = use_cache
        self.dropout = dropout
        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id

        super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

    @property
    def head_dim(self):
        return self.d_model // self.n_heads
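
As a quick illustration of the class above, a hedged sketch (it assumes `configuration_pagnolxl.py` is on the local path; the numbers mirror the shipped `config.json` for PAGnol-XL and are not defaults of the class):

```python
from configuration_pagnolxl import PagnolXlConfig

# Values matching config.json for the XL checkpoint
config = PagnolXlConfig(
    vocab_size=50262,
    d_model=1600,
    d_feedforward=6400,
    n_heads=25,
    n_layers=48,
    dropout=0.1,
    layer_norm_epsilon=1e-6,
    bos_token_id=1,
    eos_token_id=1,
)
assert config.head_dim == 64  # 1600 // 25, the per-head dimension

# Backward compatibility: a legacy `n_embed` kwarg overrides `d_model`
legacy = PagnolXlConfig(n_embed=1600)
assert legacy.d_model == 1600
```
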
modeling_pagnolxl.py ADDED
@@ -0,0 +1,825 @@
1
+ # coding=utf-8
2
+ # TODO: Add license
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """PyTorch PagnolXl model."""
16
+
17
+ import math
18
+ from typing import Optional, Tuple, Union
19
+
20
+ import torch
21
+ import torch.utils.checkpoint
22
+ from torch import nn
23
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, LayerNorm, MSELoss
24
+ from torch.nn import functional as F
25
+ from transformers.activations import ACT2FN
26
+ from transformers.modeling_outputs import (
27
+ BaseModelOutputWithPastAndCrossAttentions,
28
+ CausalLMOutputWithCrossAttentions,
29
+ QuestionAnsweringModelOutput,
30
+ SequenceClassifierOutputWithPast,
31
+ TokenClassifierOutput,
32
+ )
33
+ from transformers.modeling_utils import PreTrainedModel
34
+ from transformers.utils import (
35
+ add_code_sample_docstrings,
36
+ add_start_docstrings,
37
+ add_start_docstrings_to_model_forward,
38
+ logging,
39
+ )
40
+
41
+ from .configuration_pagnolxl import PagnolXlConfig
42
+
43
+ logger = logging.get_logger(__name__)
44
+
45
+ PAGNOLXL_PRETRAINED_MODEL_ARCHIVE_LIST = [
46
+ "XXXX/pagnol-xl",
47
+ ]
48
+
49
+ _CHECKPOINT_FOR_DOC = "XXXX/pagnol-xl"
50
+ _CONFIG_FOR_DOC = "PagnolXlConfig"
51
+
52
+
53
+ class PagnolXlEmbeddings(nn.Module):
54
+ """Implementation of the PagnolXl Embedding layer.
55
+
56
+ Parameters
57
+ ----------
58
+ vocab_size: int,
59
+ size of the vocabulary.
60
+ d_model: int,
61
+ Dimension of the hidden representations.
62
+ sigma: int, default 0.02,
63
+ standard deviation for the Gaussian initialization of the embedding weights.
64
+ """
65
+
66
+ def __init__(self, config: PagnolXlConfig):
67
+ super().__init__()
68
+ self.embedding = nn.Embedding(config.vocab_size, config.d_model)
69
+
70
+ def forward(self, input_ids: torch.LongTensor) -> torch.FloatTensor:
71
+ return self.embedding(input_ids)
72
+
73
+
74
+ # rotary pos emb helpers (torch.jit.script does not seem to support staticmethod...)
75
+ def rotate_half(x):
76
+ x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]
77
+ return torch.cat((-x2, x1), dim=-1)
78
+
79
+
80
+ class PagnoXlRotaryEmbeddings(nn.Module):
81
+ """Implementation of RotaryEmbedding from GPT-NeoX and Falcon.
82
+ This implementation is designed to operate on queries and keys that are compatible with `[batch_size,
83
+ n_heads_per_partition, seq_len, head_dim]` (e.g. MinGPTAttention format).
84
+ """
85
+
86
+ def __init__(self, config: PagnolXlConfig):
87
+ super().__init__()
88
+ assert (
89
+ config.d_model % config.n_heads == 0
90
+ ), "d_model must be divisible by n_heads. Currently d_model: {}, n_heads: {}".format(
91
+ config.d_model, config.n_heads
92
+ )
93
+
94
+ self.d_model = config.d_model
95
+ self.n_heads = config.n_heads
96
+ self.head_dim = config.d_model // config.n_heads
97
+ self.base = config.to_dict().get("base", 10000)
98
+ inv_freq = 1.0 / (
99
+ self.base ** (torch.arange(0, self.head_dim, 2).float() / self.head_dim)
100
+ )
101
+ self.register_buffer("inv_freq", inv_freq)
102
+ self.seq_len_cached = -1
103
+ self.cos_cached: torch.Tensor | None = None
104
+ self.sin_cached: torch.Tensor | None = None
105
+
106
+ def cos_sin(
107
+ self,
108
+ seq_len: int,
109
+ past_key_values_length: int,
110
+ device="cpu",
111
+ dtype=torch.bfloat16,
112
+ ) -> torch.Tensor:
113
+ total_length = seq_len + past_key_values_length
114
+ if total_length > self.seq_len_cached:
115
+ self.seq_len_cached = total_length
116
+ t = torch.arange(total_length, device=device, dtype=self.inv_freq.dtype)
117
+ freqs = torch.einsum("i,j->ij", t, self.inv_freq)
118
+ emb = torch.cat((freqs, freqs), dim=-1).to(device)
119
+
120
+ if dtype in [torch.float16, torch.bfloat16]:
121
+ emb = emb.float()
122
+
123
+ self.cos_cached = emb.cos()[None, :, :]
124
+ self.sin_cached = emb.sin()[None, :, :]
125
+
126
+ self.cos_cached = self.cos_cached.type(dtype)
127
+ self.sin_cached = self.sin_cached.type(dtype)
128
+
129
+ return (
130
+ self.cos_cached[
131
+ :, past_key_values_length : seq_len + past_key_values_length
132
+ ],
133
+ self.sin_cached[
134
+ :, past_key_values_length : seq_len + past_key_values_length
135
+ ],
136
+ )
137
+
138
+ def forward(self, query, key, past_key_values_length=0):
139
+ batch, num_heads, seq_len, head_dim = query.shape
140
+ cos, sin = self.cos_sin(
141
+ seq_len, past_key_values_length, query.device, query.dtype
142
+ )
143
+ return (query * cos) + (rotate_half(query) * sin), (key * cos) + (
144
+ rotate_half(key) * sin
145
+ )
146
+
147
+
148
+ def _make_causal_mask(
149
+ input_ids_shape: torch.Size, device: torch.device, past_key_values_length: int
150
+ ) -> torch.BoolTensor:
151
+ """
152
+ Make causal mask used for self-attention. This mask does not take the existing attention mask into account - it
153
+ just blocks tokens from attending forwards in the sequence. The output shape will be `[batch_size, 1,
154
+ target_length, target_length+past_key_values_length]`.
155
+ """
156
+ batch_size, target_length = input_ids_shape
157
+
158
+ mask = torch.triu(
159
+ torch.ones((target_length, target_length), dtype=torch.bool, device=device),
160
+ diagonal=1,
161
+ )
162
+ # If past_key_values_length is 0 this is an empty tensor and the concatenation is a no-op.
163
+ # This code style is an unfortunate consequence of getting your TF engineer to port models; doing it this
164
+ # way avoids a data-dependent conditional, which will help me when I have to port this to XLA later.
165
+ past_mask = torch.zeros(
166
+ (target_length, past_key_values_length), dtype=torch.bool, device=device
167
+ )
168
+ mask = torch.cat([past_mask, mask], dim=-1)
169
+ expanded_mask = mask[None, None, :, :].expand(
170
+ batch_size, 1, target_length, target_length + past_key_values_length
171
+ )
172
+ return expanded_mask
173
+
174
+
175
+ def _expand_mask(mask: torch.Tensor, past_key_values_length: int) -> torch.BoolTensor:
176
+ """
177
+ Expands attention_mask from `[batch_size, seq_length]` to `[batch_size, 1, seq_length, seq_length + past_length]`.
178
+ """
179
+ batch_size, total_length = mask.shape
180
+ seq_length = (
181
+ total_length - past_key_values_length
182
+ if past_key_values_length is not None
183
+ else total_length
184
+ )
185
+
186
+ expanded_mask = ~(mask[:, None, None, :].to(torch.bool))
187
+ return expanded_mask.expand(batch_size, 1, seq_length, total_length)
188
+
189
+
190
+ class PagnolXlAttention(nn.Module):
191
+ """Implementation of Pagnol's MultiHeadAttention following `Karpathy's MinGPT <https://github.com/karpathy/minGPT>`_.
192
+ The internals are easier to modify with respect to the native Pytorch version, however it does not support
193
+ providing padding masks in the forward.
194
+ """
195
+
196
+ def __init__(self, config: PagnolXlConfig):
197
+ super().__init__()
198
+ assert config.d_model % config.n_heads == 0
199
+ self.d_model = config.d_model
200
+ self.n_heads = config.n_heads
201
+ self.dropout = config.dropout
202
+ self.sigma = config.sigma
203
+ self.n_layers = config.n_layers
204
+
205
+ # key, query, value projections for all heads
206
+ self.key = nn.Linear(config.d_model, config.d_model)
207
+ self.query = nn.Linear(config.d_model, config.d_model)
208
+ self.value = nn.Linear(config.d_model, config.d_model)
209
+
210
+ # regularization
211
+ self.attn_drop = nn.Dropout(config.dropout)
212
+ self.resid_drop = nn.Dropout(config.dropout)
213
+
214
+ # output projection
215
+ self.proj = nn.Linear(config.d_model, config.d_model)
216
+
217
+ # causal mask to ensure that attention is only applied to the left in the input sequence
218
+ self.n_heads = config.n_heads
219
+
220
+ self.rotary_embedding = PagnoXlRotaryEmbeddings(config)
221
+
222
+ def init_weights(self):
223
+ # Megatron params
224
+ std = self.sigma / math.sqrt(2.0 * self.n_layers)
225
+ torch.nn.init.normal_(self.key.weight, mean=0.0, std=self.sigma)
226
+ torch.nn.init.normal_(self.query.weight, mean=0.0, std=self.sigma)
227
+ torch.nn.init.normal_(self.value.weight, mean=0.0, std=self.sigma)
228
+
229
+ torch.nn.init.constant_(self.key.bias, 0.0)
230
+ torch.nn.init.constant_(self.query.bias, 0.0)
231
+ torch.nn.init.constant_(self.value.bias, 0.0)
232
+
233
+ torch.nn.init.normal_(self.proj.weight, mean=0.0, std=std)
234
+ torch.nn.init.constant_(self.proj.bias, 0.0)
235
+
236
+ def forward(
237
+ self,
238
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
239
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
240
+ attention_mask: Optional[torch.BoolTensor] = None,
241
+ head_mask: Optional[torch.FloatTensor] = None,
242
+ use_cache: Optional[bool] = False,
243
+ output_attentions: Optional[bool] = False,
244
+ ) -> Tuple[torch.FloatTensor, torch.FloatTensor]:
245
+ N, L, D = hidden_states.size() # Batch_size, Context_size, d_model
246
+ # calculate query, key, values for all heads in batch and move head forward to be the batch dim
247
+ key = (
248
+ self.key(hidden_states)
249
+ .view(N, L, self.n_heads, D // self.n_heads)
250
+ .transpose(1, 2)
251
+ ) # (N, nh, L, hs)
252
+ query = (
253
+ self.query(hidden_states)
254
+ .view(N, L, self.n_heads, D // self.n_heads)
255
+ .transpose(1, 2)
256
+ ) # (N, nh, L, hs)
257
+ value = (
258
+ self.value(hidden_states)
259
+ .view(N, L, self.n_heads, D // self.n_heads)
260
+ .transpose(1, 2)
261
+ ) # (N, nh, L, hs)
262
+
263
+ if self.rotary_embedding is not None:
264
+ past_kv_length = 0 if layer_past is None else layer_past[0].shape[1]
265
+ query, key = self.rotary_embedding(query, key, past_kv_length)
266
+
267
+ if layer_past is not None:
268
+ past_key, past_value = layer_past
269
+ # concatenate along seq_length dimension:
270
+ # - key: [batch_size * self.num_heads, kv_length, head_dim]
271
+ # - value: [batch_size * self.num_heads, kv_length, head_dim]
272
+ key = torch.cat((past_key, key), dim=-2)
273
+ value = torch.cat((past_value, value), dim=-2)
274
+
275
+ if use_cache:
276
+ present = (key, value)
277
+ else:
278
+ present = None
279
+
280
+ # causal self-attention; Self-attend: (N, nh, L, hs) x (N, nh, hs, L) -> (N, nh, L, L)
281
+ attn_output = (query @ key.transpose(-2, -1)) * (1.0 / math.sqrt(key.size(-1)))
282
+ attn_output = (
283
+ attn_output.masked_fill(attention_mask, float("-inf"))
284
+ if attention_mask is not None
285
+ else attn_output
286
+ )
287
+ attn_output = F.softmax(attn_output, dim=-1)
288
+
289
+ attn_output = self.attn_drop(attn_output)
290
+
291
+ # Mask heads if we want to
292
+ if head_mask is not None:
293
+ attn_output = attn_output * head_mask
294
+
295
+ outputs = (
296
+ attn_output @ value
297
+ ) # (N, nh, L, L) x (N, nh, L, hs) -> (N, nh, L, hs)
298
+ outputs = (
299
+ outputs.transpose(1, 2).contiguous().view(N, L, D)
300
+ ) # re-assemble all head outputs side by side
301
+
302
+ # output projection
303
+ outputs = self.resid_drop(self.proj(outputs))
304
+
305
+ if output_attentions:
306
+ return outputs, present, attn_output.sum(dim=1) / self.n_heads
307
+ else:
308
+ return outputs, present
309
+
310
+
311
+ class PagnolXlStandardMLP(nn.Module):
312
+ """Implementation of Pagnol's StandardMLP"""
313
+
314
+ def __init__(self, config: PagnolXlConfig):
315
+ super().__init__()
316
+ self.config = config
317
+ self.d_model = config.d_model
318
+ self.d_feedforward = config.d_feedforward
319
+ self.n_layers = config.n_layers
320
+ self.activation = ACT2FN[config.activation_function]
321
+
322
+ self.mlp = nn.Sequential(
323
+ nn.Linear(config.d_model, config.d_feedforward, bias=True),
324
+ self.activation,
325
+ nn.Linear(config.d_feedforward, config.d_model, bias=True),
326
+ )
327
+
328
+ self.init_weights()
329
+
330
+ def init_weights(self):
331
+ std = self.config.sigma / math.sqrt(2.0 * self.n_layers)
332
+
333
+ torch.nn.init.normal_(self.mlp[0].weight, mean=0.0, std=self.config.sigma)
334
+ torch.nn.init.zeros_(self.mlp[0].bias)
335
+
336
+ torch.nn.init.normal_(self.mlp[2].weight, mean=0.0, std=std)
337
+ torch.nn.init.zeros_(self.mlp[2].bias)
338
+
339
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
340
+ return self.mlp(hidden_states)
341
+
342
+
343
+ class PagnolXlLayerNorm(nn.Module):
344
+ """Implementation of Pagnol's LayerNorm"""
345
+
346
+ def __init__(self, config: PagnolXlConfig):
347
+ super().__init__()
348
+ self.config = config
349
+ self.d_model = config.d_model
350
+ self.norm = nn.LayerNorm(self.d_model, eps=config.layer_norm_epsilon)
351
+
352
+ self.init_weights()
353
+
354
+ def init_weights(self):
355
+ nn.init.ones_(self.norm.weight)
356
+ nn.init.zeros_(self.norm.bias)
357
+
358
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
359
+ return self.norm(hidden_states)
360
+
361
+
362
+ class PagnoXlBlock(nn.Module):
363
+ """Transformer block containing the self-attention module and the feedforward module.
364
+ Implemented as a decoder layer of GPT-3."""
365
+
366
+ def __init__(self, config: PagnolXlConfig):
367
+ super().__init__()
368
+ self.d_model = config.d_model
369
+ self.n_layers = config.n_layers
370
+
371
+ self.self_attention = PagnolXlAttention(config)
372
+ self.attn_norm = PagnolXlLayerNorm(config)
373
+ self.attn_dropout = nn.Dropout(config.dropout)
374
+
375
+ self.mlp = PagnolXlStandardMLP(config)
376
+ self.mlp_norm = PagnolXlLayerNorm(config)
377
+ self.mlp_dropout = nn.Dropout(config.dropout)
378
+
379
+ self.init_weights()
380
+
381
+ def init_weights(self):
382
+ self.self_attention.init_weights()
383
+ self.mlp.init_weights()
384
+
385
+ def forward(
386
+ self,
387
+ hidden_states: torch.FloatTensor,
388
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
389
+ attention_mask: Optional[torch.BoolTensor] = None,
390
+ head_mask: Optional[torch.FloatTensor] = None,
391
+ use_cache: Optional[bool] = False,
392
+ output_attentions: Optional[bool] = False,
393
+ ) -> Union[
394
+ Tuple[torch.Tensor],
395
+ Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]],
396
+ ]:
397
+ attn_outputs = self.attn_norm(hidden_states)
398
+ attn_outputs = self.self_attention(
399
+ attn_outputs,
400
+ layer_past=layer_past,
401
+ attention_mask=attention_mask,
402
+ head_mask=head_mask,
403
+ use_cache=use_cache,
404
+ output_attentions=output_attentions,
405
+ )
406
+
407
+ attn_output = attn_outputs[0] # output_attn: a, present, (attentions)
408
+ outputs = attn_outputs[1:]
409
+
410
+ hidden_states = hidden_states + self.attn_dropout(attn_output)
411
+
412
+ feed_forward_hidden_states = self.mlp_norm(hidden_states)
413
+ feed_forward_hidden_states = self.mlp(feed_forward_hidden_states)
414
+ hidden_states = hidden_states + self.mlp_dropout(feed_forward_hidden_states)
415
+
416
+ if use_cache:
417
+ outputs = (hidden_states,) + outputs
418
+ else:
419
+ outputs = (hidden_states,) + outputs[1:]
420
+
421
+ return outputs # hidden_states, present, attentions
422
+
423
+
424
+ class PagnolXlPreTrainedModel(PreTrainedModel):
425
+ config_class = PagnolXlConfig
426
+ base_model_prefix = "pagnolxl"
427
+ supports_gradient_checkpointing = True
428
+    _no_split_modules = ["PagnoXlBlock"]  # matches the block class name as defined in this file
429
+
430
+ def __init__(self, *inputs, **kwargs):
431
+ super().__init__(*inputs, **kwargs)
432
+
433
+ def _init_weights(self, module):
434
+ if isinstance(module, nn.Embedding):
435
+ module.weight.data.normal_(mean=0.0, std=self.config.sigma)
436
+ if module.padding_idx is not None:
437
+ module.weight.data[module.padding_idx].zero_()
438
+ elif isinstance(module, nn.Linear):
439
+ module.weight.data.normal_(mean=0.0, std=self.config.sigma)
440
+ if module.bias is not None:
441
+ module.bias.data.zero_()
442
+ # TODO: attention out_proj weights are initialized with sigma / sqrt(2.0 * n_layers)
443
+ elif isinstance(module, nn.LayerNorm):
444
+ module.bias.data.zero_()
445
+ module.weight.data.fill_(1.0)
446
+
447
+ # Copied from transformers.models.bloom.modeling_bloom.BloomPreTrainedModel._set_gradient_checkpointing with BloomModel->FalconModel
448
+ def _set_gradient_checkpointing(self, module: nn.Module, value: bool = False):
449
+ if isinstance(module, PagnolXlModel):
450
+ module.gradient_checkpointing = value
451
+
452
+
453
+ class PagnolXlTransformer(PagnolXlPreTrainedModel):
454
+ """Pagnol's Transformer model"""
455
+
456
+ def __init__(self, config: PagnolXlConfig):
457
+ super().__init__(config)
458
+ self.layers = nn.ModuleList(
459
+ [PagnoXlBlock(config) for _ in range(config.n_layers)]
460
+ )
461
+ self.gradient_checkpointing = False
462
+ self.init_weights()
463
+
464
+ def init_weights(self):
465
+ for layer in self.layers:
466
+ layer.init_weights()
467
+
468
+ @staticmethod
469
+ def _prepare_attn_mask(
470
+ attention_mask: torch.Tensor,
471
+ input_shape: Tuple[int, int],
472
+ past_key_values_length: int,
473
+ ) -> torch.BoolTensor:
474
+ # Create a causal mask
475
+ # The attention mask we receive as input should cover the whole extended sequence, including any past
476
+ # cache, so its shape should be [batch_size, seq_length + past_key_values_length]
477
+ # The output shape will be [batch_size, 1, seq_length, seq_length + past_key_values_length]
478
+ if input_shape[1] + past_key_values_length != attention_mask.shape[1]:
479
+ raise ValueError(
480
+ "Attention mask shape should be (batch_size, seq_length + past_key_values_length)"
481
+ f" but is {attention_mask.shape} with input_ids shape {input_shape} and past length"
482
+ f" {past_key_values_length}."
483
+ )
484
+ combined_attention_mask = None
485
+ device = attention_mask.device
486
+ _, seq_length = input_shape
487
+
488
+ if seq_length > 1:
489
+ combined_attention_mask = _make_causal_mask(
490
+ input_shape,
491
+ device=device,
492
+ past_key_values_length=past_key_values_length,
493
+ )
494
+
495
+ # [batch_size, seq_length + past_key_values_length] -> [batch_size, 1, seq_length, seq_length + past_key_values_length]
496
+ expanded_attn_mask = _expand_mask(
497
+ attention_mask, past_key_values_length=past_key_values_length
498
+ )
499
+ combined_attention_mask = (
500
+ expanded_attn_mask
501
+ if combined_attention_mask is None
502
+ else expanded_attn_mask | combined_attention_mask
503
+ )
504
+
505
+ return combined_attention_mask
506
+
507
+ def forward(
508
+ self,
509
+ inputs_embeds: Optional[torch.LongTensor],
510
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
511
+ attention_mask: Optional[torch.Tensor] = None,
512
+ head_mask: Optional[torch.LongTensor] = None,
513
+ use_cache: Optional[bool] = None,
514
+ output_attentions: Optional[bool] = None,
515
+ output_hidden_states: Optional[bool] = None,
516
+ return_dict: Optional[bool] = None,
517
+ ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]:
518
+
519
+ output_attentions = (
520
+ output_attentions
521
+ if output_attentions is not None
522
+ else self.config.output_attentions
523
+ )
524
+ output_hidden_states = (
525
+ output_hidden_states
526
+ if output_hidden_states is not None
527
+ else self.config.output_hidden_states
528
+ )
529
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
530
+ return_dict = (
531
+ return_dict if return_dict is not None else self.config.use_return_dict
532
+ )
533
+
534
+ batch_size, seq_length, _ = inputs_embeds.shape
535
+ device = inputs_embeds.device
536
+
537
+ # Prepare head mask if needed
538
+ # 1.0 in head_mask indicate we keep the head
539
+ # attention_probs has shape batch_size x num_heads x N x N
540
+ # head_mask has shape n_layer x batch x num_heads x N x N
541
+ head_mask = self.get_head_mask(head_mask, self.config.n_layers)
542
+
543
+ if past_key_values is None:
544
+ past_length = 0
545
+ past_key_values = tuple([None] * len(self.layers))
546
+ else:
547
+ past_length = past_key_values[0][0].size(-2)
548
+
549
+ hidden_states = inputs_embeds
550
+
551
+ if attention_mask is None:
552
+ attention_mask = torch.ones(
553
+ (batch_size, seq_length + past_length),
554
+ device=hidden_states.device,
555
+ )
556
+ else:
557
+ attention_mask = attention_mask.to(hidden_states.device)
558
+
559
+ causal_mask = self._prepare_attn_mask(
560
+ attention_mask,
561
+ input_shape=(batch_size, seq_length),
562
+ past_key_values_length=past_length,
563
+ )
564
+
565
+ presents = () if use_cache else None
566
+ all_self_attentions = () if output_attentions else None
567
+ all_hidden_states = () if output_hidden_states else None
568
+
569
+ if self.gradient_checkpointing and self.training and use_cache:
570
+ logger.warning_once(
571
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
572
+ )
573
+ use_cache = False
574
+
575
+ for i, (layer, layer_past) in enumerate(zip(self.layers, past_key_values)):
576
+ if output_hidden_states:
577
+ all_hidden_states = all_hidden_states + (hidden_states,)
578
+
579
+ if self.gradient_checkpointing and self.training:
580
+ outputs = self._gradient_checkpointing_func(
581
+ layer.__call__,
582
+ hidden_states,
583
+ None,
584
+ causal_mask,
585
+ head_mask[i],
586
+ use_cache,
587
+ output_attentions,
588
+ )
589
+ else:
590
+ outputs = layer(
591
+ hidden_states,
592
+ layer_past=layer_past,
593
+ attention_mask=causal_mask,
594
+ head_mask=head_mask[i],
595
+ use_cache=use_cache,
596
+ output_attentions=output_attentions,
597
+ )
598
+ hidden_states = outputs[0]
599
+ if use_cache is True:
600
+ presents = presents + (outputs[1],)
601
+
602
+ if output_attentions:
603
+ all_self_attentions = all_self_attentions + (
604
+ outputs[2 if use_cache else 1],
605
+ )
606
+
607
+ if output_hidden_states:
608
+ all_hidden_states = all_hidden_states + (hidden_states,)
609
+
610
+ if not return_dict:
611
+ return tuple(
612
+ v
613
+ for v in [
614
+ hidden_states,
615
+ presents,
616
+ all_hidden_states,
617
+ all_self_attentions,
618
+ ]
619
+ if v is not None
620
+ )
621
+
622
+ return BaseModelOutputWithPastAndCrossAttentions(
623
+ last_hidden_state=hidden_states,
624
+ past_key_values=presents,
625
+ hidden_states=all_hidden_states,
626
+ attentions=all_self_attentions,
627
+ )
628
+
629
+
630
+ class PagnolXlModel(PagnolXlPreTrainedModel):
631
+ def __init__(self, config: PagnolXlConfig):
632
+ super().__init__(config)
633
+ self.config = config
634
+ self.embedding = PagnolXlEmbeddings(config)
635
+ self.transformer = PagnolXlTransformer(config)
636
+ self.final_norm = PagnolXlLayerNorm(config)
637
+ self.projector = PagnolXlLMHead(config)
638
+
639
+ # Initialize weights and apply final processing
640
+ self.post_init()
641
+
642
+ def get_input_embeddings(self):
643
+ return self.embedding.embedding
644
+
645
+ def set_input_embeddings(self, value):
646
+ self.embedding.embedding = value
647
+
648
+ def forward(
649
+ self,
650
+ input_ids: Optional[torch.LongTensor] = None,
651
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
652
+ attention_mask: Optional[torch.Tensor] = None,
653
+ head_mask: Optional[torch.Tensor] = None,
654
+ inputs_embeds: Optional[torch.Tensor] = None,
655
+ use_cache: Optional[bool] = None,
656
+ output_attentions: Optional[bool] = None,
657
+ output_hidden_states: Optional[bool] = None,
658
+ return_dict: Optional[bool] = None,
659
+ ) -> Union[Tuple[torch.Tensor], CausalLMOutputWithCrossAttentions]:
660
+
661
+ return_dict = (
662
+ return_dict if return_dict is not None else self.config.use_return_dict
663
+ )
664
+
665
+ if input_ids is not None and inputs_embeds is not None:
666
+ raise ValueError(
667
+ "You cannot specify both input_ids and inputs_embeds at the same time"
668
+ )
669
+ elif input_ids is not None:
670
+ batch_size, seq_length = input_ids.shape
671
+ elif inputs_embeds is not None:
672
+ batch_size, seq_length, _ = inputs_embeds.shape
673
+ else:
674
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
675
+
676
+        if inputs_embeds is None:
+            # The token embedding module is `self.embedding` (a PagnolXlEmbeddings);
+            # this class has no `word_embeddings` attribute.
+            inputs_embeds = self.embedding(input_ids)
678
+
679
+ transformer_outputs = self.transformer(
680
+ inputs_embeds,
681
+ past_key_values=past_key_values,
682
+ attention_mask=attention_mask,
683
+ head_mask=head_mask,
684
+ use_cache=use_cache,
685
+ output_attentions=output_attentions,
686
+ output_hidden_states=output_hidden_states,
687
+ return_dict=return_dict,
688
+ )
689
+
690
+ return transformer_outputs
691
+
692
+
693
+ class PagnolXlLMHead(nn.Module):
694
+ """Pagnol's Language Model head Projector"""
695
+
696
+    def __init__(self, config: PagnolXlConfig):
+        super().__init__()
+        # Keep a reference to the config: init_weights below reads self.config.sigma.
+        self.config = config
+        self.proj = nn.Linear(config.d_model, config.vocab_size, bias=False)
699
+
700
+ def init_weights(self):
701
+ torch.nn.init.normal_(self.proj.weight, mean=0.0, std=self.config.sigma)
702
+
703
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
704
+ return self.proj(hidden_states)
705
+
706
+
707
+ class PagnolXlForCausalLM(PagnolXlPreTrainedModel):
708
+ def __init__(self, config: PagnolXlConfig):
709
+ super().__init__(config)
710
+ self.config = config
711
+ self.embedding = PagnolXlEmbeddings(config)
712
+ self.transformer = PagnolXlTransformer(config)
713
+ self.final_norm = PagnolXlLayerNorm(config)
714
+ self.projector = PagnolXlLMHead(config)
715
+
716
+ # Initialize weights and apply final processing
717
+ self.post_init()
718
+
719
+ def get_input_embeddings(self):
720
+ return self.embedding.embedding
721
+
722
+ def set_input_embeddings(self, value):
723
+ self.embedding.embedding = value
724
+
725
+ def prepare_inputs_for_generation(
726
+ self,
727
+ input_ids: torch.LongTensor,
728
+ past_key_values: Optional[torch.Tensor] = None,
729
+ **kwargs,
730
+ ) -> dict:
731
+ # Omit tokens covered by past_key_values
732
+ if past_key_values:
733
+ past_length = past_key_values[0][0].shape[2]
734
+
735
+ # Some generation methods already pass only the last input ID
736
+ if input_ids.shape[1] > past_length:
737
+ remove_prefix_length = past_length
738
+ else:
739
+ # Default to old behavior: keep only final ID
740
+ remove_prefix_length = input_ids.shape[1] - 1
741
+
742
+ input_ids = input_ids[:, remove_prefix_length:]
743
+
744
+ attention_mask = kwargs.get("attention_mask", None)
745
+
746
+ return {
747
+ "input_ids": input_ids,
748
+ "past_key_values": past_key_values,
749
+ "use_cache": kwargs.get("use_cache"),
750
+ "attention_mask": attention_mask,
751
+ }
752
+
753
+ def forward(
754
+ self,
755
+ input_ids: Optional[torch.LongTensor] = None,
756
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
757
+ attention_mask: Optional[torch.Tensor] = None,
758
+ head_mask: Optional[torch.Tensor] = None,
759
+ inputs_embeds: Optional[torch.Tensor] = None,
760
+ labels: Optional[torch.Tensor] = None,
761
+ use_cache: Optional[bool] = None,
762
+ output_attentions: Optional[bool] = None,
763
+ output_hidden_states: Optional[bool] = None,
764
+ return_dict: Optional[bool] = None,
765
+ ) -> Union[Tuple[torch.Tensor], CausalLMOutputWithCrossAttentions]:
766
+
767
+ return_dict = (
768
+ return_dict if return_dict is not None else self.config.use_return_dict
769
+ )
770
+
771
+ if input_ids is not None and inputs_embeds is not None:
772
+ raise ValueError(
773
+ "You cannot specify both input_ids and inputs_embeds at the same time"
774
+ )
775
+ elif input_ids is not None:
776
+ batch_size, seq_length = input_ids.shape
777
+ elif inputs_embeds is not None:
778
+ batch_size, seq_length, _ = inputs_embeds.shape
779
+ else:
780
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
781
+
782
+ if inputs_embeds is None:
783
+ inputs_embeds = self.embedding(input_ids)
784
+
785
+ transformer_outputs = self.transformer(
786
+ inputs_embeds,
787
+ past_key_values=past_key_values,
788
+ attention_mask=attention_mask,
789
+ head_mask=head_mask,
790
+ use_cache=use_cache,
791
+ output_attentions=output_attentions,
792
+ output_hidden_states=output_hidden_states,
793
+ return_dict=return_dict,
794
+ )
795
+
796
+ hidden_states = transformer_outputs[0]
797
+
798
+ hidden_states = self.final_norm(hidden_states)
799
+
800
+ lm_logits = self.projector(hidden_states)
801
+
802
+ loss = None
803
+ if labels is not None:
804
+ # Shift so that tokens < n predict n
805
+ shift_logits = lm_logits[..., :-1, :].contiguous()
806
+ shift_labels = labels[..., 1:].contiguous()
807
+ batch_size, seq_length, vocab_size = shift_logits.shape
808
+ # Flatten the tokens
809
+ loss_fct = CrossEntropyLoss()
810
+ loss = loss_fct(
811
+ shift_logits.view(batch_size * seq_length, vocab_size),
812
+ shift_labels.view(batch_size * seq_length),
813
+ )
814
+
815
+ if not return_dict:
816
+ output = (lm_logits,) + transformer_outputs[1:]
817
+ return ((loss,) + output) if loss is not None else output
818
+
819
+ return CausalLMOutputWithCrossAttentions(
820
+ loss=loss,
821
+ logits=lm_logits,
822
+ past_key_values=transformer_outputs.past_key_values,
823
+ hidden_states=transformer_outputs.hidden_states,
824
+ attentions=transformer_outputs.attentions,
825
+ )
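
To sanity-check the custom modeling code end to end, a minimal smoke test (not part of the upload; it assumes the `lightonai/pagnol-xl` repo id, network access, and enough RAM for the ~6.5 GB float32 checkpoint; the expected shapes follow from `config.json`: 48 layers, 25 heads, head_dim 64, vocab 50262):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightonai/pagnol-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

inputs = tokenizer("Marseille est une ville", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_cache=True)

print(out.logits.shape)                 # (1, seq_len, 50262): one row of logits per input token
print(len(out.past_key_values))         # 48 cached (key, value) pairs, one per transformer block
print(out.past_key_values[0][0].shape)  # (1, 25, seq_len, 64): batch, heads, seq, head_dim
```
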
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3169c2da97dec17fce188e8068dc594957a713f5cefe29af019c47e8ddc65565
size 6545871601
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,52 @@
{
  "tokenizer_class": "GPT2TokenizerFast",
  "eos_token": {
    "content": "<EOS>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "__type": "AddedToken"
  },
  "unk_token": {
    "content": "<UNK>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "__type": "AddedToken"
  },
  "pad_token": {
    "content": "<PAD>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "__type": "AddedToken"
  },
  "bos_token": {
    "content": "<EOS>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "__type": "AddedToken"
  },
  "sep_token": {
    "content": "<SEP>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "__type": "AddedToken"
  },
  "mask_token": {
    "content": "<MASK>",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "__type": "AddedToken"
  },
  "model_max_length": 2048
}
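
The tokenizer declared above is exposed through `GPT2TokenizerFast` (a byte-level BPE). A small illustrative check (assuming the `lightonai/pagnol-xl` repo id; the printed values simply restate the JSON above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lightonai/pagnol-xl")

print(type(tokenizer).__name__)                  # GPT2TokenizerFast
print(tokenizer.model_max_length)                # 2048, matching max_seq_len in config.json
print(tokenizer.eos_token, tokenizer.bos_token)  # <EOS> is reused as both EOS and BOS
print(tokenizer.pad_token, tokenizer.unk_token)  # <PAD>, <UNK>
print(tokenizer("Bonjour PAGnol !")["input_ids"])  # token ids produced by the byte-level BPE
```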