Doron Adler committed
Commit 6e1c9c6
1 Parent(s): 9795464

* Updated model card
* Added sample model converters
README.md CHANGED
@@ -3,9 +3,9 @@ language: he
 
 thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
 widget:
- - text: "עוד בימי קדם"
- - text: "קוראים לי דורון ואני מעוניין ל"
- - text: "קוראים לי איציק ואני חושב ש"
+ - text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה"
+ - text: "שלום, קוראים לי"
+ - text: "הארי פוטר חייך חיוך נבוך"
 - text: "החתול שלך מאוד חמוד ו"
 
 license: mit
@@ -13,18 +13,28 @@ license: mit
 
 # hebrew-distilgpt2
 
- A tiny GPT2 based Hebrew text generation model trained on a TPUv3-8 which was made avilable to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
+ A tiny GPT2-based Hebrew text generation model, initially trained on a TPUv3-8 that was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program and then further fine-tuned on GPU.
 
 ## Dataset
 
- oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
+ ### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
 
 The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
 
+ ### CC-100 (he) - [Homepage](https://data.statmt.org/cc-100/)
+
+ This corpus comprises monolingual data for 100+ languages, including romanized variants. It was constructed from the January-December 2018 Common Crawl snapshots, using the URLs and paragraph indices provided by the open-source CC-Net repository. Each file contains documents separated by double newlines, with paragraphs within the same document separated by a single newline.
+
+ ### Misc
+ * Hebrew Twitter
+ * Wikipedia
+ * Various other sources
+
 ## Training
 
 * Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR>
 * I have made a list of items which might make it easier for others to use this script. The list was posted to [this discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351)
+ * Further training was performed on GPU
 
 ## Usage
 
@@ -33,77 +43,25 @@ The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtai
 
 ```python
 
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- #pip install tokenizers==0.10.3 transformers==4.8.0
-
- tokenizer = AutoTokenizer.from_pretrained("Norod78/distilgpt2-base-pretrained-he")
- model = AutoModelForCausalLM.from_pretrained("Norod78/distilgpt2-base-pretrained-he", pad_token_id=tokenizer.eos_token_id)
-
- prompt_text = "הנבחרת האולימפית של ישראל זכתה השנה"
- max_len = 50
- sample_output_num = 3
- seed = 1000
-
- import numpy as np
- import torch
-
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count()
-
- print(f"device: {device}, n_gpu: {n_gpu}")
-
- np.random.seed(seed)
- torch.manual_seed(seed)
- if n_gpu > 0:
-     torch.cuda.manual_seed_all(seed)
-
- model.to(device)
-
- encoded_prompt = tokenizer.encode(
-     prompt_text, add_special_tokens=False, return_tensors="pt")
-
- encoded_prompt = encoded_prompt.to(device)
-
- if encoded_prompt.size()[-1] == 0:
-     input_ids = None
- else:
-     input_ids = encoded_prompt
-
- print("input_ids = " + str(input_ids))
-
- if input_ids != None:
-     max_len += len(encoded_prompt[0])
-     if max_len > 1024:
-         max_len = 1024
-
- print("Updated max_len = " + str(max_len))
-
- stop_token = "<|endoftext|>"
- new_lines = "\n\n\n"
-
- sample_outputs = model.generate(
-     input_ids,
-     do_sample=True,
-     max_length=max_len,
-     top_k=50,
-     top_p=0.95,
-     num_return_sequences=sample_output_num
- )
-
- print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
- for i, sample_output in enumerate(sample_outputs):
-
-     text = tokenizer.decode(sample_output, skip_special_tokens=True)
-
-     # Remove all text after the stop token
-     text = text[: text.find(stop_token) if stop_token else None]
-
-     # Remove all text after 3 newlines
-     text = text[: text.find(new_lines) if new_lines else None]
-
-     print("\n{}: {}".format(i, text))
-     print("\n" + 100 * '-')
-
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+ def main():
+     model_name = "Norod78/distilgpt2-base-pretrained-he"
+
+     prompt_text = "שלום, קוראים לי"
+     generated_max_length = 192
+
+     print("Loading model...")
+     model = AutoModelForCausalLM.from_pretrained(model_name)
+     print('Loading Tokenizer...')
+     tokenizer = AutoTokenizer.from_pretrained(model_name)
+     text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
+
+     print("Generating text...")
+     result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature=1, repetition_penalty=5.0, max_length=generated_max_length)
+
+     print("result = " + str(result))
+
+ if __name__ == '__main__':
+     main()
 ```
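
As a point of reference for the Dataset section above (not part of the model card itself), here is a minimal sketch of pulling the OSCAR split named in the permalink with the `datasets` library; the streaming flag and the `text` field name are assumptions about that dataset's loader.

```python
from datasets import load_dataset

# Stream the Hebrew OSCAR split referenced in the Dataset section above.
# "unshuffled_deduplicated_he" is the config name from the dataset permalink.
dataset = load_dataset("oscar", "unshuffled_deduplicated_he", split="train", streaming=True)

# Each record is assumed to carry the raw crawled text in a "text" field.
for i, record in enumerate(dataset):
    print(record["text"][:80])
    if i == 2:
        break
```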
converters/convert2coreml.py ADDED
@@ -0,0 +1,439 @@
+ """
+ Recreate the Core ML model from scratch using
+ coremltools' neural_network.NeuralNetworkBuilder
+ """
+ import coremltools
+ import coremltools.models.datatypes as datatypes
+ from coremltools.models import neural_network as neural_network
+ from coremltools.models.utils import save_spec
+ import numpy as np
+
+ # get weights
+ from transformers import GPT2LMHeadModel, GPT2Tokenizer
+
+ model_name = "./distilgpt2-base-pretrained-he"
+ save_directory = "tmp/coreml/"
+ #!mkdir -p $save_directory
+ file_name = "model.mlmodel"
+
+ tokenizer = GPT2Tokenizer.from_pretrained(model_name)
+ lm_head_model = GPT2LMHeadModel.from_pretrained(model_name).eval()
+ model = lm_head_model.transformer
+
+ wte = model.wte.weight.data.numpy().transpose()  # shape (768, 50257) /!\ i hate this
+ wpe = model.wpe.weight.data.numpy().transpose()  # shape (768, 1024)
+
+ sequence_length = 64
+ steps = 6
+
+ # build model
+ input_features = [
+     ('input_ids', datatypes.Array(sequence_length)),
+     ('position_ids', datatypes.Array(sequence_length)),
+ ]
+ output_features = [('output_logits', None)]
+
+ builder = neural_network.NeuralNetworkBuilder(
+     input_features,
+     output_features,
+     mode=None,
+     disable_rank5_shape_mapping=True,
+ )
+ builder.add_expand_dims(
+     name='input_ids_expanded_to_rank5',
+     input_name='input_ids',
+     output_name='input_ids_expanded_to_rank5',
+     axes=(1, 2, 3, 4)
+ )
+ builder.add_expand_dims(
+     name='position_ids_expanded_to_rank5',
+     input_name='position_ids',
+     output_name='position_ids_expanded_to_rank5',
+     axes=(1, 2, 3, 4)
+ )
+ builder.add_embedding(
+     name='token_embeddings',
+     input_name='input_ids_expanded_to_rank5',
+     output_name='token_embeddings',
+     W=wte,
+     b=None,
+     input_dim=50257,
+     output_channels=768,
+     has_bias=False,
+ )
+ builder.add_embedding(
+     name='positional_embeddings',
+     input_name='position_ids_expanded_to_rank5',
+     output_name='positional_embeddings',
+     W=wpe,
+     b=None,
+     input_dim=1024,
+     output_channels=768,
+     has_bias=False,
+ )
+
+ # Input:, Output: (seq, 1, 768, 1, 1)
+ builder.add_add_broadcastable(
+     name='embeddings_addition',
+     input_names=['token_embeddings', 'positional_embeddings'],
+     output_name=f'{0}_previous_block'
+ )
+
+ for i in range(steps):
+     print(i)
+     ln_weight = model.h[i].ln_1.weight.data.numpy().reshape((1, 1, 768, 1, 1))
+     ln_bias = model.h[i].ln_1.bias.data.numpy().reshape((1, 1, 768, 1, 1))
+     ln_epsilon = model.h[i].ln_1.eps
+
+     builder.add_mvn(
+         name=f"{i}_block_ln_1",
+         input_name=f"{i}_previous_block",
+         # output_name=f"{i}_block_ln_1_output",
+         output_name=f"{i}_block_ln_1",
+         across_channels=True,
+         normalize_variance=True,
+         epsilon=ln_epsilon
+     )
+
+     builder.add_scale(
+         name=f"{i}_block_ln_1_scaled",
+         input_name=f"{i}_block_ln_1",
+         output_name=f"{i}_block_ln_1_scaled",
+         W=ln_weight,
+         b=ln_bias,
+         has_bias=True,
+         shape_scale=[768],
+         shape_bias=[768]
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_ln_1_reshape",
+         input_name=f"{i}_block_ln_1_scaled",
+         output_name=f"{i}_block_ln_1_scaled_transposed",
+         axes=(1, 0, 2, 3, 4)
+     )
+
+     conv_1D_bias = model.h[i].attn.c_attn.bias.data.numpy().reshape((1, 1, 2304, 1, 1))
+     conv_1D_weights = model.h[i].attn.c_attn.weight.data.numpy().transpose().reshape((1, 768, 2304, 1, 1))
+
+     builder.add_inner_product(
+         name=f"{i}_block_attn_conv",
+         input_name=f"{i}_block_ln_1_scaled_transposed",
+         output_name=f"{i}_block_attn_conv",
+         input_channels=768,
+         output_channels=2304,
+         W=conv_1D_weights,
+         b=conv_1D_bias,
+         has_bias=True
+     )
+
+     builder.add_split(
+         name=f"{i}_block_attn_qkv_split",
+         input_name=f"{i}_block_attn_conv",
+         output_names=[f"{i}_block_attn_q", f"{i}_block_attn_k", f"{i}_block_attn_v"]
+     )
+
+     builder.add_rank_preserving_reshape(
+         name=f"{i}_block_attn_q_reshape",
+         input_name=f"{i}_block_attn_q",
+         output_name=f"{i}_block_attn_q_reshape",
+         output_shape=(1, 1, sequence_length, 12, 64)
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_attn_q_reshape_permuted",
+         input_name=f"{i}_block_attn_q_reshape",
+         output_name=f"{i}_block_attn_q_reshape_permuted",
+         axes=(0, 1, 3, 2, 4)
+     )
+
+     builder.add_rank_preserving_reshape(
+         name=f"{i}_block_attn_k_reshape",
+         input_name=f"{i}_block_attn_k",
+         output_name=f"{i}_block_attn_k_reshape",
+         output_shape=(1, 1, sequence_length, 12, 64)
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_attn_k_reshape_permuted",
+         input_name=f"{i}_block_attn_k_reshape",
+         output_name=f"{i}_block_attn_k_reshape_permuted",
+         axes=(0, 1, 3, 4, 2)
+     )
+
+     builder.add_rank_preserving_reshape(
+         name=f"{i}_block_attn_v_reshape",
+         input_name=f"{i}_block_attn_v",
+         output_name=f"{i}_block_attn_v_reshape",
+         output_shape=(1, 1, sequence_length, 12, 64)
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_attn_v_reshape_permuted",
+         input_name=f"{i}_block_attn_v_reshape",
+         output_name=f"{i}_block_attn_v_reshape_permuted",
+         axes=(0, 1, 3, 2, 4)
+     )
+
+     builder.add_batched_mat_mul(
+         name=f"{i}_block_attn_qv_matmul",
+         input_names=[f"{i}_block_attn_q_reshape_permuted", f"{i}_block_attn_k_reshape_permuted"],
+         output_name=f"{i}_block_attn_qv_matmul"
+     )
+
+     builder.add_scale(
+         name=f"{i}_block_attn_qv_matmul_scaled",
+         input_name=f"{i}_block_attn_qv_matmul",
+         output_name=f"{i}_block_attn_qv_matmul_scaled",
+         W=np.array(1/8),
+         b=0,
+         has_bias=False
+     )
+
+     bias_0 = model.h[i].attn.bias
+     nd = ns = sequence_length
+     b = (model.h[i].attn.bias[:, :, ns-nd:ns, :ns]).unsqueeze(0)
+
+     builder.add_scale(
+         name=f"{i}_block_attn_bias",
+         input_name=f"{i}_block_attn_qv_matmul_scaled",
+         output_name=f"{i}_block_attn_bias",
+         W=b,
+         b=None,
+         has_bias=False,
+         shape_scale=[1, sequence_length, sequence_length]
+     )
+
+     bias_constant_0 = - 1e4 * (1 - b)
+
+     builder.add_bias(
+         name=f"{i}_block_attn_afterbias",
+         input_name=f"{i}_block_attn_bias",
+         output_name=f"{i}_block_attn_afterbias",
+         # output_name=f"output_logits",
+         b=bias_constant_0,
+         shape_bias=[1, sequence_length, sequence_length],
+     )
+
+     builder.add_squeeze(
+         name=f"{i}_squeezit",
+         input_name=f"{i}_block_attn_afterbias",
+         output_name=f"{i}_squeezit",
+         axes=[0, 1]
+     )
+
+     builder.add_softmax(
+         name=f"{i}_block_attn_softmax",
+         input_name=f"{i}_squeezit",
+         output_name=f"{i}_block_attn_softmax",
+     )
+
+     builder.add_expand_dims(
+         name=f"{i}_expandit",
+         input_name=f"{i}_block_attn_softmax",
+         output_name=f"{i}_expandit",
+         axes=[0, 1]
+     )
+
+     builder.add_batched_mat_mul(
+         name=f"{i}_block_full_attention",
+         input_names=[f"{i}_expandit", f"{i}_block_attn_v_reshape_permuted"],
+         output_name=f"{i}_block_full_attention"
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_full_attention_merged_t",
+         input_name=f"{i}_block_full_attention",
+         output_name=f"{i}_block_full_attention_merged_t",
+         axes=[0, 1, 3, 2, 4]
+     )
+
+     builder.add_rank_preserving_reshape(
+         name=f"{i}_block_full_attention_merged",
+         input_name=f"{i}_block_full_attention_merged_t",
+         output_name=f"{i}_block_full_attention_merged",
+         output_shape=[1, 1, 1, sequence_length, 768]
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_attn_conv_proj_t",
+         input_name=f"{i}_block_full_attention_merged",
+         output_name=f"{i}_block_attn_conv_proj_t",
+         axes=[0, 3, 4, 1, 2]
+     )
+
+     conv_1D_proj_bias = model.h[i].attn.c_proj.bias.data.numpy().reshape((1, 1, 768, 1, 1))
+     conv_1D_proj_weights = model.h[i].attn.c_proj.weight.data.numpy().transpose().reshape((1, 768, 768, 1, 1))
+
+     # Input:, Output: (1, 3, 768, 1, 1)
+     builder.add_inner_product(
+         name=f"{i}_block_attn_conv_proj",
+         input_name=f"{i}_block_attn_conv_proj_t",
+         output_name=f"{i}_block_attn_conv_proj",
+         input_channels=768,
+         output_channels=768,
+         W=conv_1D_proj_weights,
+         b=conv_1D_proj_bias,
+         has_bias=True
+     )
+
+     # Input: (seq, 1, 768, 1, 1), Output: (1, seq, 768, 1, 1)
+     builder.add_transpose(
+         name=f"{i}_previous_block_t",
+         input_name=f'{i}_previous_block',
+         output_name=f"{i}_previous_block_t",
+         axes=[1, 0, 2, 3, 4]
+     )
+
+     # Input: [(1, seq, 768, 1, 1), (1, seq, 768, 1, 1)], Output: (1, seq, 768, 1, 1)
+     builder.add_add_broadcastable(
+         name=f"{i}_block_xa_sum",
+         input_names=[f"{i}_previous_block_t", f"{i}_block_attn_conv_proj"],
+         output_name=f"{i}_block_xa_sum",
+         # output_name=f"output_logits"
+     )
+
+     ln_2_weight = model.h[i].ln_2.weight.data.numpy().reshape((1, 1, 768, 1, 1))
+     ln_2_bias = model.h[i].ln_2.bias.data.numpy().reshape((1, 1, 768, 1, 1))
+     ln_2_epsilon = model.h[i].ln_2.eps
+
+     # Input: (1, seq, 768, 1, 1), Output:
+     builder.add_mvn(
+         name=f"{i}_block_ln_2",
+         input_name=f"{i}_block_xa_sum",
+         output_name=f"{i}_block_ln_2",
+         across_channels=True,
+         normalize_variance=True,
+         epsilon=ln_2_epsilon
+     )
+
+     builder.add_scale(
+         name=f"{i}_block_ln_2_scaled",
+         input_name=f"{i}_block_ln_2",
+         # output_name=f"output_logits",
+         output_name=f"{i}_block_ln_2_scaled",
+         W=ln_2_weight,
+         b=ln_2_bias,
+         has_bias=True,
+         shape_scale=[768],
+         shape_bias=[768]
+     )
+
+     mlp_conv_1D_fc_bias = model.h[i].mlp.c_fc.bias.data.numpy().reshape((1, 1, 3072, 1, 1))
+     mlp_conv_1D_fc_weights = model.h[i].mlp.c_fc.weight.data.numpy().transpose().reshape((1, 768, 3072, 1, 1))
+
+     # Input:, Output: (1, 3, 3072, 1, 1)
+     builder.add_inner_product(
+         name=f"{i}_block_mlp_conv_fc",
+         input_name=f"{i}_block_ln_2_scaled",
+         output_name=f"{i}_block_mlp_conv_fc",
+         # output_name=f"output_logits",
+         input_channels=768,
+         output_channels=3072,
+         W=mlp_conv_1D_fc_weights,
+         b=mlp_conv_1D_fc_bias,
+         has_bias=True
+     )
+
+     builder.add_gelu(
+         name=f"{i}_block_mlp_gelu",
+         input_name=f"{i}_block_mlp_conv_fc",
+         output_name=f"{i}_block_mlp_gelu",
+         # output_name=f"output_logits",
+         mode='TANH_APPROXIMATION'
+     )
+
+     mlp_conv_1D_proj_bias = model.h[i].mlp.c_proj.bias.data.numpy().reshape((1, 1, 768, 1, 1))
+     mlp_conv_1D_proj_weights = model.h[i].mlp.c_proj.weight.data.numpy().transpose().reshape((1, 3072, 768, 1, 1))
+
+     # Input:, Output: (1, 3, 3072, 1, 1)
+     builder.add_inner_product(
+         name=f"{i}_block_mlp_conv_proj",
+         input_name=f"{i}_block_mlp_gelu",
+         output_name=f"{i}_block_mlp_conv_proj",
+         # output_name=f"output_logits",
+         input_channels=3072,
+         output_channels=768,
+         W=mlp_conv_1D_proj_weights,
+         b=mlp_conv_1D_proj_bias,
+         has_bias=True
+     )
+
+     builder.add_add_broadcastable(
+         name=f"{i}_block_xm_sum",
+         input_names=[f"{i}_block_xa_sum", f"{i}_block_mlp_conv_proj"],
+         # output_name=f"output_logits"
+         output_name=f"{i + 1}_previous_block_final"
+     )
+
+     builder.add_transpose(
+         name=f"{i}_block_xm_sum_t",
+         input_name=f"{i + 1}_previous_block_final",
+         output_name=f"{i + 1}_previous_block",
+         axes=[1, 0, 2, 3, 4]
+     )
+
+
+ ln_f_weight = model.ln_f.weight.data.numpy().reshape((1, 1, 768, 1, 1))
+ ln_f_bias = model.ln_f.bias.data.numpy().reshape((1, 1, 768, 1, 1))
+ ln_f_epsilon = model.ln_f.eps
+
+ # Input: (1, seq, 768, 1, 1), Output:
+ builder.add_mvn(
+     name=f"ln_f",
+     input_name=f"{steps}_previous_block_final",
+     output_name=f"ln_f",
+     # output_name=f"output_logits",
+     across_channels=True,
+     normalize_variance=True,
+     epsilon=ln_f_epsilon
+ )
+
+ builder.add_scale(
+     name=f"ln_f_scaled",
+     input_name=f"ln_f",
+     output_name=f"ln_f_scaled",
+     # output_name=f"output_logits",
+     W=ln_f_weight,
+     b=ln_f_bias,
+     has_bias=True,
+     shape_scale=[768],
+     shape_bias=[768]
+ )
+
+ lm_head_weights = lm_head_model.lm_head.weight.data.numpy().reshape((1, 50257, 768, 1, 1))
+
+ builder.add_inner_product(
+     name="lm_head",
+     input_name="ln_f_scaled",
+     output_name="output_logits",
+     input_channels=768,
+     output_channels=50257,
+     W=lm_head_weights,
+     b=None,
+     has_bias=False
+ )
+
+ # compile spec to model
+ mlmodel = coremltools.models.MLModel(builder.spec)
+
+ #save_spec(builder.spec, f'./{model_name}-{sequence_length}-{steps}.mlmodel')
+ save_spec(builder.spec, f'./{save_directory}{file_name}')
+ # model = coremltools.models.MLModel('gpt2.mlmodel')
+
+ # input_ids = np.zeros(sequence_length)
+ # position_ids = np.arange(sequence_length).astype(np.float)
+
+ # input_data = {
+ #     'input_ids': input_ids,
+ #     'position_ids': position_ids,
+ # }
+
+ # predictions = mlmodel.predict(input_data)["output_logits"]
+ # equal = np.amax(predictions - mlp_conv_proj.detach().numpy())
+
+ # print(predictions)
+
+ # save_spec(builder.spec, 'gpt2.mlmodel')
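
For reference (not part of the commit), here is a hedged sketch of exercising the exported Core ML model, mirroring the commented-out prediction code at the end of convert2coreml.py; the feature names and fixed sequence length come from the builder definition above, and running it assumes a macOS machine with coremltools installed.

```python
import numpy as np
import coremltools

# Path assumed from the save_directory/file_name values in convert2coreml.py.
mlmodel = coremltools.models.MLModel("tmp/coreml/model.mlmodel")

sequence_length = 64  # must match the value hard-coded in the converter

# Dummy inputs: token ids padded to the fixed sequence length, plus positions.
input_ids = np.zeros(sequence_length)
position_ids = np.arange(sequence_length).astype(np.float64)

# Feature names match the builder's input_features / output_features.
predictions = mlmodel.predict({
    "input_ids": input_ids,
    "position_ids": position_ids,
})["output_logits"]

print(predictions.shape)
```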
converters/convert2flax.py ADDED
@@ -0,0 +1,24 @@
+ import argparse
+ import logging
+
+ import numpy as np
+ import torch
+ import os
+ from transformers import AutoConfig, FlaxAutoModelForCausalLM
+
+ logging.basicConfig(
+     format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
+     datefmt="%m/%d/%Y %H:%M:%S",
+     level=logging.INFO,
+ )
+ logger = logging.getLogger(__name__)
+
+ model_path = "./distilgpt2-base-pretrained-he"
+ save_directory = "./tmp/flax/"
+
+ config_path = os.path.join(model_path, 'config.json')
+
+ # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
+ config = AutoConfig.from_pretrained(config_path)
+ model = FlaxAutoModelForCausalLM.from_pretrained(model_path, from_pt=True, config=config)
+ model.save_pretrained(save_directory)
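
For reference (not part of the commit), a minimal sketch of loading the Flax checkpoint written by convert2flax.py and sampling from it; the tokenizer path, the generation arguments, and the `outputs.sequences` access are assumptions, since the converter itself only saves the model weights.

```python
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

# The converter only saves model weights, so the tokenizer is loaded from the
# original PyTorch checkpoint directory (assumed path from convert2flax.py).
tokenizer = AutoTokenizer.from_pretrained("./distilgpt2-base-pretrained-he")
model = FlaxAutoModelForCausalLM.from_pretrained("./tmp/flax/")

inputs = tokenizer("שלום, קוראים לי", return_tensors="np")
outputs = model.generate(inputs["input_ids"], do_sample=True, max_length=50,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```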
converters/convert2onnx.py ADDED
@@ -0,0 +1,31 @@
+ #!/usr/bin/python
+ # -*- coding: utf-8 -*-
+
+ import transformers
+ from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel, AutoConfig
+ from transformers.onnx import FeaturesManager, convert, export
+ from pathlib import Path
+ import os
+
+ model_id = "./distilgpt2-base-pretrained-he"
+ export_folder = "tmp/onnx/"
+ file_name = "model.onnx"
+
+ print('Loading tokenizer...')
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ print('Saving tokenizer to ', export_folder)
+ tokenizer.save_pretrained(export_folder)
+ print('Loading model...')
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ feature = "causal-lm"
+ model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
+ onnx_config = model_onnx_config(model.config)
+
+ print("model_kind = {0}\nonnx_config = {1}\n".format(model_kind, onnx_config))
+
+ onnx_path = Path(export_folder + file_name)
+
+ print('Exporting model to ', onnx_path)
+ onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
+ print('Done')
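
For reference (not part of the commit), a hedged sketch of running the exported ONNX graph with onnxruntime; the input names are read from the session rather than hard-coded, since they depend on the ONNX config chosen by transformers.onnx.

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Paths assumed from convert2onnx.py above (tokenizer was saved next to the graph).
tokenizer = AutoTokenizer.from_pretrained("tmp/onnx/")
session = ort.InferenceSession("tmp/onnx/model.onnx")

print([inp.name for inp in session.get_inputs()])  # typically input_ids, attention_mask

encoded = tokenizer("שלום, קוראים לי", return_tensors="np")
feeds = {inp.name: encoded[inp.name] for inp in session.get_inputs()}
logits = session.run(None, feeds)[0]

# Greedy pick of the next token from the last position's logits.
next_token_id = int(np.argmax(logits[0, -1]))
print(tokenizer.decode([next_token_id]))
```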
converters/convert2tf.py ADDED
@@ -0,0 +1,21 @@
+ # Requires transformers >= 4.21.0;
+ # Sampling outputs may differ, depending on your hardware.
+ from transformers import AutoTokenizer, TFAutoModelForCausalLM
+
+ model_checkpoint = "./distilgpt2-base-pretrained-he"
+ save_directory = "tmp/tf/"
+ file_name = "tf_model.h5"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
+ model = TFAutoModelForCausalLM.from_pretrained(model_checkpoint, from_pt=True)
+ model.config.pad_token_id = model.config.eos_token_id
+ inputs = tokenizer(["צחוקים ושיגועים"], return_tensors="tf")
+
+ generated = model.generate(**inputs, do_sample=True, seed=(42, 0))
+ print("Sampling output: ", tokenizer.decode(generated[0]))
+
+ model.save_pretrained(save_directory, file_name=file_name)
+ tokenizer.save_pretrained(save_directory)
+
+ # > Sampling output: TensorFlow is a great learning platform for learning about
+ # data structure and structure in data science..