zhongdongy committed on
Commit
63c39dc
1 Parent(s): a7ce7fd

deepspeed-flan-t5-summarization-cn done (#57)


- deepseed-flan-t5-summarization-cn done (709ecdad93bd513af718fecd6d5c719607fe6648)

deepseed-flan-t5-summarization-cn.ipynb ADDED
@@ -0,0 +1,498 @@
+ {
+ "cells": [
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Fine-tune FLAN-T5 XL/XXL with DeepSpeed and Hugging Face Transformers\n",
+ "\n",
+ "The paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) introduced FLAN-T5, an enhanced version of T5 that was fine-tuned on a large mixture of tasks. Simply put, it is a better T5 in every respect: at the same parameter count, FLAN-T5 outperforms T5 by double-digit margins. Google has open-sourced [5 FLAN-T5 checkpoints](https://huggingface.co/models?other=arxiv:2210.11416) on Hugging Face, ranging from 80M to 11B parameters.\n",
+ "\n",
+ "In a previous blog post, we learned how to [fine-tune FLAN-T5 for chat dialogue summarization](https://www.philschmid.de/fine-tune-flan-t5) using the [Base (250M parameters)](https://huggingface.co/google/flan-t5-base) model. In this post, we look at how to scale the training from Base up to [XL (3B parameters)](https://huggingface.co/google/flan-t5-xl) or [XXL (11B parameters)](https://huggingface.co/google/flan-t5-xxl).\n",
+ "\n",
+ "This means we will learn how to fine-tune FLAN-T5 XL and XXL using model parallelism, multiple GPUs, and [DeepSpeed ZeRO](https://www.deepspeed.ai/tutorials/zero/).\n",
+ "\n",
+ "Besides the tutorial itself, we also ran a series of experiments whose results can help you choose the right hardware setup. You can find the details in the *Results & Experiments* section."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# install git lfs for pushing artifacts\n",
+ "!sudo apt install git-lfs\n",
+ "# install torch with the correct cuda version, check nvcc --version\n",
+ "!pip install torch --extra-index-url https://download.pytorch.org/whl/cu116 --upgrade\n",
+ "# install Hugging Face Libraries\n",
+ "!pip install \"transformers==4.26.0\" \"datasets==2.9.0\" \"accelerate==0.16.0\" \"evaluate==0.4.0\" --upgrade\n",
+ "# install deepspeed and ninja for jit compilations of kernels\n",
+ "!pip install \"deepspeed==0.8.0\" ninja --upgrade\n",
+ "# install additional dependencies needed for training\n",
+ "!pip install rouge-score nltk py7zr tensorboard"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Preprocess the dataset"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Similar to the [fine-tune FLAN-T5 for chat dialogue summarization](https://www.philschmid.de/fine-tune-flan-t5) post, we first need to prepare a dataset for fine-tuning. In this post, we fine-tune [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl) on the [CNN Dailymail dataset](https://huggingface.co/datasets/cnn_dailymail). We won't go into detail about how the dataset is created; if you want the step-by-step walkthrough, check out the [previous post](https://www.philschmid.de/fine-tune-flan-t5).\n",
+ "\n",
+ "We define some parameters that all the examples in this post are based on; feel free to adjust them to your needs."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# experiment configuration\n",
+ "model_id = \"google/flan-t5-xxl\" # Hugging Face model id\n",
+ "dataset_id = \"cnn_dailymail\" # Hugging Face dataset id\n",
+ "dataset_config = \"3.0.0\" # config/version of the dataset\n",
+ "save_dataset_path = \"data\" # local path to save the processed dataset\n",
+ "text_column = \"article\" # column of the input text\n",
+ "summary_column = \"highlights\" # column of the output text\n",
+ "# custom instruction prompt template\n",
+ "prompt_template = f\"Summarize the following news article:\\n{{input}}\\nSummary:\\n\""
+ ]
+ },
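+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that the doubled braces in the f-string leave a literal `{input}` placeholder in `prompt_template`, which is filled in later with `.format`. A quick sanity check:\n",
+ "\n",
+ "```python\n",
+ "print(prompt_template.format(input=\"Some article text.\"))\n",
+ "# Summarize the following news article:\n",
+ "# Some article text.\n",
+ "# Summary:\n",
+ "```"
+ ]
+ },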
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Unlike the [previous example](https://www.philschmid.de/fine-tune-flan-t5), we separate the preprocessing from the training this time. That allows us to run the preprocessing on a non-GPU instance. We first preprocess (tokenize) the dataset and save it to disk; the training script then loads the preprocessed dataset from disk."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from datasets import load_dataset\n",
+ "from transformers import AutoTokenizer\n",
+ "import numpy as np\n",
+ "\n",
+ "# Load dataset from the hub\n",
+ "dataset = load_dataset(dataset_id,name=dataset_config)\n",
+ "# Load the tokenizer of FLAN-T5-XXL\n",
+ "tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
+ "\n",
+ "print(f\"Train dataset size: {len(dataset['train'])}\")\n",
+ "print(f\"Test dataset size: {len(dataset['test'])}\")\n",
+ "\n",
+ "# Train dataset size: 287113\n",
+ "# Test dataset size: 11490"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We defined a `prompt_template` in our configuration, which we use to construct an instruction prompt that improves the performance of our model. The `prompt_template` has a \"fixed\" beginning and end, with the document placed in the middle. This means we need to make sure that the *\"fixed\" template tokens + the document* do not exceed the maximum sequence length the model supports. So we calculate the maximum document length the model can handle, which we will later use to pad or truncate the documents in our template."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Prompt length: 12\n",
+ "Max input length: 500\n"
+ ]
+ }
+ ],
+ "source": [
+ "prompt_length = len(tokenizer(prompt_template.format(input=\"\"))[\"input_ids\"])\n",
+ "max_sample_length = tokenizer.model_max_length - prompt_length\n",
+ "print(f\"Prompt length: {prompt_length}\")\n",
+ "print(f\"Max input length: {max_sample_length}\")\n",
+ "\n",
+ "# Prompt length: 12\n",
+ "# Max input length: 500"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We now know that the model supports a maximum input document length of 500. In addition to the inputs, we also need to know the maximum \"target\" sequence length, which we can get by iterating over the summaries in the dataset. (This takes a couple of minutes to run.)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/json": {
+ "ascii": false,
+ "bar_format": null,
+ "colour": null,
+ "elapsed": 0.012465238571166992,
+ "initial": 0,
+ "n": 0,
+ "ncols": null,
+ "nrows": null,
+ "postfix": null,
+ "prefix": "",
+ "rate": null,
+ "total": 299,
+ "unit": "ba",
+ "unit_divisor": 1000,
+ "unit_scale": false
+ },
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "32577879b38640f898e798ea8f88a801",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/299 [00:00<?, ?ba/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Max source length: 500\n"
+ ]
+ },
+ {
+ "data": {
+ "application/json": {
+ "ascii": false,
+ "bar_format": null,
+ "colour": null,
+ "elapsed": 0.011892318725585938,
+ "initial": 0,
+ "n": 0,
+ "ncols": null,
+ "nrows": null,
+ "postfix": null,
+ "prefix": "",
+ "rate": null,
+ "total": 299,
+ "unit": "ba",
+ "unit_divisor": 1000,
+ "unit_scale": false
+ },
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "724cc7afe0ba49a3b8a6a763a189e380",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/299 [00:00<?, ?ba/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Max target length: 129\n"
+ ]
+ }
+ ],
+ "source": [
+ "from datasets import concatenate_datasets\n",
+ "import numpy as np\n",
+ "\n",
+ "\n",
+ "# The maximum total input sequence length after tokenization.\n",
+ "# Sequences longer than this will be truncated, sequences shorter will be padded.\n",
+ "tokenized_inputs = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[text_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])\n",
+ "max_source_length = max([len(x) for x in tokenized_inputs[\"input_ids\"]])\n",
+ "max_source_length = min(max_source_length, max_sample_length)\n",
+ "print(f\"Max source length: {max_source_length}\")\n",
+ "\n",
+ "# The maximum total sequence length for target text after tokenization.\n",
+ "# Sequences longer than this will be truncated, sequences shorter will be padded.\n",
+ "tokenized_targets = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[summary_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])\n",
+ "target_lengths = [len(x) for x in tokenized_targets[\"input_ids\"]]\n",
+ "# use the 95th percentile as max target length\n",
+ "max_target_length = int(np.percentile(target_lengths, 95))\n",
+ "print(f\"Max target length: {max_target_length}\")"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Everything is now in place, and we can process the dataset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "def preprocess_function(sample, padding=\"max_length\"):\n",
+ "    # create prompted input\n",
+ "    inputs = [prompt_template.format(input=item) for item in sample[text_column]]\n",
+ "\n",
+ "    # tokenize inputs\n",
+ "    model_inputs = tokenizer(inputs, max_length=tokenizer.model_max_length, padding=padding, truncation=True)\n",
+ "\n",
+ "    # Tokenize targets with the `text_target` keyword argument\n",
+ "    labels = tokenizer(text_target=sample[summary_column], max_length=max_target_length, padding=padding, truncation=True)\n",
+ "\n",
+ "    # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\n",
+ "    # padding in the loss.\n",
+ "    if padding == \"max_length\":\n",
+ "        labels[\"input_ids\"] = [\n",
+ "            [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\n",
+ "        ]\n",
+ "\n",
+ "    model_inputs[\"labels\"] = labels[\"input_ids\"]\n",
+ "    return model_inputs\n",
+ "\n",
+ "# process dataset\n",
+ "tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=list(dataset[\"train\"].features))\n",
+ "\n",
+ "# save dataset to disk\n",
+ "tokenized_dataset[\"train\"].save_to_disk(os.path.join(save_dataset_path,\"train\"))\n",
+ "tokenized_dataset[\"test\"].save_to_disk(os.path.join(save_dataset_path,\"eval\"))"
+ ]
+ },
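+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The training script later loads these preprocessed splits back from disk. As a minimal sketch (the variable names here are illustrative, not taken from the actual script):\n",
+ "\n",
+ "```python\n",
+ "from datasets import load_from_disk\n",
+ "\n",
+ "# load the tokenized splits saved by the preprocessing step above\n",
+ "train_dataset = load_from_disk(\"data/train\")\n",
+ "eval_dataset = load_from_disk(\"data/eval\")\n",
+ "```"
+ ]
+ },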
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Fine-tune the model with `deepspeed`\n",
+ "\n",
+ "We are done with preparation and can start training the model! As mentioned earlier, we will use the Hugging Face Trainer with its DeepSpeed integration. For that we need to create a `deepspeed_config.json`. The [DeepSpeed configuration](https://www.deepspeed.ai/docs/config-json/) defines, among other things, which ZeRO strategy to use and whether to use mixed-precision training. The Hugging Face Trainer allows the `deepspeed_config.json` to inherit matching settings from the `TrainingArguments` to avoid duplicate configuration; see the [documentation for more information](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration).\n",
+ "\n",
+ "We created 4 deepspeed configuration files for our experiments, covering `CPU offloading` and `mixed precision`:\n",
+ "\n",
+ "- [ds_flan_t5_z3_config.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config.json)\n",
+ "- [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json)\n",
+ "- [ds_flan_t5_z3_offload.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_offload.json)\n",
+ "- [ds_flan_t5_z3_offload_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_offload_bf16.json)\n",
+ "\n",
+ "You can choose depending on your environment; for example, if you run on NVIDIA V100s, you cannot use the configs with `bf16`, since the V100 does not support the `bfloat16` data type.\n",
+ "\n",
+ "> When fine-tuning `T5` models we cannot use `fp16`, since it leads to overflow issues; see [#4586](https://github.com/huggingface/transformers/issues/4586), [#10830](https://github.com/huggingface/transformers/issues/10830), [#10956](https://github.com/huggingface/transformers/pull/10956)\n",
+ ">\n",
+ "\n",
+ "As mentioned at the beginning, we are using a p4dn.24xlarge AWS EC2 instance with 8x NVIDIA A100 40GB. This means we can use `bf16`, which reduces the memory footprint of the model by almost half and allows us to train efficiently without offloading.\n",
+ "\n",
+ "We are going to use [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json). If you don't want to use the `auto` values, take a look at the [documentation](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration)."
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```\n",
+ "{\n",
+ "  \"bf16\": {\n",
+ "    \"enabled\": \"auto\"\n",
+ "  },\n",
+ "  \"optimizer\": {\n",
+ "    \"type\": \"AdamW\",\n",
+ "    \"params\": {\n",
+ "      \"lr\": \"auto\",\n",
+ "      \"betas\": \"auto\",\n",
+ "      \"eps\": \"auto\",\n",
+ "      \"weight_decay\": \"auto\"\n",
+ "    }\n",
+ "  },\n",
+ "  \"scheduler\": {\n",
+ "    \"type\": \"WarmupLR\",\n",
+ "    \"params\": {\n",
+ "      \"warmup_min_lr\": \"auto\",\n",
+ "      \"warmup_max_lr\": \"auto\",\n",
+ "      \"warmup_num_steps\": \"auto\"\n",
+ "    }\n",
+ "  },\n",
+ "  \"zero_optimization\": {\n",
+ "    \"stage\": 3,\n",
+ "    \"overlap_comm\": true,\n",
+ "    \"contiguous_gradients\": true,\n",
+ "    \"sub_group_size\": 1e9,\n",
+ "    \"reduce_bucket_size\": \"auto\",\n",
+ "    \"stage3_prefetch_bucket_size\": \"auto\",\n",
+ "    \"stage3_param_persistence_threshold\": \"auto\",\n",
+ "    \"stage3_max_live_parameters\": 1e9,\n",
+ "    \"stage3_max_reuse_distance\": 1e9,\n",
+ "    \"stage3_gather_16bit_weights_on_model_save\": false\n",
+ "  },\n",
+ "  \"gradient_accumulation_steps\": \"auto\",\n",
+ "  \"gradient_clipping\": \"auto\",\n",
+ "  \"steps_per_print\": 2000,\n",
+ "  \"train_batch_size\": \"auto\",\n",
+ "  \"train_micro_batch_size_per_gpu\": \"auto\",\n",
+ "  \"wall_clock_breakdown\": false\n",
+ "}\n",
+ "```"
+ ]
+ },
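+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Every `auto` value above is filled in from the `TrainingArguments` at runtime. As a rough, hypothetical sketch of how a training script wires the two together (the argument values mirror the ones we pass on the command line below; `output_dir` is illustrative):\n",
+ "\n",
+ "```python\n",
+ "from transformers import Seq2SeqTrainingArguments\n",
+ "\n",
+ "training_args = Seq2SeqTrainingArguments(\n",
+ "    output_dir=\"flan-t5-xxl-summarization\",  # hypothetical output directory\n",
+ "    per_device_train_batch_size=8,           # -> \"train_micro_batch_size_per_gpu\"\n",
+ "    learning_rate=1e-4,                      # -> optimizer \"lr\" and scheduler \"warmup_max_lr\"\n",
+ "    num_train_epochs=3,\n",
+ "    bf16=True,                               # -> \"bf16.enabled\"\n",
+ "    deepspeed=\"configs/ds_flan_t5_z3_config_bf16.json\",  # hand the ZeRO-3 config to the Trainer\n",
+ ")\n",
+ "```"
+ ]
+ },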
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now it is time for the training script. Following the [previous post](https://www.philschmid.de/fine-tune-flan-t5), we prepared a [run_seq2seq_deepspeed.py](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/scripts/run_seq2seq_deepspeed.py) training script that lets us configure deepspeed and other hyperparameters, including the model id of `google/flan-t5-xxl`.\n",
+ "\n",
+ "We launch the training with the `deepspeed` launcher, passing it the number of GPUs, the deepspeed config, and the other hyperparameters (such as the model id of `google/flan-t5-xxl`)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
+ "To disable this warning, you can either:\n",
+ "\t- Avoid using `tokenizers` before the fork if possible\n",
+ "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n",
+ "deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py --model_id google/flan-t5-xxl --dataset_path data --epochs 3 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --generation_max_length 129 --lr 1e-4 --deepspeed configs/ds_flan_t5_z3_config_bf16.json\n"
+ ]
+ }
+ ],
+ "source": [
+ "!deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py \\\n",
+ "    --model_id $model_id \\\n",
+ "    --dataset_path $save_dataset_path \\\n",
+ "    --epochs 3 \\\n",
+ "    --per_device_train_batch_size 8 \\\n",
+ "    --per_device_eval_batch_size 8 \\\n",
+ "    --generation_max_length $max_target_length \\\n",
+ "    --lr 1e-4 \\\n",
+ "    --deepspeed configs/ds_flan_t5_z3_config_bf16.json"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "DeepSpeed first loads the model on the CPU, then shards it across the 8x A100 GPUs and starts training. Training on the [CNN Dailymail dataset](https://huggingface.co/datasets/cnn_dailymail) takes about 10 hours and costs around `$322`."
+ ]
+ },
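+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Once training has finished, the fine-tuned checkpoint can be used for inference. Below is a minimal sketch, assuming the script saved the final model to a local `flan-t5-xxl-summarization` directory (a hypothetical path) and that we reuse the prompt template from above:\n",
+ "\n",
+ "```python\n",
+ "import torch\n",
+ "from transformers import pipeline\n",
+ "\n",
+ "# load the fine-tuned model into a summarization pipeline on GPU 0, in bf16 to halve memory\n",
+ "summarizer = pipeline(\"summarization\", model=\"flan-t5-xxl-summarization\", device=0, torch_dtype=torch.bfloat16)\n",
+ "\n",
+ "article = \"(CNN) -- Some news article text...\"  # any news article\n",
+ "prompt = f\"Summarize the following news article:\\n{article}\\nSummary:\\n\"\n",
+ "print(summarizer(prompt, max_length=129)[0][\"summary_text\"])\n",
+ "```"
+ ]
+ },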
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Results & Experiments\n",
+ "\n",
+ "To better understand the hardware requirements, we ran a series of experiments for FLAN-T5 XL and XXL that help us evaluate the hardware needs and the cost of training these models.\n",
+ "\n",
+ "The table below lists the details of the experiments and their setups.\n",
+ "\n",
+ "Dataset: `\"cnn_dailymail\"`\n",
+ "- Train samples: `287113`\n",
+ "- Eval samples: `13368`\n",
+ "\n",
+ "Hyperparameters:\n",
+ "- epochs: `3`\n",
+ "- learning rate: `1e-4`\n",
+ "\n",
+ "Setups:\n",
+ "- 4x V100 16GB: p3.8xlarge\n",
+ "- 4x A10G 24GB: g5.24xlarge\n",
+ "- 8x V100 16GB: p3.16xlarge\n",
+ "- 8x A100 40GB: p4dn.24xlarge\n",
+ "\n",
+ "\n",
+ "| Model | DeepSpeed offload | Hardware | Batch size per GPU | Precision | Duration | Cost |\n",
+ "|-------------------|------------|--------------|--------------------|-----------|----------|--------|\n",
+ "| FLAN-T5-XL (3B) | No | 4x V100 16GB | OOM | fp32 | - | - |\n",
+ "| FLAN-T5-XL (3B) | No | 8x V100 16GB | 1 | fp32 | 105h | ~$2570 |\n",
+ "| FLAN-T5-XL (3B) | No | 8x A100 40GB | 72 | bf16 | 2.5h | ~$81 |\n",
+ "| FLAN-T5-XL (3B) | Yes | 4x V100 16GB | 8 | fp32 | 69h | ~$828 |\n",
+ "| FLAN-T5-XL (3B) | Yes | 8x V100 16GB | 8 | fp32 | 32h | ~$768 |\n",
+ "| FLAN-T5-XXL (11B) | No | 8x A100 40GB | 8 | bf16 | 10h | ~$322 |\n",
+ "| FLAN-T5-XXL (11B) | Yes | 4x V100 16GB | OOM | fp32 | - | - |\n",
+ "| FLAN-T5-XXL (11B) | Yes | 8x V100 16GB | OOM | fp32 | - | - |\n",
+ "| FLAN-T5-XXL (11B) | Yes | 4x A10G 24GB | 24 | bf16 | 90h | ~$732 |\n",
+ "| FLAN-T5-XXL (11B) | Yes | 8x A100 40GB | 48 | bf16 | 19h | ~$613 |\n",
+ "\n",
+ "We can see that `bf16` provides a significant advantage over `fp32`. FLAN-T5-XXL fits on 4x A10G (24GB) but not on 8x V100 16GB.\n",
+ "\n",
+ "Our experiments also show that if the model fits on the GPUs with a batch size greater than 4 without offloading, training is roughly 2x faster and more cost-effective than offloading the model, even though offloading allows a larger batch size."
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> Original English post: <url> https://www.philschmid.de/fine-tune-flan-t5-deepspeed </url>\n",
+ "> Original author: Philipp Schmid\n",
+ "> Translator: Matrix Yao (姚伟峰), deep learning engineer at Intel, working on the application of transformer-family models to data of various modalities and on the training and inference of large-scale models."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.8"
+ },
+ "orig_nbformat": 4,
+ "vscode": {
+ "interpreter": {
+ "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }