chenglu committed on
Commit
12157f8
1 Parent(s): 9e4df25

Upload 2023-03-23-fine-tune-flan-t5-peft.ipynb

2023-03-23-fine-tune-flan-t5-peft.ipynb ADDED
@@ -0,0 +1,541 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "attachments": {},
5
+ "cell_type": "markdown",
6
+ "metadata": {},
7
+ "source": [
8
+ "# 使用 LoRA 和 Hugging Face 高效训练大语言模型\n",
9
+ "\n",
10
+ "在本文中,我们将展示如何使用[大语言模型低秩适配(Low-Rank Adaptation of Large Language Models,LoRA)](https://arxiv.org/abs/2106.09685)技术在单 GPU 上微调 110 亿参数的 FLAN-T5 XXL 模型。在此过程中,我们会使用到 Hugging Face 的 [Transformers](https://huggingface.co/docs/transformers/index)、[Accelerate](https://huggingface.co/docs/accelerate/index) 和 [PEFT](https://github.com/huggingface/peft) 库。\n",
11
+ "\n",
12
+ "通过本文,你会学到:\n",
13
+ "\n",
14
+ "1. 如何搭建开发环境\n",
15
+ "2. 如何加载并准备数据集\n",
16
+ "3. 如何使用 LoRA 和 bnb(即bitsandbytes) int-8 微调 T5\n",
17
+ "4. 如何评估 LoRA FLAN-T5 并将其用于推理\n",
18
+ "5. 如何比较不同方案的性价比\n",
19
+ "\n",
20
+ "### 快速入门:轻量化微调(Parameter Efficient Fine-Tuning,PEFT)\n",
21
+ "\n",
22
+ "[PEFT](https://github.com/huggingface/peft) 是 Hugging Face 的一个新的开源库。使用 PEFT 库,无需微调模型的全部参数,即可高效地将预训练语言模型 (Pre-trained Language Model,PLM) 适配到各种下游应用。PEFT 目前支持以下几种方法:\n",
23
+ "\n",
24
+ "- LoRA:[LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/pdf/2106.09685.pdf)\n",
25
+ "- Prefix Tuning:[P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)\n",
26
+ "- P-Tuning:[GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf)\n",
27
+ "- Prompt Tuning:[The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691.pdf)\n",
28
+ "\n",
29
+ "*注意:本教程是在 g5.2xlarge AWS EC2 实例上创建和运行的,该实例包含 1 个 NVIDIA A10G。*"
30
+ ]
31
+ },
32
+ {
33
+ "attachments": {},
34
+ "cell_type": "markdown",
35
+ "metadata": {},
36
+ "source": [
37
+ "## 1. 搭建开发环境\n",
38
+ "\n",
39
+ "在本例中,我们使用 AWS 预置的 [PyTorch 深度学习 AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-pytorch.html),其已安装了正确的 CUDA 驱动程序和 PyTorch。在此基础上,我们还需要安装一些 Hugging Face 库,包括 transformers 和 datasets。运行下面的代码就可安装所有需要的包。"
40
+ ]
41
+ },
42
+ {
43
+ "cell_type": "code",
44
+ "execution_count": null,
45
+ "metadata": {},
46
+ "outputs": [],
47
+ "source": [
48
+ "# install Hugging Face Libraries\n",
49
+ "!pip install git+https://github.com/huggingface/peft.git\n",
50
+ "!pip install \"transformers==4.27.1\" \"datasets==2.9.0\" \"accelerate==0.17.1\" \"evaluate==0.4.0\" \"bitsandbytes==0.37.1\" loralib --upgrade --quiet\n",
51
+ "# install additional dependencies needed for training\n",
52
+ "!pip install rouge-score tensorboard py7zr "
53
+ ]
54
+ },
55
+ {
56
+ "attachments": {},
57
+ "cell_type": "markdown",
58
+ "metadata": {},
59
+ "source": [
60
+ "## 2.加载并准备数据集\n",
61
+ "\n",
62
+ "这里,我们使用 [samsum](https://huggingface.co/datasets/samsum) 数据集,该数据集包含大约 16k 个含摘要的聊天类对话数据。这些对话由精通英语的语言学家制作。\n",
63
+ "\n",
64
+ "```python\n",
65
+ "{\n",
66
+ " \"id\": \"13818513\",\n",
67
+ " \"summary\": \"Amanda baked cookies and will bring Jerry some tomorrow.\",\n",
68
+ " \"dialogue\": \"Amanda: I baked cookies. Do you want some?\\r\\nJerry: Sure!\\r\\nAmanda: I'll bring you tomorrow :-)\"\n",
69
+ "}\n",
70
+ "```\n",
71
+ "\n",
72
+ "我们使用 🤗 Datasets 库中的 *​`load_dataset()`* 方法来加载 `samsum` 数据集。"
73
+ ]
74
+ },
75
+ {
76
+ "cell_type": "code",
77
+ "execution_count": null,
78
+ "metadata": {},
79
+ "outputs": [],
80
+ "source": [
81
+ "from datasets import load_dataset\n",
82
+ "\n",
83
+ "# Load dataset from the hub\n",
84
+ "dataset = load_dataset(\"samsum\")\n",
85
+ "\n",
86
+ "print(f\"Train dataset size: {len(dataset['train'])}\")\n",
87
+ "print(f\"Test dataset size: {len(dataset['test'])}\")\n",
88
+ "\n",
89
+ "# Train dataset size: 14732\n",
90
+ "# Test dataset size: 819"
91
+ ]
92
+ },
93
+ {
94
+ "attachments": {},
95
+ "cell_type": "markdown",
96
+ "metadata": {},
97
+ "source": [
98
+ "为了训练模型,我们要用 🤗 Transformers Tokenizer 将输入文本转换为词元 ID。如果你需要了解这一方面的知识,请移步 Hugging Face 课程的 **[第 6 章](https://huggingface.co/course/chapter6/1?fw=tf)**。"
99
+ ]
100
+ },
101
+ {
102
+ "cell_type": "code",
103
+ "execution_count": null,
104
+ "metadata": {},
105
+ "outputs": [],
106
+ "source": [
107
+ "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n",
108
+ "\n",
109
+ "model_id=\"google/flan-t5-xxl\"\n",
110
+ "\n",
111
+ "# Load tokenizer of FLAN-t5-XL\n",
112
+ "tokenizer = AutoTokenizer.from_pretrained(model_id)"
113
+ ]
114
+ },
115
+ {
116
+ "attachments": {},
117
+ "cell_type": "markdown",
118
+ "metadata": {},
119
+ "source": [
120
+ "在开始训练之前,我们还需要对数据进行预处理。生成式文本摘要属于文本生成任务。我们将文本输入给模型,模型会输出摘要。我们需要了解输入和输出文本的长度信息,以利于我们高效地批量处理这些数据。"
121
+ ]
122
+ },
123
+ {
124
+ "cell_type": "code",
125
+ "execution_count": null,
126
+ "metadata": {},
127
+ "outputs": [],
128
+ "source": [
129
+ "from datasets import concatenate_datasets\n",
130
+ "import numpy as np\n",
131
+ "# The maximum total input sequence length after tokenization. \n",
132
+ "# Sequences longer than this will be truncated, sequences shorter will be padded.\n",
133
+ "tokenized_inputs = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[\"dialogue\"], truncation=True), batched=True, remove_columns=[\"dialogue\", \"summary\"])\n",
134
+ "input_lenghts = [len(x) for x in tokenized_inputs[\"input_ids\"]]\n",
135
+ "# take 85 percentile of max length for better utilization\n",
136
+ "max_source_length = int(np.percentile(input_lenghts, 85))\n",
137
+ "print(f\"Max source length: {max_source_length}\")\n",
138
+ "\n",
139
+ "# The maximum total sequence length for target text after tokenization. \n",
140
+ "# Sequences longer than this will be truncated, sequences shorter will be padded.\"\n",
141
+ "tokenized_targets = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[\"summary\"], truncation=True), batched=True, remove_columns=[\"dialogue\", \"summary\"])\n",
142
+ "target_lenghts = [len(x) for x in tokenized_targets[\"input_ids\"]]\n",
143
+ "# take 90 percentile of max length for better utilization\n",
144
+ "max_target_length = int(np.percentile(target_lenghts, 90))\n",
145
+ "print(f\"Max target length: {max_target_length}\")"
146
+ ]
147
+ },
148
+ {
149
+ "attachments": {},
150
+ "cell_type": "markdown",
151
+ "metadata": {},
152
+ "source": [
153
+ "我们将在训练前统一对数据集进行预处理并将预处理后的数据集保存到磁盘。你可以在本地机器或 CPU 上运行此步骤并将其上传到 [Hugging Face Hub](https://huggingface.co/docs/hub/datasets-overview)。"
154
+ ]
155
+ },
156
+ {
157
+ "cell_type": "code",
158
+ "execution_count": null,
159
+ "metadata": {},
160
+ "outputs": [],
161
+ "source": [
162
+ "def preprocess_function(sample,padding=\"max_length\"):\n",
163
+ " # add prefix to the input for t5\n",
164
+ " inputs = [\"summarize: \" + item for item in sample[\"dialogue\"]]\n",
165
+ "\n",
166
+ " # tokenize inputs\n",
167
+ " model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)\n",
168
+ "\n",
169
+ " # Tokenize targets with the `text_target` keyword argument\n",
170
+ " labels = tokenizer(text_target=sample[\"summary\"], max_length=max_target_length, padding=padding, truncation=True)\n",
171
+ "\n",
172
+ " # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\n",
173
+ " # padding in the loss.\n",
174
+ " if padding == \"max_length\":\n",
175
+ " labels[\"input_ids\"] = [\n",
176
+ " [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\n",
177
+ " ]\n",
178
+ "\n",
179
+ " model_inputs[\"labels\"] = labels[\"input_ids\"]\n",
180
+ " return model_inputs\n",
181
+ "\n",
182
+ "tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=[\"dialogue\", \"summary\", \"id\"])\n",
183
+ "print(f\"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}\")\n",
184
+ "\n",
185
+ "# save datasets to disk for later easy loading\n",
186
+ "tokenized_dataset[\"train\"].save_to_disk(\"data/train\")\n",
187
+ "tokenized_dataset[\"test\"].save_to_disk(\"data/eval\")"
188
+ ]
189
+ },
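+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As mentioned above, instead of only saving to disk you could also push the processed dataset to the Hugging Face Hub. The cell below is a minimal sketch of that: it assumes you are already logged in to the Hub (e.g. via `huggingface-cli login`), and the repository id `flan-t5-samsum-processed` is just a placeholder."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: upload the tokenized dataset to the Hugging Face Hub\n",
+ "# (sketch only: requires a Hub login, and the repo id below is a placeholder)\n",
+ "tokenized_dataset.push_to_hub(\"flan-t5-samsum-processed\")"
+ ]
+ },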
190
+ {
191
+ "attachments": {},
192
+ "cell_type": "markdown",
193
+ "metadata": {},
194
+ "source": [
195
+ "## 3. 使用 LoRA 和 bnb int-8 微调 T5\n",
196
+ "\n",
197
+ "除了 LoRA 技术,我们还使用 [bitsanbytes LLM.int8()](https://huggingface.co/blog/hf-bitsandbytes-integration) 把冻结的 LLM 量化为 int8。这使我们能够将 FLAN-T5 XXL 所需的内存降低到约四分之一。\n",
198
+ "\n",
199
+ "训练的第一步是加载模型。我们使用 [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) 模型,它是 [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl) 的分片版。分片可以让我们在加载模型时不耗尽内存。"
200
+ ]
201
+ },
202
+ {
203
+ "cell_type": "code",
204
+ "execution_count": null,
205
+ "metadata": {},
206
+ "outputs": [],
207
+ "source": [
208
+ "from transformers import AutoModelForSeq2SeqLM\n",
209
+ "\n",
210
+ "# huggingface hub model id\n",
211
+ "model_id = \"philschmid/flan-t5-xxl-sharded-fp16\"\n",
212
+ "\n",
213
+ "# load model from the hub\n",
214
+ "model = AutoModelForSeq2SeqLM.from_pretrained(model_id, load_in_8bit=True, device_map=\"auto\")"
215
+ ]
216
+ },
217
+ {
218
+ "attachments": {},
219
+ "cell_type": "markdown",
220
+ "metadata": {},
221
+ "source": [
222
+ "现在,我们可以使用 `peft` 为 LoRA int-8 训练作准备了。"
223
+ ]
224
+ },
225
+ {
226
+ "cell_type": "code",
227
+ "execution_count": null,
228
+ "metadata": {},
229
+ "outputs": [],
230
+ "source": [
231
+ "from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType\n",
232
+ "\n",
233
+ "# Define LoRA Config \n",
234
+ "lora_config = LoraConfig(\n",
235
+ " r=16, \n",
236
+ " lora_alpha=32,\n",
237
+ " target_modules=[\"q\", \"v\"],\n",
238
+ " lora_dropout=0.05,\n",
239
+ " bias=\"none\",\n",
240
+ " task_type=TaskType.SEQ_2_SEQ_LM\n",
241
+ ")\n",
242
+ "# prepare int-8 model for training\n",
243
+ "model = prepare_model_for_int8_training(model)\n",
244
+ "\n",
245
+ "# add LoRA adaptor\n",
246
+ "model = get_peft_model(model, lora_config)\n",
247
+ "model.print_trainable_parameters()\n",
248
+ "\n",
249
+ "# trainable params: 18874368 || all params: 11154206720 || trainable%: 0.16921300163961817"
250
+ ]
251
+ },
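+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you want to double-check the number reported by `print_trainable_parameters()`, the short sanity check below recomputes the ratio by hand. It is only an illustrative sketch and assumes nothing beyond the `model` object from the previous cell."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sanity check (optional): count trainable vs. total parameters by hand\n",
+ "trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
+ "total_params = sum(p.numel() for p in model.parameters())\n",
+ "print(f\"trainable: {trainable_params} || total: {total_params} || trainable%: {100 * trainable_params / total_params:.4f}\")"
+ ]
+ },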
252
+ {
253
+ "attachments": {},
254
+ "cell_type": "markdown",
255
+ "metadata": {},
256
+ "source": [
257
+ "如你所见,这里我们只训练了模型参数的 0.16%!这个巨大的内存增益让我们安心地微调模型,而不用担心内存问题。\n",
258
+ "\n",
259
+ "接下来需要创建一个 `DataCollat​​or`,负责对输入和标签进行填充,我们使用 🤗 Transformers 库中的`DataCollat​​orForSeq2Seq` 来完成这一环节。"
260
+ ]
261
+ },
262
+ {
263
+ "cell_type": "code",
264
+ "execution_count": null,
265
+ "metadata": {},
266
+ "outputs": [],
267
+ "source": [
268
+ "from transformers import DataCollatorForSeq2Seq\n",
269
+ "\n",
270
+ "# we want to ignore tokenizer pad token in the loss\n",
271
+ "label_pad_token_id = -100\n",
272
+ "# Data collator\n",
273
+ "data_collator = DataCollatorForSeq2Seq(\n",
274
+ " tokenizer,\n",
275
+ " model=model,\n",
276
+ " label_pad_token_id=label_pad_token_id,\n",
277
+ " pad_to_multiple_of=8\n",
278
+ ")"
279
+ ]
280
+ },
281
+ {
282
+ "attachments": {},
283
+ "cell_type": "markdown",
284
+ "metadata": {},
285
+ "source": [
286
+ "最后一步是定义训练超参 (`TrainingArguments`)。"
287
+ ]
288
+ },
289
+ {
290
+ "cell_type": "code",
291
+ "execution_count": null,
292
+ "metadata": {},
293
+ "outputs": [],
294
+ "source": [
295
+ "from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments\n",
296
+ "\n",
297
+ "output_dir=\"lora-flan-t5-xxl\"\n",
298
+ "\n",
299
+ "# Define training args\n",
300
+ "training_args = Seq2SeqTrainingArguments(\n",
301
+ " output_dir=output_dir,\n",
302
+ "\t\tauto_find_batch_size=True,\n",
303
+ " learning_rate=1e-3, # higher learning rate\n",
304
+ " num_train_epochs=5,\n",
305
+ " logging_dir=f\"{output_dir}/logs\",\n",
306
+ " logging_strategy=\"steps\",\n",
307
+ " logging_steps=500,\n",
308
+ " save_strategy=\"no\",\n",
309
+ " report_to=\"tensorboard\",\n",
310
+ ")\n",
311
+ "\n",
312
+ "# Create Trainer instance\n",
313
+ "trainer = Seq2SeqTrainer(\n",
314
+ " model=model,\n",
315
+ " args=training_args,\n",
316
+ " data_collator=data_collator,\n",
317
+ " train_dataset=tokenized_dataset[\"train\"],\n",
318
+ ")\n",
319
+ "model.config.use_cache = False # silence the warnings. Please re-enable for inference!"
320
+ ]
321
+ },
322
+ {
323
+ "attachments": {},
324
+ "cell_type": "markdown",
325
+ "metadata": {},
326
+ "source": [
327
+ "运行下面的代码,开始训练模型。请注意,对于 T5,出于收敛稳定性考量,某些层我们仍保持 `float32` 精度。"
328
+ ]
329
+ },
330
+ {
331
+ "cell_type": "code",
332
+ "execution_count": null,
333
+ "metadata": {},
334
+ "outputs": [],
335
+ "source": [
336
+ "# train model\n",
337
+ "trainer.train()"
338
+ ]
339
+ },
340
+ {
341
+ "attachments": {},
342
+ "cell_type": "markdown",
343
+ "metadata": {},
344
+ "source": [
345
+ "训练耗时约 10 小时 36 分钟,训练 10 小时的成本约为 `13.22 美元`。相比之下,如果[在 FLAN-T5-XXL 上进行全模型微调](https://www.philschmid.de/fine-tune-flan-t5-deepspeed#3-results--experiments) 10 个小时,我们需要 8 个 A100 40GB,成本约为 322 美元。\n",
346
+ "\n",
347
+ "我们可以将模型保存下来以用于后面的推理和评估。我们暂时将其保存到磁盘,但你也可以使用 `model.push_to_hub` 方法将其上传到 [Hugging Face Hub](https://huggingface.co/docs/hub/main)。"
348
+ ]
349
+ },
350
+ {
351
+ "cell_type": "code",
352
+ "execution_count": null,
353
+ "metadata": {},
354
+ "outputs": [],
355
+ "source": [
356
+ "# Save our LoRA model & tokenizer results\n",
357
+ "peft_model_id=\"results\"\n",
358
+ "trainer.model.save_pretrained(peft_model_id)\n",
359
+ "tokenizer.save_pretrained(peft_model_id)\n",
360
+ "# if you want to save the base model to call\n",
361
+ "# trainer.model.base_model.save_pretrained(peft_model_id)"
362
+ ]
363
+ },
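+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an alternative to saving to disk, here is a minimal sketch of uploading the adapter and tokenizer to the Hugging Face Hub with `push_to_hub`. It assumes you have a Hub account with a write token, and the repository id `lora-flan-t5-xxl-samsum` is just a placeholder."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: push the LoRA adapter and the tokenizer to the Hugging Face Hub\n",
+ "# (sketch only; replace the repo id with your own)\n",
+ "from huggingface_hub import notebook_login\n",
+ "\n",
+ "notebook_login()  # paste a write token when prompted\n",
+ "trainer.model.push_to_hub(\"lora-flan-t5-xxl-samsum\")\n",
+ "tokenizer.push_to_hub(\"lora-flan-t5-xxl-samsum\")"
+ ]
+ },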
364
+ {
365
+ "attachments": {},
366
+ "cell_type": "markdown",
367
+ "metadata": {},
368
+ "source": [
369
+ "最后生成的 LoRA checkpoint 文件很小,仅需 84MB 就包含了从 `samsum` 数据集上学到的所有知识。\n",
370
+ "\n",
371
+ "## 4. 使用 LoRA FLAN-T5 进行评估和推理\n",
372
+ "\n",
373
+ "我们将使用 `evaluate` 库来评估 `rogue` 分数。我们可以使用 `PEFT` 和 `transformers` 来对 FLAN-T5 XXL 模型进行推理。对 FLAN-T5 XXL 模型,我们至少需要 18GB 的​​ GPU 显存。"
374
+ ]
375
+ },
376
+ {
377
+ "cell_type": "code",
378
+ "execution_count": null,
379
+ "metadata": {},
380
+ "outputs": [],
381
+ "source": [
382
+ "import torch\n",
383
+ "from peft import PeftModel, PeftConfig\n",
384
+ "from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\n",
385
+ "\n",
386
+ "# Load peft config for pre-trained checkpoint etc. \n",
387
+ "peft_model_id = \"results\"\n",
388
+ "config = PeftConfig.from_pretrained(peft_model_id)\n",
389
+ "\n",
390
+ "# load base LLM model and tokenizer\n",
391
+ "model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={\"\":0})\n",
392
+ "tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)\n",
393
+ "\n",
394
+ "# Load the Lora model\n",
395
+ "model = PeftModel.from_pretrained(model, peft_model_id, device_map={\"\":0})\n",
396
+ "model.eval()\n",
397
+ "\n",
398
+ "print(\"Peft model loaded\")"
399
+ ]
400
+ },
401
+ {
402
+ "attachments": {},
403
+ "cell_type": "markdown",
404
+ "metadata": {},
405
+ "source": [
406
+ "我们用测试数据集中的一个随机样本来试试摘要效果。"
407
+ ]
408
+ },
409
+ {
410
+ "cell_type": "code",
411
+ "execution_count": null,
412
+ "metadata": {},
413
+ "outputs": [],
414
+ "source": [
415
+ "from datasets import load_dataset \n",
416
+ "from random import randrange\n",
417
+ "\n",
418
+ "\n",
419
+ "# Load dataset from the hub and get a sample\n",
420
+ "dataset = load_dataset(\"samsum\")\n",
421
+ "sample = dataset['test'][randrange(len(dataset[\"test\"]))]\n",
422
+ "\n",
423
+ "input_ids = tokenizer(sample[\"dialogue\"], return_tensors=\"pt\", truncation=True).input_ids.cuda()\n",
424
+ "# with torch.inference_mode():\n",
425
+ "outputs = model.generate(input_ids=input_ids, max_new_tokens=10, do_sample=True, top_p=0.9)\n",
426
+ "print(f\"input sentence: {sample['dialogue']}\\n{'---'* 20}\")\n",
427
+ "\n",
428
+ "print(f\"summary:\\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]}\")"
429
+ ]
430
+ },
431
+ {
432
+ "attachments": {},
433
+ "cell_type": "markdown",
434
+ "metadata": {},
435
+ "source": [
436
+ "不错!我们的模型有效!现在,让我们仔细看看,并使用 `test` 集中的全部数据对其进行评估。为此,我们需要实现一些工具函数来帮助生成摘要并将其与相应的参考摘要组合到一起。评估摘要任务最常用的指标是 [rogue_score](https://en.wikipedia.org/wiki/ROUGE_(metric)),它的全称是 Recall-Oriented Understudy for Gisting Evaluation。与常用的准确率指标不同,它将生成的摘要与一组参考摘要进行比较。"
437
+ ]
438
+ },
439
+ {
440
+ "cell_type": "code",
441
+ "execution_count": null,
442
+ "metadata": {},
443
+ "outputs": [],
444
+ "source": [
445
+ "import evaluate\n",
446
+ "import numpy as np\n",
447
+ "from datasets import load_from_disk\n",
448
+ "from tqdm import tqdm\n",
449
+ "\n",
450
+ "# Metric\n",
451
+ "metric = evaluate.load(\"rouge\")\n",
452
+ "\n",
453
+ "def evaluate_peft_model(sample,max_target_length=50):\n",
454
+ " # generate summary\n",
455
+ " outputs = model.generate(input_ids=sample[\"input_ids\"].unsqueeze(0).cuda(), do_sample=True, top_p=0.9, max_new_tokens=max_target_length) \n",
456
+ " prediction = tokenizer.decode(outputs[0].detach().cpu().numpy(), skip_special_tokens=True)\n",
457
+ " # decode eval sample\n",
458
+ " # Replace -100 in the labels as we can't decode them.\n",
459
+ " labels = np.where(sample['labels'] != -100, sample['labels'], tokenizer.pad_token_id)\n",
460
+ " labels = tokenizer.decode(labels, skip_special_tokens=True)\n",
461
+ "\n",
462
+ " # Some simple post-processing\n",
463
+ " return prediction, labels\n",
464
+ "\n",
465
+ "# load test dataset from distk\n",
466
+ "test_dataset = load_from_disk(\"data/eval/\").with_format(\"torch\")\n",
467
+ "\n",
468
+ "# run predictions\n",
469
+ "# this can take ~45 minutes\n",
470
+ "predictions, references = [] , []\n",
471
+ "for sample in tqdm(test_dataset):\n",
472
+ " p,l = evaluate_peft_model(sample)\n",
473
+ " predictions.append(p)\n",
474
+ " references.append(l)\n",
475
+ "\n",
476
+ "# compute metric \n",
477
+ "rogue = metric.compute(predictions=predictions, references=references, use_stemmer=True)\n",
478
+ "\n",
479
+ "# print results \n",
480
+ "print(f\"Rogue1: {rogue['rouge1']* 100:2f}%\")\n",
481
+ "print(f\"rouge2: {rogue['rouge2']* 100:2f}%\")\n",
482
+ "print(f\"rougeL: {rogue['rougeL']* 100:2f}%\")\n",
483
+ "print(f\"rougeLsum: {rogue['rougeLsum']* 100:2f}%\")\n",
484
+ "\n",
485
+ "# Rogue1: 50.386161%\n",
486
+ "# rouge2: 24.842412%\n",
487
+ "# rougeL: 41.370130%\n",
488
+ "# rougeLsum: 41.394230%"
489
+ ]
490
+ },
491
+ {
492
+ "attachments": {},
493
+ "cell_type": "markdown",
494
+ "metadata": {},
495
+ "source": [
496
+ "我们 PEFT 微调后的 ​​FLAN-T5-XXL 在测试集上取得了 `50.38%` 的 rogue1 分数。相比之下,[flan-t5-base 的全模型微调获得了 47.23 的 rouge1 分数](https://www.philschmid.de/fine-tune-flan-t5)。rouge1 分数提高了 `3%` 。\n",
497
+ "\n",
498
+ "令人难以置信的是,我们的 LoRA checkpoint 只有 84MB,而且性能比对更小的模型进行全模型微调后的 checkpoint 更好。"
499
+ ]
500
+ },
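+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you want to verify the checkpoint size yourself, the sketch below lists the files saved in the `results` directory from earlier. Note that the directory also contains the tokenizer files, so the total is slightly larger than the adapter weights alone."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Check the size of the saved LoRA checkpoint on disk\n",
+ "import os\n",
+ "\n",
+ "for f in sorted(os.listdir(\"results\")):\n",
+ "    size_mb = os.path.getsize(os.path.join(\"results\", f)) / 1024**2\n",
+ "    print(f\"{f}: {size_mb:.1f} MB\")"
+ ]
+ },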
501
+ {
502
+ "attachments": {},
503
+ "cell_type": "markdown",
504
+ "metadata": {},
505
+ "source": [
506
+ "> 英文原文: <url> https://www.philschmid.de/fine-tune-flan-t5-peft </url>\n",
507
+ "\n",
508
+ "> 原文作者:Philipp Schmid\n",
509
+ "\n",
510
+ "> 译者: Matrix Yao (姚伟峰),英特尔深度学习工程师,工作方向为 transformer-family 模型在各模态数据上的应用及大规模模型的训练推理。"
511
+ ]
512
+ }
513
+ ],
514
+ "metadata": {
515
+ "kernelspec": {
516
+ "display_name": "pytorch",
517
+ "language": "python",
518
+ "name": "python3"
519
+ },
520
+ "language_info": {
521
+ "codemirror_mode": {
522
+ "name": "ipython",
523
+ "version": 3
524
+ },
525
+ "file_extension": ".py",
526
+ "mimetype": "text/x-python",
527
+ "name": "python",
528
+ "nbconvert_exporter": "python",
529
+ "pygments_lexer": "ipython3",
530
+ "version": "3.9.15"
531
+ },
532
+ "orig_nbformat": 4,
533
+ "vscode": {
534
+ "interpreter": {
535
+ "hash": "2d58e898dde0263bc564c6968b04150abacfd33eed9b19aaa8e45c040360e146"
536
+ }
537
+ }
538
+ },
539
+ "nbformat": 4,
540
+ "nbformat_minor": 2
541
+ }