Official repository: seonghyeonye/Flipped-Learning
# Model Description
DIRECT is a strong baseline of FLIPPED, trained with the same objective as T0-3B. With only 5% of the token updates and half of the training datasets of T0-3B, DIRECT outperforms T0-3B (+6.38% mean accuracy on 14 NLP tasks, +1.19% mean accuracy on 14 BIG-bench tasks).
# How to use
A full explanation of our models, along with ablations, can be found in our paper. We recommend using the FLIPPED_11B checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks.
| Model | Number of parameters |
| --- | --- |
| Flipped_11B | 11 billion |
| Flipped_3B | 3 billion |
Here is how to download the model in PyTorch:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")
```
If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`.
We also provide a quick Jupyter notebook where you can run inference with our method.
Note: the model was trained with fp32 activations. As such, we highly discourage running inference with fp16.
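For reference, here is a minimal inference sketch (the prompt is a made-up example and `max_new_tokens` is an arbitrary choice; the model is kept in fp32, per the note above):

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")
model = T5ForConditionalGeneration.from_pretrained(
    "seonghyeonye/direct_3B",
    torch_dtype=torch.float32,  # fp32, per the note above
)

# A made-up zero-shot prompt; see the notebook for the exact evaluation setup.
prompt = (
    "Review: The movie was a delight from start to finish.\n"
    "Is this review positive or negative?"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```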
# Training procedure
The DIRECT model is based on T5+LM, a Transformer-based encoder-decoder language model pre-trained on C4 with a masked language modeling-style objective and then additionally pre-trained with a language modeling objective. Training details:
- Fine-tuning steps: 5,000
- Input sequence length: 512
- Target sequence length: 128
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 1e-4
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (any dataset with more than 500,000 examples is randomly subsampled to at most 500,000 examples). We also randomly choose which instruction to use at each training step, so ideally each instruction appears about num_examples/num_templates times during training; a rough sketch of this scheme follows below.
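As an illustration, here is a small sketch of that sampling scheme (the dataset names and sizes are hypothetical; the real mixture is listed under "Training data" below):

```python
import random

# Hypothetical dataset sizes; the actual mixture is listed under "Training data".
dataset_sizes = {"imdb": 25_000, "qqp": 364_000, "dbpedia": 560_000}

# Cap every dataset at 500,000 examples, then sample datasets
# proportionally to their capped sizes.
capped = {name: min(size, 500_000) for name, size in dataset_sizes.items()}
total = sum(capped.values())

def sample_dataset() -> str:
    """Draw a dataset name with probability proportional to its capped size."""
    names = list(capped)
    weights = [capped[name] / total for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Each drawn example is then paired with a randomly chosen prompt template,
# so each template is used roughly num_examples / num_templates times.
```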
# Training data
We trained the different model variants on the following mixtures of datasets.
| Model | Training datasets |
| --- | --- |
| FLIPPED_11B | - Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP |
| FLIPPED_3B | Same as FLIPPED_11B |
| DIRECT_3B | Same as FLIPPED_11B |
We only choose prompt templates that have output labels; these can be found on each dataset page.
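As an illustration, such prompt templates can be applied with the promptsource library (a sketch; the template name is one referenced in the label-generalization table below, and availability may vary with the promptsource version):

```python
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Load one training dataset and its promptsource templates.
dataset = load_dataset("imdb", split="train")
templates = DatasetTemplates("imdb")

# "Reviewer Enjoyment Yes No" is one of the IMDB templates referenced in this card.
template = templates["Reviewer Enjoyment Yes No"]

# Applying a template turns a raw example into an (input, target) pair.
input_text, target_text = template.apply(dataset[0])
print(input_text)
print(target_text)
```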
# Evaluation data
We evaluate our models on the following datasets:
| Task category | Datasets |
| --- | --- |
| Natural language inference | ANLI (R1, R2, R3), CB, RTE |
| Coreference resolution | WSC, Winogrande |
| Word sense disambiguation | WiC |
| Sentence completion | COPA, HellaSwag, Story Cloze |
| QA | PIQA, ARC-Challenge, OpenbookQA |
We also evaluate FLIPPED on a subset of the BIG-bench benchmark:
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
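This card does not spell out the scoring procedure. A common setup for zero-shot evaluation of T0-style models on such multiple-choice tasks is rank classification: score each answer option by its conditional loss and pick the lowest-loss one. A minimal sketch under that assumption:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B")

def rank_classify(prompt: str, options: list) -> str:
    """Return the answer option to which the model assigns the lowest loss."""
    losses = []
    for option in options:
        inputs = tokenizer(prompt, return_tensors="pt")
        labels = tokenizer(option, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(**inputs, labels=labels).loss
        losses.append(loss.item())
    return options[losses.index(min(losses))]

# Hypothetical RTE-style usage:
# rank_classify("Premise: ...\nHypothesis: ...\nCan we infer the hypothesis?",
#               ["yes", "no"])
```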
# Label generalization
We evaluate the robustness of models on the following datasets by changing the output labels of the datasets. The substitute words can be found in our paper.
| Task category | (Datasets, Template name) |
| --- | --- |
| Unseen tasks | (WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource) |
| Seen tasks | (IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) |
The template names we used can be found in the promptsource template library.
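With the hypothetical `rank_classify` helper from the evaluation sketch above, label generalization amounts to swapping in substitute answer words (the substitutes below are made up; the actual ones are listed in our paper):

```python
# Original verbalizers for a yes/no template vs. made-up substitutes.
original_labels = ["yes", "no"]
substituted_labels = ["true", "false"]

# prediction = rank_classify(prompt, substituted_labels)
```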
# BibTeX entry and citation info
```bibtex
@article{ye2022guess,
  title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners},
  author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
  journal={arXiv preprint arXiv:2210.02969},
  year={2022}
}
```