# gpt2-context_generator
This model is a fine-tuned version of gpt2 on the Non-Residual-Prompting/C2Gen dataset.
## Model description
More information needed
## Intended uses & limitations
- Check config.json for the prompt template and sampling strategy; a hedged usage sketch follows below.
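The snippet below is a minimal usage sketch, not taken from the card: the prompt format (a context followed by target words) and the sampling settings are assumptions, so verify them against config.json and the Non-Residual-Prompting repository before relying on them.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Isotonic/gpt2-context_generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt: a context passage followed by the words to include.
prompt = "The family gathered around the fire. Words: dog, blanket, sleep."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,  # sampling strategy is a guess; see config.json
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```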
## Dataset summary
CommonGen (Lin et al., 2020) is a dataset for the constrained text generation task of word inclusion, but the task does not allow a context to be taken into account. To complement CommonGen, we therefore provide an extended test set, C2Gen (Carlsson et al., 2022), in which an additional context is provided for each set of target words. The task is thus reformulated: generate commonsensical text that includes the given words and also adheres to the given context.
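A short sketch for inspecting C2Gen with the datasets library; the split name and field layout here are assumptions, so check them against the dataset card.

```python
from datasets import load_dataset

# C2Gen is described as an extended test set, so "test" is the assumed split.
c2gen = load_dataset("Non-Residual-Prompting/C2Gen", split="test")
print(c2gen[0])  # expected fields: a context passage and a set of target words
```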
## Training procedure
- Causal Language Modelling
### Training hyperparameters
The following hyperparameters were used during training (a TrainingArguments sketch follows the list):
- learning_rate: 9e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 8
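As a reconstruction only, the listed values map onto transformers TrainingArguments roughly as follows; the original training script is not part of this card, and output_dir is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-context_generator",  # placeholder path
    learning_rate=9e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=8,
)
```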
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.2
### Base model
- openai-community/gpt2