bart-large-cnn-samsum
If you want to use this model, you should instead try the newer fine-tuned FLAN-T5 version, philschmid/flan-t5-base-samsum, which outscores this BART version by +6 on ROUGE-1, achieving 47.24.

TRY philschmid/flan-t5-base-samsum
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
For more information, see:
- 🤗 Transformers Documentation: Amazon SageMaker
- Example Notebooks
- Amazon SageMaker documentation for Hugging Face
- Python SDK SageMaker documentation for Hugging Face
- Deep Learning Container
Hyperparameters
```json
{
    "dataset_name": "samsum",
    "do_eval": true,
    "do_predict": true,
    "do_train": true,
    "fp16": true,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-large-cnn",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": true,
    "seed": 7
}
```
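Since the card states training ran on Amazon SageMaker with the Hugging Face Deep Learning Container, the hyperparameters above would typically be handed to a `HuggingFace` estimator. Below is a minimal sketch; the entry-point script name, instance type, and container versions are assumptions, not values taken from this card.

```python
# Hedged sketch: passing the card's hyperparameters to a SageMaker
# HuggingFace estimator. The estimator call is commented out because it
# requires AWS credentials and an execution role; entry_point, instance
# type, and framework versions below are assumptions.
hyperparameters = {
    "dataset_name": "samsum",
    "do_eval": True,
    "do_predict": True,
    "do_train": True,
    "fp16": True,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-large-cnn",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": True,
    "seed": 7,
}

# from sagemaker.huggingface import HuggingFace
# estimator = HuggingFace(
#     entry_point="run_summarization.py",  # assumed script name
#     instance_type="ml.p3.2xlarge",       # assumed instance type
#     instance_count=1,
#     role=role,                           # your SageMaker execution role
#     transformers_version="4.6",          # assumed container versions
#     pytorch_version="1.7",
#     py_version="py36",
#     hyperparameters=hyperparameters,
# )
# estimator.fit()
```

Note that `output_dir` is `/opt/ml/model` because SageMaker uploads that directory to S3 as the trained model artifact when the job finishes.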
Usage
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''

summarizer(conversation)
```
Results
| key | value |
| --- | --- |
| eval_rouge1 | 42.621 |
| eval_rouge2 | 21.9825 |
| eval_rougeL | 33.034 |
| eval_rougeLsum | 39.6783 |
| test_rouge1 | 41.3174 |
| test_rouge2 | 20.8716 |
| test_rougeL | 32.1337 |
| test_rougeLsum | 38.4149 |
Evaluation results
- Validation ROUGE-1 on SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization (self-reported): 42.621
- Validation ROUGE-2 on SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization (self-reported): 21.983
- Validation ROUGE-L on SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization (self-reported): 33.034
- Test ROUGE-1 on SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization (self-reported): 41.317
- Test ROUGE-2 on SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization (self-reported): 20.872
- Test ROUGE-L on SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization (self-reported): 32.134
- ROUGE-1 on samsum test set (self-reported): 41.328
- ROUGE-2 on samsum test set (self-reported): 20.875
- ROUGE-L on samsum test set (self-reported): 32.135
- ROUGE-LSUM on samsum test set (self-reported): 38.401