efederici committed
Commit 120aeea
1 Parent(s): 9e4b0d9

Update README.md

Files changed (1)

1. README.md (+15, -26)
README.md CHANGED
@@ -1,23 +1,22 @@
 ---
-language:
-- it_IT
-- it_IT
 tags:
-- generated_from_trainer
+- summarization
+language:
+- it
 metrics:
 - rouge
 model-index:
 - name: summarization_mbart_fanpage4epoch
   results: []
+datasets:
+- ARTeLab/fanpage
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# summarization_mbart_fanpage4epoch
+# mbart-summarization-fanpage
 
-This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on Fanpage dataset for Abstractive Summarization.
+
+It achieves the following results:
 - Loss: 2.1833
 - Rouge1: 36.5027
 - Rouge2: 17.4428
@@ -25,19 +24,13 @@ It achieves the following results on the evaluation set:
 - Rougelsum: 30.2636
 - Gen Len: 75.2413
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+## Usage
+
+```python
+from transformers import MBartTokenizer, MBartForConditionalGeneration
+tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-fanpage")
+model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-fanpage")
+```
 
 ### Training hyperparameters
 
@@ -50,13 +43,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 4.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.15.0.dev0
 - Pytorch 1.10.0+cu102
 - Datasets 1.15.1
-- Tokenizers 0.10.3
+- Tokenizers 0.10.3
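
The Usage snippet this commit adds to the card only loads the checkpoint. Below is a minimal end-to-end sketch of actually producing a summary with it, using standard Transformers generation APIs; the sample `article` text and the `num_beams`/`max_length` settings are illustrative assumptions, not values taken from the card or the commit.

```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

# Load the fine-tuned checkpoint named in the updated card.
tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-fanpage")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-fanpage")

# Any Italian news article; this placeholder text is illustrative.
article = "Il governo ha approvato oggi un nuovo pacchetto di misure economiche..."

# Tokenize the source text, generate with beam search, and decode the summary.
# num_beams and max_length are illustrative choices, not from the model card.
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=130)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```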