Commit de92e8b by pszemraj
1 Parent(s): 017f950

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
```diff
@@ -16,6 +16,7 @@ model-index:
 # tglobal-large-booksum-WIP
 
 > this is a WIP checkpoint that has been fine-tuned from the vanilla (original) for 10ish epochs. It is **not ready to be used for inference**
+
 This model is a fine-tuned version of [google/long-t5-tglobal-large](https://huggingface.co/google/long-t5-tglobal-large) on the `kmfoda/booksum` dataset.
 It achieves the following results on the evaluation set:
 - Loss: 4.9519
@@ -36,6 +37,7 @@ this is a WIP checkpoint that has been fine-tuned from the vanilla (original) fo
 ## Training and evaluation data
 
 This is **only** fine-tuned on booksum (vs. previous large WIP checkpoint I made that started from a partially-trained `pubmed` checkpoint)
+
 ## Training procedure
 
 ### Training hyperparameters
```
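
For context on how a checkpoint like the one described in this card would eventually be used, below is a minimal sketch of the standard `transformers` loading pattern for a LongT5 seq2seq model. It loads the base `google/long-t5-tglobal-large` id named in the card; the fine-tuned WIP repo id is an assumption and, per the card, the WIP checkpoint is not yet ready for inference, so only substitute it once released.

```python
# Minimal sketch (not the author's usage instructions): loading a LongT5
# summarization checkpoint with Hugging Face transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Base model named in the card; the fine-tuned WIP repo id would be
# substituted here once that checkpoint is ready for inference (assumption).
model_id = "google/long-t5-tglobal-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Long input, e.g. a book chapter similar to the kmfoda/booksum examples.
text = "Chapter 1. It was a bright cold day in April..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)

summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```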