---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: clinical-t5
  results: []
datasets:
- AGBonnet/augmented-clinical-notes
language:
- en
metrics:
- rouge
pipeline_tag: summarization
---

# clinical-t5

This is a finetuned version of Google's T5-small, a checkpoint with 60 million parameters, for clinical note summarization. It was finetuned on the [augmented-clinical-notes](https://huggingface.co/datasets/AGBonnet/augmented-clinical-notes) dataset, available on the Hugging Face Hub.

## Intended uses & limitations

The model was created for learning purposes. Although it was briefly evaluated in [this notebook](https://github.com/hossboll/clinical_nlp/blob/main/clinical_t5_finetuned.ipynb), it should be further refined before any practical use.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Framework versions

- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
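
### How to use

A minimal inference sketch using the `transformers` summarization pipeline. The repo id `hossboll/clinical-t5` is a placeholder assumption, not confirmed by the card; replace it with the actual path of this checkpoint.

```python
from transformers import pipeline

# "hossboll/clinical-t5" is a hypothetical repo id; point it at the
# actual location of this finetuned checkpoint.
summarizer = pipeline("summarization", model="hossboll/clinical-t5")

note = (
    "Patient is a 63-year-old male presenting with chest pain radiating to "
    "the left arm, onset two hours ago. History of hypertension and type 2 "
    "diabetes. ECG shows ST elevation in leads II, III, and aVF."
)

summary = summarizer(note, max_length=64, min_length=8, do_sample=False)
print(summary[0]["summary_text"])
```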
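
### Reproducing the training setup

A sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments`. The column names (`full_note`, `summary`), the `summarize:` prefix, and the truncation lengths are assumptions rather than details taken from the original training notebook; adjust them to the actual dataset schema.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

dataset = load_dataset("AGBonnet/augmented-clinical-notes")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def preprocess(batch):
    # "full_note" and "summary" are assumed column names.
    inputs = tokenizer(
        ["summarize: " + text for text in batch["full_note"]],
        max_length=512,
        truncation=True,
    )
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True)

# Hyperparameters from the card; the Trainer's default Adam betas and
# epsilon already match the values listed above.
args = Seq2SeqTrainingArguments(
    output_dir="clinical-t5",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    seed=42,
    lr_scheduler_type="linear",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```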
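
The card lists ROUGE as the evaluation metric. A brief sketch of scoring generated summaries with the `evaluate` library; the `preds` and `refs` lists here are illustrative stand-ins for model outputs and reference summaries.

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative stand-ins: preds would come from the model, refs from the dataset.
preds = ["63-year-old male admitted with inferior st-elevation myocardial infarction"]
refs = ["63-year-old male with inferior STEMI and history of hypertension"]

scores = rouge.compute(predictions=preds, references=refs)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum scores
```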