---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: LegalBert_NoDuplicates_20Partition_5000WordsFrequency
  results: []
---

# LegalBert_NoDuplicates_20Partition_5000WordsFrequency

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4410
- Accuracy: 0.9049
- F1 Macro: 0.8387
- F1 Class 0: 0.9350
- F1 Class 1: 0.6000
- F1 Class 2: 0.9290
- F1 Class 3: 0.8000
- F1 Class 4: 0.9014
- F1 Class 5: 0.9388
- F1 Class 6: 0.8119
- F1 Class 7: 0.9317
- F1 Class 8: 0.9804
- F1 Class 9: 0.8595
- F1 Class 10: 0.8834
- F1 Class 11: 0.6087
- F1 Class 12: 0.8280
- F1 Class 13: 0.8333
- F1 Class 14: 0.8808
- F1 Class 15: 0.5588
- F1 Class 16: 0.7273
- F1 Class 17: 0.9799
- F1 Class 18: 0.8440
- F1 Class 19: 0.9412

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
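As a minimal sketch, these settings correspond to a plain `transformers.Trainer` run roughly as follows. The training/evaluation datasets and the 20-label head are assumptions (the card does not document them; the 20 labels are inferred from the per-class F1 columns below):

```python
# Sketch only: the datasets are not published with this card, so the
# Trainer call is left commented out. Label count (20) is an assumption
# based on the F1 Class 0-19 columns reported in the results table.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "nlpaueb/legal-bert-base-uncased",
    num_labels=20,
)

training_args = TrainingArguments(
    output_dir="LegalBert_NoDuplicates_20Partition_5000WordsFrequency",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="steps",  # assumption: the table logs eval every 250 steps
    eval_steps=250,
    # The Adam betas/epsilon listed above are the library defaults.
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # datasets not provided
# trainer.train()
```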
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 | F1 Class 6 | F1 Class 7 | F1 Class 8 | F1 Class 9 | F1 Class 10 | F1 Class 11 | F1 Class 12 | F1 Class 13 | F1 Class 14 | F1 Class 15 | F1 Class 16 | F1 Class 17 | F1 Class 18 | F1 Class 19 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.4309 | 0.44 | 250 | 0.9084 | 0.7836 | 0.5020 | 0.8895 | 0.0000 | 0.8333 | 0.6111 | 0.5205 | 0.6966 | 0.4928 | 0.7551 | 0.8496 | 0.7397 | 0.8260 | 0.0000 | 0.6420 | 0.0000 | 0.5325 | 0.0000 | 0.0000 | 0.9700 | 0.6813 | 0.0000 |
| 0.6988 | 0.88 | 500 | 0.5860 | 0.8602 | 0.6434 | 0.9078 | 0.0000 | 0.8871 | 0.6829 | 0.7907 | 0.7835 | 0.7579 | 0.9091 | 0.9293 | 0.8421 | 0.8542 | 0.0000 | 0.7524 | 0.3448 | 0.8243 | 0.0000 | 0.0000 | 0.9736 | 0.8000 | 0.8276 |
| 0.4866 | 1.33 | 750 | 0.5249 | 0.8765 | 0.7060 | 0.9189 | 0.0000 | 0.8866 | 0.8511 | 0.8095 | 0.8269 | 0.8387 | 0.9375 | 0.9412 | 0.7883 | 0.8801 | 0.1538 | 0.7870 | 0.8085 | 0.8203 | 0.0976 | 0.0000 | 0.9690 | 0.8348 | 0.9697 |
| 0.4198 | 1.77 | 1000 | 0.4760 | 0.8796 | 0.7172 | 0.9177 | 0.0000 | 0.9137 | 0.8000 | 0.8406 | 0.7957 | 0.7957 | 0.9308 | 0.9524 | 0.7612 | 0.8718 | 0.2857 | 0.8012 | 0.8085 | 0.8344 | 0.2800 | 0.0000 | 0.9799 | 0.8333 | 0.9412 |
| 0.3275 | 2.21 | 1250 | 0.4650 | 0.8867 | 0.7405 | 0.9221 | 0.0000 | 0.9201 | 0.8000 | 0.8608 | 0.8421 | 0.8043 | 0.9367 | 0.9259 | 0.8372 | 0.8731 | 0.3478 | 0.8121 | 0.8085 | 0.8407 | 0.5312 | 0.0000 | 0.9737 | 0.8333 | 0.9412 |
| 0.2874 | 2.65 | 1500 | 0.4662 | 0.8916 | 0.7792 | 0.9221 | 0.6000 | 0.9160 | 0.8000 | 0.7879 | 0.8224 | 0.7959 | 0.9325 | 0.9804 | 0.8430 | 0.8896 | 0.5000 | 0.7862 | 0.8085 | 0.8693 | 0.5634 | 0.0000 | 0.9829 | 0.8421 | 0.9412 |
| 0.2563 | 3.10 | 1750 | 0.4427 | 0.8978 | 0.7627 | 0.9310 | 0.2500 | 0.9272 | 0.8000 | 0.8800 | 0.8515 | 0.8602 | 0.9383 | 0.9615 | 0.8438 | 0.8896 | 0.3333 | 0.8182 | 0.8085 | 0.8542 | 0.5574 | 0.0000 | 0.9784 | 0.8302 | 0.9412 |
| 0.2206 | 3.54 | 2000 | 0.4378 | 0.8996 | 0.7920 | 0.9298 | 0.6000 | 0.9251 | 0.8000 | 0.9067 | 0.8468 | 0.8200 | 0.9317 | 0.9804 | 0.8413 | 0.8913 | 0.5217 | 0.8208 | 0.8333 | 0.8571 | 0.5574 | 0.0000 | 0.9829 | 0.8519 | 0.9412 |
| 0.1966 | 3.98 | 2250 | 0.4262 | 0.9031 | 0.8378 | 0.9361 | 0.6000 | 0.9247 | 0.8000 | 0.9067 | 0.9143 | 0.8235 | 0.9317 | 0.9709 | 0.8739 | 0.8820 | 0.5714 | 0.8239 | 0.8571 | 0.8658 | 0.5846 | 0.7273 | 0.9784 | 0.8421 | 0.9412 |
| 0.1565 | 4.42 | 2500 | 0.4355 | 0.9075 | 0.8394 | 0.9390 | 0.6000 | 0.9307 | 0.8000 | 0.9315 | 0.9184 | 0.8235 | 0.9317 | 0.9703 | 0.8889 | 0.8872 | 0.5833 | 0.8317 | 0.8085 | 0.8725 | 0.5758 | 0.7273 | 0.9829 | 0.8440 | 0.9412 |
| 0.1490 | 4.87 | 2750 | 0.4410 | 0.9049 | 0.8387 | 0.9350 | 0.6000 | 0.9290 | 0.8000 | 0.9014 | 0.9388 | 0.8119 | 0.9317 | 0.9804 | 0.8595 | 0.8834 | 0.6087 | 0.8280 | 0.8333 | 0.8808 | 0.5588 | 0.7273 | 0.9799 | 0.8440 | 0.9412 |

### Framework versions

- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
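The accuracy, macro F1, and per-class F1 columns in the table above can be produced by a `compute_metrics` function passed to the `Trainer`. The card does not state how the metrics were computed; below is a minimal sketch assuming scikit-learn:

```python
# Sketch: metrics matching the results table (accuracy, macro F1, F1 per class).
# NUM_LABELS = 20 is an assumption inferred from the F1 Class 0-19 columns.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

NUM_LABELS = 20

def compute_metrics(eval_pred):
    """Return accuracy, macro F1, and one F1 score per class, as reported above."""
    logits, y_true = eval_pred
    y_pred = np.argmax(logits, axis=-1)
    per_class = f1_score(y_true, y_pred, average=None, labels=list(range(NUM_LABELS)))
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
    }
    metrics.update({f"f1_class_{i}": score for i, score in enumerate(per_class)})
    return metrics
```

With `average=None`, `f1_score` returns one value per label, which is consistent with the table's behavior of reporting 0.0 for sparsely represented classes (e.g. classes 1, 15, and 16) early in training before any of their examples are predicted correctly.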