
fine-tuned-roberta-nosql-injection

This model is a fine-tuned version of roberta-base; the fine-tuning dataset is not documented. It achieves the following result on the evaluation set:

  • Loss: 0.0000

Model description

More information needed

Intended uses & limitations

More information needed
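
Since no usage guidance is documented here, the snippet below is only a minimal loading sketch. It assumes the checkpoint is a RoBERTa sequence classifier intended to flag NoSQL-injection payloads, and that the default `LABEL_0`/`LABEL_1` id-to-label mapping applies; neither assumption is confirmed by this card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Repo id taken from this card; label semantics are an assumption.
model_id = "ankush-003/fine-tuned-roberta-nosql-injection"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Example inputs; the exact input format used during training is not documented.
print(classifier('{"username": {"$ne": null}, "password": {"$ne": null}}'))
print(classifier('{"username": "alice", "password": "s3cret"}'))
```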

Training and evaluation data

More information needed
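
The actual training and evaluation data are not documented. Purely as an illustration of what a binary text-classification dataset for this task could look like, and how it might be tokenized with the base RoBERTa tokenizer, here is a hypothetical sketch; the example payloads, labels, and preprocessing choices are all assumptions.

```python
from datasets import Dataset
from transformers import AutoTokenizer

# Hypothetical examples only; the real dataset for this model is unknown.
examples = {
    "text": [
        '{"user": {"$gt": ""}}',  # assumed injection payload -> label 1
        '{"user": "bob"}',        # assumed benign input      -> label 0
    ],
    "label": [1, 0],
}

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
dataset = Dataset.from_dict(examples).map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)
print(dataset)
```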

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 75
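
The listed values map onto `transformers.TrainingArguments` roughly as sketched below. Only the hyperparameters above are documented; the output directory, evaluation strategy, and the rest of the `Trainer` setup are assumptions.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="fine-tuned-roberta-nosql-injection",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=75,
    evaluation_strategy="epoch",  # assumed: the results table reports per-epoch validation loss
)
```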

Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2572        | 1.0   | 158   | 0.2235          |
| 0.1175        | 2.0   | 316   | 0.0325          |
| 0.0454        | 3.0   | 474   | 0.1079          |
| 0.05          | 4.0   | 632   | 0.0212          |
| 0.0677        | 5.0   | 790   | 0.0713          |
| 0.0821        | 6.0   | 948   | 0.0007          |
| 0.0259        | 7.0   | 1106  | 0.0277          |
| 0.0422        | 8.0   | 1264  | 0.0068          |
| 0.0282        | 9.0   | 1422  | 0.0492          |
| 0.0273        | 10.0  | 1580  | 0.0008          |
| 0.0272        | 11.0  | 1738  | 0.0256          |
| 0.0859        | 12.0  | 1896  | 0.0000          |
| 0.0271        | 13.0  | 2054  | 0.0001          |
| 0.0058        | 14.0  | 2212  | 0.0583          |
| 0.0121        | 15.0  | 2370  | 0.0257          |
| 0.0189        | 16.0  | 2528  | 0.0631          |
| 0.0275        | 17.0  | 2686  | 0.0186          |
| 0.006         | 18.0  | 2844  | 0.0027          |
| 0.025         | 19.0  | 3002  | 0.0349          |
| 0.0377        | 20.0  | 3160  | 0.0004          |
| 0.0108        | 21.0  | 3318  | 0.0091          |
| 0.0233        | 22.0  | 3476  | 0.0772          |
| 0.0216        | 23.0  | 3634  | 0.0000          |
| 0.0255        | 24.0  | 3792  | 0.0607          |
| 0.0211        | 25.0  | 3950  | 0.0251          |
| 0.037         | 26.0  | 4108  | 0.0223          |
| 0.0057        | 27.0  | 4266  | 0.0375          |
| 0.0464        | 28.0  | 4424  | 0.0659          |
| 0.0446        | 29.0  | 4582  | 0.0235          |
| 0.0453        | 30.0  | 4740  | 0.0278          |
| 0.0033        | 31.0  | 4898  | 0.0417          |
| 0.0104        | 32.0  | 5056  | 0.0544          |
| 0.0084        | 33.0  | 5214  | 0.0000          |
| 0.0004        | 34.0  | 5372  | 0.0247          |
| 0.0185        | 35.0  | 5530  | 0.0002          |
| 0.0165        | 36.0  | 5688  | 0.0000          |
| 0.0381        | 37.0  | 5846  | 0.0000          |
| 0.0281        | 38.0  | 6004  | 0.0000          |
| 0.006         | 39.0  | 6162  | 0.0085          |
| 0.0083        | 40.0  | 6320  | 0.0000          |
| 0.0101        | 41.0  | 6478  | 0.0006          |
| 0.0282        | 42.0  | 6636  | 0.0003          |
| 0.0202        | 43.0  | 6794  | 0.0205          |
| 0.0053        | 44.0  | 6952  | 0.0275          |
| 0.0293        | 45.0  | 7110  | 0.0485          |
| 0.0119        | 46.0  | 7268  | 0.0000          |
| 0.0045        | 47.0  | 7426  | 0.0000          |
| 0.0066        | 48.0  | 7584  | 0.0268          |
| 0.0191        | 49.0  | 7742  | 0.0103          |
| 0.0007        | 50.0  | 7900  | 0.0386          |
| 0.0072        | 51.0  | 8058  | 0.0000          |
| 0.0031        | 52.0  | 8216  | 0.0000          |
| 0.0037        | 53.0  | 8374  | 0.0225          |
| 0.0135        | 54.0  | 8532  | 0.0003          |
| 0.0015        | 55.0  | 8690  | 0.0002          |
| 0.0066        | 56.0  | 8848  | 0.0025          |
| 0.0281        | 57.0  | 9006  | 0.0145          |
| 0.012         | 58.0  | 9164  | 0.0000          |
| 0.0065        | 59.0  | 9322  | 0.0000          |
| 0.0054        | 60.0  | 9480  | 0.0082          |
| 0.0104        | 61.0  | 9638  | 0.0000          |
| 0.0005        | 62.0  | 9796  | 0.0303          |
| 0.005         | 63.0  | 9954  | 0.0000          |
| 0.0092        | 64.0  | 10112 | 0.0412          |
| 0.0055        | 65.0  | 10270 | 0.0191          |
| 0.0092        | 66.0  | 10428 | 0.0158          |
| 0.0065        | 67.0  | 10586 | 0.0087          |
| 0.0004        | 68.0  | 10744 | 0.0000          |
| 0.0068        | 69.0  | 10902 | 0.0044          |
| 0.0043        | 70.0  | 11060 | 0.0022          |
| 0.0055        | 71.0  | 11218 | 0.0009          |
| 0.0063        | 72.0  | 11376 | 0.0000          |
| 0.0022        | 73.0  | 11534 | 0.0006          |
| 0.0116        | 74.0  | 11692 | 0.0014          |
| 0.0043        | 75.0  | 11850 | 0.0000          |

Framework versions

  • Transformers 4.31.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.11.0

Model tree for ankush-003/fine-tuned-roberta-nosql-injection

  • Fine-tuned from roberta-base