wzkariampuzha committed on
Commit
58c7ed9
1 Parent(s): 064c51a

Upload 11 files

README.md CHANGED
@@ -1,5 +1,83 @@
- ---
- license: other
- ---
-
- ## Model Documentation in progress
+ ---
+ tags:
+ - generated_from_trainer
+ datasets:
+ - epi_classify4_gard
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: results
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: epi_classify4_gard
+       type: epi_classify4_gard
+       args: default
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.875
+     - name: Recall
+       type: recall
+       value: 0.9032258064516129
+     - name: F1
+       type: f1
+       value: 0.8888888888888888
+     - name: Accuracy
+       type: accuracy
+       value: 0.986
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # results
+
+ This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the epi_classify4_gard dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0541
+ - Precision: 0.875
+ - Recall: 0.9032
+ - F1: 0.8889
+ - Accuracy: 0.986
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 2
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 4.0
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.12.5
+ - Pytorch 1.9.0+cu102
+ - Datasets 1.12.1
+ - Tokenizers 0.10.3
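
The hyperparameter list in the card maps one-to-one onto `transformers.TrainingArguments`, so the run is straightforward to reproduce in outline. A minimal sketch, assuming single-GPU training and a binary label set (the `output_dir` and `num_labels` values are guesses, not taken from this commit):

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

# Values copied from the "Training hyperparameters" section above.
args = TrainingArguments(
    output_dir="results",            # assumed from the model name "results"
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=2,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4.0,
)

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.2")
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-base-cased-v1.2",
    num_labels=2,  # assumption: predict_results.txt below shows only 0/1 labels
)
```

From here, a `Trainer(model=model, args=args, ...)` over the epi_classify4_gard splits (1000 train / 500 eval samples per the JSON files below) would reproduce the reported step counts.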
all_results.json ADDED
@@ -0,0 +1,17 @@
+ {
+     "epoch": 4.0,
+     "eval_accuracy": 0.986,
+     "eval_f1": 0.8888888888888888,
+     "eval_loss": 0.05408545956015587,
+     "eval_precision": 0.875,
+     "eval_recall": 0.9032258064516129,
+     "eval_runtime": 5.1836,
+     "eval_samples": 500,
+     "eval_samples_per_second": 96.457,
+     "eval_steps_per_second": 12.154,
+     "train_loss": 0.06538131501939562,
+     "train_runtime": 127.4981,
+     "train_samples": 1000,
+     "train_samples_per_second": 31.373,
+     "train_steps_per_second": 1.976
+ }
eval_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+     "epoch": 4.0,
+     "eval_accuracy": 0.986,
+     "eval_f1": 0.8888888888888888,
+     "eval_loss": 0.05408545956015587,
+     "eval_precision": 0.875,
+     "eval_recall": 0.9032258064516129,
+     "eval_runtime": 5.1836,
+     "eval_samples": 500,
+     "eval_samples_per_second": 96.457,
+     "eval_steps_per_second": 12.154
+ }
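
The `eval_*` keys here are what `Trainer.evaluate()` emits when a `compute_metrics` hook returns `precision`, `recall`, `f1`, and `accuracy` (the Trainer adds the `eval_` prefix itself). The hook is not part of this commit; a minimal sketch of one that would produce exactly these fields, assuming binary labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"  # binary task assumed
    )
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "accuracy": accuracy_score(labels, preds),
    }
```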
eval_results.txt ADDED
@@ -0,0 +1,10 @@
+ eval_loss = 0.05408545956015587
+ eval_precision = 0.875
+ eval_recall = 0.9032258064516129
+ eval_f1 = 0.8888888888888888
+ eval_accuracy = 0.986
+ eval_runtime = 5.1836
+ eval_samples_per_second = 96.457
+ eval_steps_per_second = 12.154
+ epoch = 4.0
+ eval_samples = 500
predict_results.txt ADDED
@@ -0,0 +1,104 @@
+ precision = 1.0
+ recall = 0.6
+ f1 = 0.7499999999999999
+ accuracy = 0.8163265306122449
+
+ index prediction
+ 0 0
+ 1 0
+ 2 0
+ 3 1
+ 4 0
+ 5 0
+ 6 0
+ 7 1
+ 8 1
+ 9 0
+ 10 0
+ 11 1
+ 12 0
+ 13 0
+ 14 1
+ 15 0
+ 16 0
+ 17 0
+ 18 0
+ 19 0
+ 20 0
+ 21 0
+ 22 0
+ 23 1
+ 24 0
+ 25 0
+ 26 0
+ 27 1
+ 28 0
+ 29 0
+ 30 1
+ 31 0
+ 32 0
+ 33 0
+ 34 0
+ 35 0
+ 36 0
+ 37 1
+ 38 0
+ 39 1
+ 40 1
+ 41 0
+ 42 0
+ 43 0
+ 44 0
+ 45 1
+ 46 0
+ 47 0
+ 48 0
+ 49 0
+ 50 0
+ 51 0
+ 52 0
+ 53 0
+ 54 0
+ 55 0
+ 56 0
+ 57 1
+ 58 0
+ 59 1
+ 60 1
+ 61 1
+ 62 0
+ 63 0
+ 64 0
+ 65 0
+ 66 0
+ 67 1
+ 68 1
+ 69 0
+ 70 1
+ 71 0
+ 72 0
+ 73 1
+ 74 0
+ 75 0
+ 76 0
+ 77 0
+ 78 0
+ 79 1
+ 80 1
+ 81 0
+ 82 0
+ 83 0
+ 84 1
+ 85 0
+ 86 0
+ 87 0
+ 88 1
+ 89 0
+ 90 0
+ 91 0
+ 92 0
+ 93 0
+ 94 0
+ 95 1
+ 96 1
+ 97 1
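
The header metrics are internally consistent with the 98 predictions above, even though the gold labels are not in the file: 27 rows are predicted 1, and with precision = 1.0 all 27 must be true positives. A quick check of the implied confusion counts:

```python
# Implied by the header: precision 1.0 => no false positives.
tp, fp, total = 27, 0, 98           # 27 ones among the 98 predictions
gold_pos = round(tp / 0.6)          # recall 0.6 => 45 gold positives
fn = gold_pos - tp                  # 18 false negatives
print((total - fn - fp) / total)    # 0.8163265306122449 (accuracy)
print(2 * tp / (2 * tp + fp + fn))  # 0.75 (f1)
```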
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": null, "name_or_path": "dmis-lab/biobert-base-cased-v1.2", "do_basic_tokenize": true, "never_split": null, "tokenizer_class": "BertTokenizer"}
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 4.0,
+     "train_loss": 0.06538131501939562,
+     "train_runtime": 127.4981,
+     "train_samples": 1000,
+     "train_samples_per_second": 31.373,
+     "train_steps_per_second": 1.976
+ }
trainer_state.json ADDED
@@ -0,0 +1,25 @@
+ {
+     "best_metric": null,
+     "best_model_checkpoint": null,
+     "epoch": 4.0,
+     "global_step": 252,
+     "is_hyper_param_search": false,
+     "is_local_process_zero": true,
+     "is_world_process_zero": true,
+     "log_history": [
+         {
+             "epoch": 4.0,
+             "step": 252,
+             "total_flos": 1014634340562720.0,
+             "train_loss": 0.06538131501939562,
+             "train_runtime": 127.4981,
+             "train_samples_per_second": 31.373,
+             "train_steps_per_second": 1.976
+         }
+     ],
+     "max_steps": 252,
+     "num_train_epochs": 4,
+     "total_flos": 1014634340562720.0,
+     "trial_name": null,
+     "trial_params": null
+ }
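
`global_step` ties out with `train_results.json` and the card's hyperparameters: 1000 training samples at batch size 16 give ceil(1000/16) = 63 optimizer steps per epoch, and 4 epochs give the 252 steps recorded here. A quick check, assuming single-GPU training with no gradient accumulation:

```python
import math

train_samples, batch_size, epochs, runtime = 1000, 16, 4, 127.4981
steps = math.ceil(train_samples / batch_size) * epochs
print(steps)                                      # 252, matches global_step/max_steps
print(f"{train_samples * epochs / runtime:.3f}")  # 31.373 samples/s, as logged
print(f"{steps / runtime:.4f}")                   # ~1.9765, logged as 1.976
```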
vocab.txt ADDED
The diff for this file is too large to render. See raw diff