---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- recall
- accuracy
model-index:
- name: multibert_1210seed25
  results: []
---

# multibert_1210seed25

This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4453
- Precision: 0.8647
- Recall: 0.8314
- F-measure: 0.8459
- Accuracy: 0.9141
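
The card does not state the task, but entity-level precision/recall/F-measure reported alongside token accuracy suggest token classification (e.g. NER). Below is a minimal, hypothetical usage sketch under that assumption; the repository id shown is the model name from this card and may need the owner's namespace prefix on the Hub.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Assumption: the model is a token-classification head on bert-base-multilingual-uncased.
# Replace the repo id with the full "<owner>/multibert_1210seed25" path if needed.
model_id = "multibert_1210seed25"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # group sub-word pieces into whole entities
)
print(ner("Replace this with a sentence in the language the model was trained on."))
```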

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
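
For reference, a sketch of `TrainingArguments` mirroring the hyperparameters listed above. The output directory, evaluation/save strategies, and anything not listed in the card are assumptions or library defaults (the stated Adam betas and epsilon are the optimizer defaults and need not be set explicitly).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="multibert_1210seed25",   # assumed output path
    learning_rate=7.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=25,
    lr_scheduler_type="linear",
    num_train_epochs=14,
    evaluation_strategy="epoch",          # assumption: the results table reports per-epoch validation
    save_strategy="epoch",
)
```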

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F-measure | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:|
| 0.6013        | 1.0   | 236  | 0.4080          | 0.8974     | 0.6857 | 0.7273    | 0.8736   |
| 0.319         | 2.0   | 472  | 0.3621          | 0.8338     | 0.7306 | 0.7317    | 0.8875   |
| 0.1929        | 3.0   | 708  | 0.3823          | 0.8020     | 0.7680 | 0.7761    | 0.9022   |
| 0.1389        | 4.0   | 944  | 0.4353          | 0.8400     | 0.7742 | 0.7990    | 0.9003   |
| 0.0958        | 5.0   | 1180 | 0.4348          | 0.8726     | 0.7547 | 0.7971    | 0.9011   |
| 0.0676        | 6.0   | 1416 | 0.4453          | 0.8647     | 0.8314 | 0.8459    | 0.9141   |
| 0.0506        | 7.0   | 1652 | 0.5222          | 0.8555     | 0.8013 | 0.8253    | 0.9100   |
| 0.0315        | 8.0   | 1888 | 0.5192          | 0.8700     | 0.7873 | 0.8187    | 0.9108   |
| 0.0229        | 9.0   | 2124 | 0.5977          | 0.8402     | 0.7839 | 0.8079    | 0.9062   |
| 0.0149        | 10.0  | 2360 | 0.6061          | 0.8622     | 0.8069 | 0.8305    | 0.9131   |
| 0.0122        | 11.0  | 2596 | 0.5894          | 0.8419     | 0.7702 | 0.7983    | 0.9085   |
| 0.0065        | 12.0  | 2832 | 0.6120          | 0.8514     | 0.7700 | 0.8021    | 0.9089   |
| 0.0039        | 13.0  | 3068 | 0.6434          | 0.8437     | 0.7646 | 0.7965    | 0.9055   |
| 0.003         | 14.0  | 3304 | 0.6391          | 0.8403     | 0.7670 | 0.7973    | 0.9062   |
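
The precision, recall, and F-measure columns appear to be entity-level scores of the kind produced by seqeval, while accuracy is token-level. A hedged sketch of how such metrics can be computed with the `evaluate` library (requires `pip install evaluate seqeval`; the label sequences below are illustrative only, since the evaluation data and label set are not given in this card):

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Illustrative IOB-tagged sequences; the real label set is not documented here.
references = [["B-PER", "I-PER", "O", "B-LOC"]]
predictions = [["B-PER", "I-PER", "O", "O"]]

scores = seqeval.compute(predictions=predictions, references=references)
print(
    scores["overall_precision"],
    scores["overall_recall"],
    scores["overall_f1"],
    scores["overall_accuracy"],
)
```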


### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1