tiedeman committed
Commit: 559390a
Parent: b2918dd

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,231 @@
+ ---
+ library_name: transformers
+ language:
+ - ace
+ - akl
+ - ban
+ - bcl
+ - bik
+ - btd
+ - bth
+ - bto
+ - bts
+ - btx
+ - bug
+ - ceb
+ - cgc
+ - ch
+ - dtp
+ - en
+ - fil
+ - gor
+ - hil
+ - iba
+ - id
+ - ify
+ - ilo
+ - jv
+ - krj
+ - ljp
+ - mad
+ - mak
+ - mbb
+ - mbt
+ - mg
+ - mhy
+ - mog
+ - mrw
+ - ms
+ - msm
+ - mta
+ - mwv
+ - nia
+ - nij
+ - obo
+ - pag
+ - pam
+ - pau
+ - rej
+ - sas
+ - sda
+ - sml
+ - su
+ - sxn
+ - tbl
+ - war
+
+ tags:
+ - translation
+ - opus-mt-tc-bible
+
+ license: apache-2.0
+ model-index:
+ - name: opus-mt-tc-bible-big-pqw-en
+   results:
+   - task:
+       name: Translation multi-eng
+       type: translation
+       args: multi-eng
+     dataset:
+       name: tatoeba-test-v2020-07-28-v2023-09-26
+       type: tatoeba_mt
+       args: multi-eng
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 27.5
+     - name: chr-F
+       type: chrf
+       value: 0.46571
+ ---
+ # opus-mt-tc-bible-big-pqw-en
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Citation Information](#citation-information)
+ - [Acknowledgements](#acknowledgements)
+
+ ## Model Details
+
+ Neural machine translation model for translating from Western Malayo-Polynesian languages (pqw) to English (en).
+
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the Transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+
+ **Model Description:**
+ - **Developed by:** Language Technology Research Group at the University of Helsinki
+ - **Model Type:** Translation (transformer-big)
+ - **Release:** 2024-08-17
+ - **License:** Apache-2.0
+ - **Language(s):**
+   - Source Language(s): ace akl ban bcl bik btd bth bto bts btx bug ceb cgc cha dtp fil gor hil iba ify ilo ind jak jav krj ljp mad mak max mbb mbt mhy mlg mog mrw msa msm mta mwv nia nij obo pag pam pau plt rej sas sda sml sun sxn tbl tmw war zlm zsm
+   - Target Language(s): eng
+ - **Original Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pqw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
+ - **Resources for more information:**
+   - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/pqw-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
+   - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+   - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
+   - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
+   - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
+   - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
+
+ ## Uses
+
+ This model can be used for translation and text-to-text generation.
+
+ ## Risks, Limitations and Biases
+
+ **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ ## How to Get Started With the Model
+
+ A short code example:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ src_text = [
+     "Momolukis i tanak di Mary do gambar disido.",
+     "Tom sedang membaca buku."
+ ]
+
+ model_name = "Helsinki-NLP/opus-mt-tc-bible-big-pqw-en"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+
+ # expected output:
+ #     Momopakis i takan in Mary do pictures disido.
+ #     Tom is reading a book.
+ ```
+
+ You can also use OPUS-MT models with the transformers pipelines, for example:
+
+ ```python
+ from transformers import pipeline
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-pqw-en")
+ print(pipe("Momolukis i tanak di Mary do gambar disido."))
+
+ # expected output: Momopakis i takan in Mary do pictures disido.
+ ```
+
+ ## Training
+
+ - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ - **Pre-processing**: SentencePiece (spm32k,spm32k); see the tokenizer sketch below
+ - **Model Type:** transformer-big
+ - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pqw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
+ - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+
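+ The `source.spm` and `target.spm` files shipped with this repository are the SentencePiece models referenced above; `MarianTokenizer` loads them automatically. A minimal sketch of inspecting the subword segmentation (the exact pieces printed depend on the trained vocabulary and are only illustrative):
+
+ ```python
+ from transformers import MarianTokenizer
+
+ tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-pqw-en")
+
+ # SentencePiece (spm32k) splits the raw sentence into subword pieces
+ # before they are mapped to vocabulary ids.
+ print(tokenizer.tokenize("Tom sedang membaca buku."))
+ print(tokenizer.vocab_size)  # joint source/target vocabulary (58311 per config.json)
+ ```
+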
+ ## Evaluation
+
+ * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/pqw-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
+ * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
+ * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+
+ | langpair  | testset                              | chr-F   | BLEU | #sent | #words |
+ |-----------|--------------------------------------|---------|------|-------|--------|
+ | multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.46571 | 27.5 | 10000 | 71187  |
+
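+ Scores like those in the table can be recomputed with [sacrebleu](https://github.com/mjpost/sacrebleu), which reports BLEU and chrF on a 0-100 scale (the chr-F value above, 0.46571, corresponds to 46.571). A minimal sketch on illustrative strings rather than the actual 10000-sentence test set:
+
+ ```python
+ import sacrebleu  # pip install sacrebleu
+
+ # Illustrative hypothesis/reference pair; the real evaluation uses the
+ # tatoeba-test-v2020-07-28-v2023-09-26 translations linked above.
+ hyps = ["Tom is reading a book."]
+ refs = [["Tom is reading a book."]]  # one reference stream
+
+ print(sacrebleu.corpus_bleu(hyps, refs).score)  # BLEU, 0-100
+ print(sacrebleu.corpus_chrf(hyps, refs).score)  # chrF, 0-100
+ ```
+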
+ ## Citation Information
+
+ * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
+
+ ```bibtex
+ @article{tiedemann2023democratizing,
+   title={Democratizing neural machine translation with {OPUS-MT}},
+   author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
+   journal={Language Resources and Evaluation},
+   number={58},
+   pages={713--755},
+   year={2023},
+   publisher={Springer Nature},
+   issn={1574-0218},
+   doi={10.1007/s10579-023-09704-w}
+ }
+
+ @inproceedings{tiedemann-thottingal-2020-opus,
+   title = "{OPUS}-{MT} {--} Building open translation services for the World",
+   author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+   booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Lisboa, Portugal",
+   publisher = "European Association for Machine Translation",
+   url = "https://aclanthology.org/2020.eamt-1.61",
+   pages = "479--480",
+ }
+
+ @inproceedings{tiedemann-2020-tatoeba,
+   title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+   author = {Tiedemann, J{\"o}rg},
+   booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Online",
+   publisher = "Association for Computational Linguistics",
+   url = "https://aclanthology.org/2020.wmt-1.139",
+   pages = "1174--1182",
+ }
+ ```
+
+ ## Acknowledgements
+
+ The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
+
+ ## Model conversion info
+
+ * transformers version: 4.45.1
+ * OPUS-MT git hash: 0882077
+ * port time: Tue Oct 8 13:52:53 EEST 2024
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1 @@
+ multi-eng tatoeba-test-v2020-07-28-v2023-09-26 0.46571 27.5 10000 71187
benchmark_translations.zip ADDED
File without changes
config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-pqw-en",
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 58310,
+   "decoder_vocab_size": 58311,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 855,
+   "forced_eos_token_id": null,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": null,
+   "max_position_embeddings": 1024,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": null,
+   "num_hidden_layers": 6,
+   "pad_token_id": 58310,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1",
+   "use_cache": true,
+   "vocab_size": 58311
+ }
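The fields above spell out the transformer-big architecture: 6 encoder and 6 decoder layers, 16 attention heads, d_model 1024, and a shared vocabulary of 58311 entries whose pad token (58310) doubles as the decoder start token. A minimal sketch of reading these values programmatically, assuming the Hub id used in the README examples:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-pqw-en")

# transformer-big dimensions as listed in config.json
print(cfg.encoder_layers, cfg.decoder_layers)    # 6 6
print(cfg.encoder_attention_heads, cfg.d_model)  # 16 1024
print(cfg.vocab_size, cfg.pad_token_id)          # 58311 58310
```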
generation_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_from_model_config": true,
+   "bad_words_ids": [
+     [
+       58310
+     ]
+   ],
+   "bos_token_id": 0,
+   "decoder_start_token_id": 58310,
+   "eos_token_id": 855,
+   "forced_eos_token_id": 855,
+   "max_length": 512,
+   "num_beams": 4,
+   "pad_token_id": 58310,
+   "transformers_version": "4.45.1"
+ }
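`generate()` picks these settings up automatically: beam search with 4 beams, a 512-token length cap, and the pad id banned via `bad_words_ids`. They can be overridden per call; a minimal sketch, reusing the `model` and `tokenizer` objects from the README's getting-started example:

```python
# model and tokenizer as loaded in the README's getting-started example.
inputs = tokenizer(["Tom sedang membaca buku."], return_tensors="pt", padding=True)
outputs = model.generate(
    **inputs,
    num_beams=6,        # override the num_beams=4 default above
    max_new_tokens=64,  # cap generated tokens instead of the max_length=512 default
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```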
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:005957da9b1177e60ca0389d6fd15728c1ce7f74997d8a2a034e3a391c7f75ed
+ size 944534220
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:796b2a4a5c22994baa2aff6d786cd2d2d48ec4506dc52ebb33fcf749c33028dd
+ size 944585477
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16f94d12e5282f2be1e8869c0db556d2504d3208a7e3c49608479c69484fe24d
+ size 783301
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e136873213e4e0cf8d10511d829b34178c391dfe3a7ae56f5451408c5770df9
+ size 798685
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "pqw", "target_lang": "en", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17/pqw-en", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff