Badr Abdullah committed on
Commit d0bc96f
1 Parent(s): dd3058f

Upload tokenizer

Files changed (5)
  1. README.md +199 -0
  2. added_tokens.json +4 -0
  3. special_tokens_map.json +6 -0
  4. tokenizer_config.json +47 -0
  5. vocab.json +232 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "</s>": 231,
+ "<s>": 230
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token": "<s>",
+ "eos_token": "</s>",
+ "pad_token": "[PAD]",
+ "unk_token": "[UNK]"
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,47 @@
+ {
+ "added_tokens_decoder": {
+ "228": {
+ "content": "[UNK]",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": false
+ },
+ "229": {
+ "content": "[PAD]",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": true,
+ "single_word": false,
+ "special": false
+ },
+ "230": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "231": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": true,
+ "do_lower_case": false,
+ "eos_token": "</s>",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "[PAD]",
+ "replace_word_delimiter_char": " ",
+ "target_lang": null,
+ "tokenizer_class": "Wav2Vec2CTCTokenizer",
+ "unk_token": "[UNK]",
+ "word_delimiter_token": "|"
+ }
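
The configuration above declares a character-level `Wav2Vec2CTCTokenizer` with `|` as the word delimiter and `[PAD]`/`[UNK]` drawn from the base vocabulary, while `<s>`/`</s>` come in as added tokens. A minimal sketch of loading these files with 🤗 Transformers is shown below; `./tokenizer_dir` is a hypothetical local directory assumed to contain the four files from this commit (`vocab.json`, `tokenizer_config.json`, `special_tokens_map.json`, `added_tokens.json`).

```python
# Minimal sketch (assumption): load the uploaded tokenizer files from a local
# directory. "./tokenizer_dir" is a hypothetical path holding vocab.json,
# tokenizer_config.json, special_tokens_map.json and added_tokens.json.
from transformers import Wav2Vec2CTCTokenizer

tok = Wav2Vec2CTCTokenizer.from_pretrained("./tokenizer_dir")

# Special-token ids as defined in vocab.json and added_tokens.json
print(tok.pad_token, tok.pad_token_id)  # [PAD] 229
print(tok.unk_token, tok.unk_token_id)  # [UNK] 228
print(tok.bos_token, tok.bos_token_id)  # <s> 230
print(tok.eos_token, tok.eos_token_id)  # </s> 231
```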
vocab.json ADDED
@@ -0,0 +1,232 @@
+ {
+ "[PAD]": 229,
+ "[UNK]": 228,
+ "|": 0,
+ "ሀ": 1,
+ "ሁ": 2,
+ "ሂ": 3,
+ "ሃ": 4,
+ "ሄ": 5,
+ "ህ": 6,
+ "ሆ": 7,
+ "ለ": 8,
+ "ሉ": 9,
+ "ሊ": 10,
+ "ላ": 11,
+ "ሌ": 12,
+ "ል": 13,
+ "ሎ": 14,
+ "ሏ": 15,
+ "ሐ": 16,
+ "ሑ": 17,
+ "ሓ": 18,
+ "ሔ": 19,
+ "ሕ": 20,
+ "መ": 21,
+ "ሙ": 22,
+ "ሚ": 23,
+ "ማ": 24,
+ "ሜ": 25,
+ "ም": 26,
+ "ሞ": 27,
+ "ሟ": 28,
+ "ሠ": 29,
+ "ሡ": 30,
+ "ሣ": 31,
+ "ሥ": 32,
+ "ሦ": 33,
+ "ረ": 34,
+ "ሩ": 35,
+ "ሪ": 36,
+ "ራ": 37,
+ "ሬ": 38,
+ "ር": 39,
+ "ሮ": 40,
+ "ሯ": 41,
+ "ሰ": 42,
+ "ሱ": 43,
+ "ሲ": 44,
+ "ሳ": 45,
+ "ሴ": 46,
+ "ስ": 47,
+ "ሶ": 48,
+ "ሷ": 49,
+ "ሸ": 50,
+ "ሹ": 51,
+ "ሺ": 52,
+ "ሻ": 53,
+ "ሼ": 54,
+ "ሽ": 55,
+ "ሾ": 56,
+ "ቀ": 57,
+ "ቁ": 58,
+ "ቂ": 59,
+ "ቃ": 60,
+ "ቄ": 61,
+ "ቅ": 62,
+ "ቆ": 63,
+ "ቋ": 64,
+ "በ": 65,
+ "ቡ": 66,
+ "ቢ": 67,
+ "ባ": 68,
+ "ቤ": 69,
+ "ብ": 70,
+ "ቦ": 71,
+ "ቧ": 72,
+ "ቨ": 73,
+ "ቪ": 74,
+ "ቫ": 75,
+ "ቭ": 76,
+ "ቮ": 77,
+ "ተ": 78,
+ "ቱ": 79,
+ "ቲ": 80,
+ "ታ": 81,
+ "ቴ": 82,
+ "ት": 83,
+ "ቶ": 84,
+ "ቷ": 85,
+ "ቸ": 86,
+ "ቹ": 87,
+ "ቺ": 88,
+ "ቻ": 89,
+ "ቼ": 90,
+ "ች": 91,
+ "ቾ": 92,
+ "ቿ": 93,
+ "ኃ": 94,
+ "ኅ": 95,
+ "ኋ": 96,
+ "ነ": 97,
+ "ኑ": 98,
+ "ኒ": 99,
+ "ና": 100,
+ "ኔ": 101,
+ "ን": 102,
+ "ኖ": 103,
+ "ኗ": 104,
+ "ኘ": 105,
+ "ኙ": 106,
+ "ኛ": 107,
+ "ኝ": 108,
+ "ኞ": 109,
+ "ኟ": 110,
+ "አ": 111,
+ "ኡ": 112,
+ "ኢ": 113,
+ "ኣ": 114,
+ "ኤ": 115,
+ "እ": 116,
+ "ኦ": 117,
+ "ከ": 118,
+ "ኩ": 119,
+ "ኪ": 120,
+ "ካ": 121,
+ "ኬ": 122,
+ "ክ": 123,
+ "ኮ": 124,
+ "ኳ": 125,
+ "ኸ": 126,
+ "ኽ": 127,
+ "ወ": 128,
+ "ዊ": 129,
+ "ዋ": 130,
+ "ዌ": 131,
+ "ው": 132,
+ "ዎ": 133,
+ "ዐ": 134,
+ "ዑ": 135,
+ "ዒ": 136,
+ "ዓ": 137,
+ "ዕ": 138,
+ "ዖ": 139,
+ "ዘ": 140,
+ "ዙ": 141,
+ "ዚ": 142,
+ "ዛ": 143,
+ "ዜ": 144,
+ "ዝ": 145,
+ "ዞ": 146,
+ "ዢ": 147,
+ "ዣ": 148,
+ "ዤ": 149,
+ "ዥ": 150,
+ "ዦ": 151,
+ "የ": 152,
+ "ዩ": 153,
+ "ያ": 154,
+ "ዬ": 155,
+ "ይ": 156,
+ "ዮ": 157,
+ "ደ": 158,
+ "ዱ": 159,
+ "ዲ": 160,
+ "ዳ": 161,
+ "ዴ": 162,
+ "ድ": 163,
+ "ዶ": 164,
+ "ዷ": 165,
+ "ጀ": 166,
+ "ጁ": 167,
+ "ጂ": 168,
+ "ጃ": 169,
+ "ጄ": 170,
+ "ጅ": 171,
+ "ጆ": 172,
+ "ገ": 173,
+ "ጉ": 174,
+ "ጊ": 175,
+ "ጋ": 176,
+ "ጌ": 177,
+ "ግ": 178,
+ "ጎ": 179,
+ "ጓ": 180,
+ "ጠ": 181,
+ "ጡ": 182,
+ "ጢ": 183,
+ "ጣ": 184,
+ "ጤ": 185,
+ "ጥ": 186,
+ "ጦ": 187,
+ "ጧ": 188,
+ "ጨ": 189,
+ "ጩ": 190,
+ "ጪ": 191,
+ "ጫ": 192,
+ "ጬ": 193,
+ "ጭ": 194,
+ "ጮ": 195,
+ "ጲ": 196,
+ "ጴ": 197,
+ "ጵ": 198,
+ "ጶ": 199,
+ "ጸ": 200,
+ "ጹ": 201,
+ "ጺ": 202,
+ "ጻ": 203,
+ "ጽ": 204,
+ "ጾ": 205,
+ "ጿ": 206,
+ "ፀ": 207,
+ "ፁ": 208,
+ "ፃ": 209,
+ "ፅ": 210,
+ "ፈ": 211,
+ "ፉ": 212,
+ "ፊ": 213,
+ "ፋ": 214,
+ "ፌ": 215,
+ "ፍ": 216,
+ "ፎ": 217,
+ "ፏ": 218,
+ "ፑ": 219,
+ "ፒ": 220,
+ "ፓ": 221,
+ "ፔ": 222,
+ "ፕ": 223,
+ "ፖ": 224,
+ "፡": 225,
+ "።": 226,
+ "፣": 227
+ }
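
Since vocab.json maps individual Ge'ez characters to ids, with `|` (id 0) standing in for spaces, encoding text is a direct character lookup. The sketch below assumes the same hypothetical local directory as above; the example string and the expected ids are read off the table in this file.

```python
# Minimal sketch (assumption): character-level encoding with the vocabulary above.
# Spaces become the word delimiter "|" (id 0); each Ge'ez character maps to its
# id from vocab.json, e.g. "ሰ" -> 42, "ላ" -> 11, "ም" -> 26.
from transformers import Wav2Vec2CTCTokenizer

tok = Wav2Vec2CTCTokenizer.from_pretrained("./tokenizer_dir")  # hypothetical path

ids = tok("ሰላም ነው").input_ids
print(ids)  # expected [42, 11, 26, 0, 97, 132] given the mapping above
```

Note that `decode` on this tokenizer class applies CTC-style collapsing of repeated ids by default, so it is meant for turning model output into text rather than exactly round-tripping an encoded string.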