mehran committed on
Commit
0235104
1 Parent(s): 1f443ff

Update README.md

Files changed (1)
  1. README.md +92 -13
README.md CHANGED
@@ -48,14 +48,16 @@ Each line of a file is a sample formatted in JSON with the following layout:

 ```json
 {
- "id": <integer>,
  "text": "<A Farsi sentence>",
- "source": "<One of the aformentioned sources>"
 }
 ```

 ## Data curation process

 The value of this dataset lies in its preprocessing of the text. The main struggle when working with Farsi text is that, for historical reasons, many different encodings are in use for storing it. On top of that comes the complexity of multiple character codes for the same letter. In Farsi, the shape of a character depends on its neighbouring characters. For example, consider the final letter of the Farsi alphabet, "Ye":

 It has a standalone shape:
@@ -66,22 +68,99 @@ But when surrounded with other characters, its middle form is used:

 <pre><font size="7">&#64511;</font></pre>

- This requirement is taken care of by "fonts' substitution table" feature. Which will help show the correct form of the words. But at the same time, some text don't rely on the fonts and use the specific code defined for the middle form directly. From the reader point of the view, both will look identical but printing the code, you'll have different numbers. This complicates text processing in Farsi since we need to identify each character with a unique code regardless of their position in the word. On top of that, add the problem of using Arabic characters to type Farsi text. Again, since the two languages share very similar alphabets, one can successfully read a text in Farsi while it's been typed by Arabic characters since they look very similar in shape.

- To address these problems, the preprocessing used in Jomle tries its best to map all the different characters that look alike to the Farsi counterpart. This is not an exact science but based on the best effort.

- The same could be said about digits and punctuations.

 In the end, any character found in the Jomleh dataset is one of the following:

- - a Farsi alphabet letter (`آ` to `ی`)
- - a Farsi digit (`۱` to `۰`)
- - a Zero-width non-joiner (`\u200c`)
- - a Space
- - a Dot/period (`.`)
- - an exclamation mark (`!`)
- - a Farsi question mark (`؟`)
- - a Farsi comma (`،`)

 Any other character found in the text is eliminated on a best-effort basis, and if removing such characters could harm the integrity or meaning of a sentence, that sentence is removed from the dataset altogether.
 ```json
 {
+ "id": <A sequential integer>,
  "text": "<A Farsi sentence>",
+ "source": "<One of: []>"
 }
 ```

 ## Data curation process

+ ### 1. Preprocessing
+
 The value of this dataset lies in its preprocessing of the text. The main struggle when working with Farsi text is that, for historical reasons, many different encodings are in use for storing it. On top of that comes the complexity of multiple character codes for the same letter. In Farsi, the shape of a character depends on its neighbouring characters. For example, consider the final letter of the Farsi alphabet, "Ye":

 It has a standalone shape:
 
 <pre><font size="7">&#64511;</font></pre>

+ This requirement is usually handled by the substitution-table feature of fonts, which displays the correct form of each letter. At the same time, some text does not rely on fonts and instead directly uses the specific code point assigned to a particular form of a letter. To the reader both look identical, but printing the codes yields different numbers. This complicates text processing in Farsi, since each character needs a unique code regardless of its position in the word. On top of that comes the problem of Arabic characters, which are sometimes used to type Farsi text: since the two alphabets are visually very similar, a text in Farsi remains perfectly readable even when typed with Arabic characters.
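To illustrate with Python's standard `unicodedata` module (a sketch of the phenomenon, not the dataset's own code): the medial presentation form of "Ye" shown above and the generic letter are two distinct code points, and Unicode compatibility normalization folds the former into the latter.

```python
import unicodedata

# U+FBFF is the medial presentation form of Farsi Yeh; U+06CC is the
# generic letter. They render identically but are distinct code points.
medial_form = "\uFBFF"
generic_yeh = "\u06CC"

assert medial_form != generic_yeh  # two different code points

# NFKC normalization folds presentation forms back to the generic
# letter, giving every character one canonical code.
assert unicodedata.normalize("NFKC", medial_form) == generic_yeh
```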
+ To address these problems, the preprocessing used in Jomleh tries its best to map all the different characters that look alike to their Farsi counterpart. This is not an exact science but a best effort: for instance, if a sentence is actually an Arabic sentence, the preprocessing script will make things worse. But assuming that all the source text is 100% Farsi, the script should help make it uniform.

+ The same cleaning process also covers digits and punctuation.
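The kind of mapping involved can be sketched as follows (the entries below are a tiny illustrative subset; the authoritative table is in the dataset's preprocessing script):

```python
# Illustrative subset of look-alike mappings; the real preprocessing
# table is far more extensive (letters, digits, and punctuation).
FARSI_MAP = str.maketrans({
    "\u064A": "\u06CC",  # Arabic Yeh  (ي) -> Farsi Yeh   (ی)
    "\u0643": "\u06A9",  # Arabic Kaf  (ك) -> Farsi Keheh (ک)
    "\u0660": "\u06F0",  # Arabic-Indic zero (٠) -> Farsi zero (۰)
})

def normalize_chars(text: str) -> str:
    """Replace visually identical Arabic characters with Farsi ones."""
    return text.translate(FARSI_MAP)
```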
 In the end, any character found in the Jomleh dataset is one of the following:

+ - a Farsi alphabet letter (`ا` to `ی`), or
+ - one of `آ`, `أ`, `ؤ`, `ئ`, or
+ - a Farsi digit (`۰` to `۹`), or
+ - a zero-width non-joiner (`\u200c`), or
+ - a space, or
+ - one of the Farsi punctuation marks (`.`, `!`, `؟`, `،`, `؛`)
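That whitelist can be expressed as one regular expression. The sketch below uses approximate code-point ranges (the exact set is whatever the preprocessing script enforces):

```python
import re

# Approximate whitelist of characters allowed in Jomleh: Farsi letters,
# the four Hamza variants, Farsi digits, ZWNJ, space, and punctuation.
ALLOWED = re.compile(
    "["
    "\u0622\u0623\u0624\u0626"               # آ أ ؤ ئ
    "\u0627-\u063A\u0641\u0642\u0644-\u0648" # base letter ranges (approx.)
    "\u067E\u0686\u0698\u06A9\u06AF\u06CC"   # پ چ ژ ک گ ی
    "\u06F0-\u06F9"                          # Farsi digits ۰ through ۹
    "\u200C "                                # zero-width non-joiner, space
    ".!\u061F\u060C\u061B"                   # . ! ؟ ، ؛
    "]+"
)

def is_allowed(sentence: str) -> bool:
    """True if the sentence contains only whitelisted characters."""
    return ALLOWED.fullmatch(sentence) is not None
```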
 
 
 Any other character found in the text is eliminated on a best-effort basis, and if removing such characters could harm the integrity or meaning of a sentence, that sentence is removed from the dataset altogether.

+ The script used for the preprocessing can be found [here](/datasets/mlengineer-ai/jomleh/blob/main/preprocess.py).
+
+ It is also worth mentioning that the preprocessing script converts the text into the vertical format expected by the third step (deduplication). Simply put, the vertical format replaces each space with a line feed and surrounds the sentence with a `<doc>` tag. Here is a sample converted into vertical format:
+ ```
+ <doc id="poems_merged.txt_3">
+ این
+ درگه
+ ما
+ درگه
+ نومیدی
+ نیست.
+ </doc>
+ ```
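The conversion itself is straightforward; a minimal sketch (the function name is illustrative, the actual logic lives in the preprocessing script):

```python
def to_vertical(sentence: str, doc_id: str) -> str:
    """Put one word per line and wrap the result in a <doc> tag."""
    body = "\n".join(sentence.split())
    return f'<doc id="{doc_id}">\n{body}\n</doc>'
```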
+ The `id` attribute of the `<doc>` tag points to the file the sample comes from.
+
+ This is the command that executes the preprocessing script:
+
+ ```
+ find 1_prepared -name "*.txt" | parallel 'python ./processing/preprocess.py $(basename {}) < {} > ./2_cleaned_vertical/$(basename {})'
+ ```
+
+ ### 2. Merging into one text file
+
+ Once the raw source data is preprocessed, the files are merged into a single large text file. This is easily accomplished with a single command:
+
+ ```
+ cat ./2_cleaned_vertical/* > ./3_temp/clean_merged.vert
+ ```
+
+ ### 3. Deduplication
+
+ Once all the text is transformed into the vertical format and saved in a single text file, the `onion` program is used to eliminate duplicate samples. Onion is available from [this website](https://corpus.tools/wiki/Onion) and is invoked like this:
+
+ ```
+ onion -sm -n 5 -t 0.5 ./3_temp/clean_merged.vert > ./3_temp/deduplicated.vert
+ ```
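The idea behind this step can be approximated by the following sketch (my illustration of n-gram deduplication, not onion's actual algorithm): a document is dropped when more than half of its word 5-grams have already been seen in previously kept documents.

```python
def dedup(docs, n=5, threshold=0.5):
    """Keep a document only if at most `threshold` of its word n-grams
    were already seen in kept documents (sketch of the n-gram idea)."""
    seen = set()
    kept = []
    for words in docs:
        grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        dup = sum(g in seen for g in grams) / len(grams) if grams else 0.0
        if dup <= threshold:
            kept.append(words)
            seen.update(grams)
    return kept
```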
+
+ ### 4. Postprocessing
+
+ The postprocessing involves:
+
+ 1. Converting each sample back from the vertical format into a single line.
+ 2. Mapping the file name in the `id` attribute of the `<doc>` tag to a simpler label, as defined in the [postprocessing script](/datasets/mlengineer-ai/jomleh/blob/main/postprocessing.py).
+ 3. Formatting each sample as JSON Lines (one JSON object per line).
+ 4. Distributing the samples uniformly across 60 files, aiming for roughly the same number of samples per file.
+ 5. Collecting some statistics along the way.
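Steps 1 and 3 can be sketched as follows (an illustration under my own naming, not the actual postprocessing script; the `id` and `source` values here are placeholders):

```python
import json
import re

def vertical_to_jsonl(vert: str, source: str, start_id: int = 0) -> str:
    """Turn <doc> blocks back into one JSON object per line (sketch)."""
    lines = []
    pattern = re.compile(r'<doc id="[^"]*">\n(.*?)\n</doc>', flags=re.S)
    for i, m in enumerate(pattern.finditer(vert)):
        # Undo the vertical format: line feeds become spaces again.
        sentence = m.group(1).replace("\n", " ")
        lines.append(json.dumps(
            {"id": start_id + i, "text": sentence, "source": source},
            ensure_ascii=False))
    return "\n".join(lines)
```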
+
+ These steps are run using the following command:
+
+ ```
+ python ./postprocess.py ./3_temp < ./3_temp/deduplicated.vert | parallel "echo '{}' | python ./processing/add_id.py ./3_temp ./jomleh/files"
+ ```
+
+ ### 5. Compressing the files
+
+ This can be done using the following command:
+
+ ```
+ find ./jomleh/files/*.jsonl -type f | parallel 'zstd {}'
+ ```
+
+ ### 6. Generating the checksum file
+
+ ```
+ ls ./jomleh/files/*.zst | sort -t _ -k 2 -n | xargs sha256sum > ./jomleh/files/checksum.sha256
+ ```
+
+ After applying all these steps, we are left with a dataset having these characteristics:
+
+ | | Some statistics on the collected sentences |
+ |---:|:---|
+ | Total number of sentences: | 123 |
+ | Average number of characters in a sentence: | 123 |
+ | Average number of words in a sentence: | 123 |
+ | Standard deviation for the number of words in a sentence: | 123 |
+ | Average number of letters in a word: | 123 |
+ | Standard deviation for the number of letters in a word: | 123 |