---
license: cc0-1.0
language:
- bal
- bcc
- glk
- brh
- sdh
- kur
- hac
- kiu
- zza
- twi
- fat
- aka
- azb
- uzs

configs:
- config_name: azb_Arab
  data_files: "azb_Arab/azb_Arab.csv"
- config_name: bal_Arab
  data_files: "bal_Arab/bal_Arab.csv"
- config_name: brh_Arab
  data_files: "brh_Arab/brh_Arab.csv"
- config_name: fat_Latn
  data_files: "fat_Latn/fat_Latn.csv"
- config_name: glk_Arab
  data_files: "glk_Arab/glk_Arab.csv"
- config_name: hac_Arab
  data_files: "hac_Arab/hac_Arab.csv"
- config_name: kiu_Latn
  data_files: "kiu_Latn/kiu_Latn.csv"
- config_name: sdh_Arab
  data_files: "sdh_Arab/sdh_Arab.csv"
- config_name: twi_Latn
  data_files: "twi_Latn/twi_Latn.csv"
- config_name: uzs_Arab
  data_files: "uzs_Arab/uzs_Arab.csv"

pretty_name: GlotSparse Corpus
---

# GlotSparse Corpus

A collection of text from news websites in low-resource languages.

- **Homepage:** [homepage](https://github.com/cisnlp/GlotSparse)
- **Repository:** [github](https://github.com/cisnlp/GlotSparse)
- **Paper:** [paper](https://arxiv.org/abs/2310.16248)
- **Point of Contact:** [email protected]

These languages are supported:

```
('azb_Arab', 'South-Azerbaijani_Arab')
('bal_Arab', 'Balochi_Arab')
('brh_Arab', 'Brahui_Arab')
('fat_Latn', 'Fanti_Latn') # aka
('glk_Arab', 'Gilaki_Arab')
('hac_Arab', 'Gurani_Arab')
('kiu_Latn', 'Kirmanjki_Latn') # zza
('sdh_Arab', 'Southern-Kurdish_Arab')
('twi_Latn', 'Twi_Latn') # aka
('uzs_Arab', 'Southern-Uzbek_Arab')
```
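
For programmatic use, the same mapping can be kept as a small Python dict. This is just a convenience sketch; the entries are copied from the list above and may need updating as languages are added:

```python
# Config name -> language/script label, copied from the list above.
LANGUAGES = {
    'azb_Arab': 'South-Azerbaijani_Arab',
    'bal_Arab': 'Balochi_Arab',
    'brh_Arab': 'Brahui_Arab',
    'fat_Latn': 'Fanti_Latn',
    'glk_Arab': 'Gilaki_Arab',
    'hac_Arab': 'Gurani_Arab',
    'kiu_Latn': 'Kirmanjki_Latn',
    'sdh_Arab': 'Southern-Kurdish_Arab',
    'twi_Latn': 'Twi_Latn',
    'uzs_Arab': 'Southern-Uzbek_Arab',
}
```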

## Usage (HF Loader)
Replace `twi_Latn` with your specific language.
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/GlotSparse', 'twi_Latn')
print(dataset['train'][0]) # First row of Twi_Latn
```
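
To work with all languages at once, one option is to loop over the config names. A minimal sketch (the config list is copied from this card and may need updating as languages are added):

```python
from datasets import load_dataset

# Config names as listed in this card.
configs = [
    'azb_Arab', 'bal_Arab', 'brh_Arab', 'fat_Latn', 'glk_Arab',
    'hac_Arab', 'kiu_Latn', 'sdh_Arab', 'twi_Latn', 'uzs_Arab',
]

for name in configs:
    ds = load_dataset('cis-lmu/GlotSparse', name)
    print(name, ds['train'].num_rows)
```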

## Download
If you are not a fan of the HF data loader, or are only interested in a specific language, you can download the file directly. Replace `twi_Latn` with your specific language.

```bash
wget https://huggingface.co/datasets/cis-lmu/GlotSparse/resolve/main/twi_Latn/twi_Latn.csv
```
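
Each per-language file is a plain CSV, so it can also be inspected with pandas. A minimal sketch using the file downloaded above (no column names are assumed here; inspect them from the file itself):

```python
import pandas as pd

# Read the downloaded CSV and look at its columns and first rows.
df = pd.read_csv('twi_Latn.csv')
print(df.columns.tolist())
print(df.head())
```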


## Sources

- **Balochi (bal)**
  - News: https://sunnionline.us/balochi/
  - Stories: https://kissah.org/
  - Diverse content such as poems, stories, posts, etc.: https://baask.com/archive/category/balochi/

- **Gilaki (glk)**
  - Social Media: The original source of this content is Twitter, but Twitter typically does not support Gilaki in its language identifier because Gilaki is a low-resource language. We obtained this content from a Telegram channel (https://t.me/gilaki_twitter) that re-posts Gilaki Twitter content. The channel admins are native Gilaki speakers, and the tweets were selected after manual inspection. At present, there is no readily available mapping back to the original Twitter IDs. The main reason Twitter content is reposted on Telegram in Iran is that Telegram is easier to access than Twitter.

- **Brahui (brh)**
  - News: https://talarbrahui.com/category/news/ and https://talarbrahui.com/category/articles/

- **Southern-Kurdish (sdh)**
  - News: https://shafaq.com/ku/ (Feyli)

- **Gurani (hac)**
  - News: https://anfsorani.com/هۆرامی (Hawrami)

- **Kirmanjki (kiu)**
  - News: https://anfkirmancki.com/

- **Fanti (fat)**
  - News: https://akannews.com/fante/
 
- **Twi (twi)**
  - News: https://akannews.com/asante-twi/

- **South-Azerbaijani (azb)**
  - News: https://www.trt.net.tr/turki/
 
- **Southern Uzbek (uzs)**
  - News: https://www.trt.net.tr/afghaniuzbek/

## Tools

To compute the script of each text and remove unwanted languages, we used GlotScript ([code](https://github.com/cisnlp/GlotScript) and [paper](https://arxiv.org/abs/2309.13320)).
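
GlotScript is the tool actually used for this filtering. Purely as an illustration of what "computing the script of each text" means, here is a rough stand-in (not GlotScript's API) based on Python's `unicodedata`:

```python
import unicodedata
from collections import Counter

def dominant_script(text: str) -> str:
    """Rough script guess: tally the first word of each letter's Unicode name."""
    counts = Counter()
    for ch in text:
        if ch.isalpha():
            # Unicode names start with the script, e.g. "ARABIC LETTER SEEN".
            name = unicodedata.name(ch, 'UNKNOWN')
            counts[name.split()[0]] += 1
    return counts.most_common(1)[0][0] if counts else 'UNKNOWN'

print(dominant_script('سڵاو'))    # ARABIC
print(dominant_script('Akwaaba'))  # LATIN
```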


## License
We do not own any of the text from which this data has been extracted.
We license the actual packaging, the metadata, and the annotations of this data under CC0-1.0 (waiving all rights under copyright law).

If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at [email protected].


## Ethical Considerations

**1. Biases:** The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. Users should critically evaluate the text in context, especially for **news sources** and **social media** (e.g., sunnionline, twitter, ...).

**2. Representativeness:** While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.

**3. Ethics:** We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.

**4. Robots.txt:** We respect robots.txt; see https://palewi.re/docs/news-homepages/openai-gptbot-robotstxt.html



## Github
We also host a GitHub version that presents similar metadata from other sources:
https://github.com/cisnlp/GlotSparse

## Citation
If you use any part of this code or data in your research, please cite it using the following BibTeX entry.
All sources listed under news or social media, and any source without an explicitly mentioned dataset, were crawled and compiled in this work.
This work is part of the [GlotLID](https://github.com/cisnlp/GlotLID) project.

```
@inproceedings{kargaran2023glotlid,
  title={{GlotLID}: Language Identification for Low-Resource Languages},
  author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
  booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
  year={2023},
  url={https://openreview.net/forum?id=dl4e3EBz5j}
}

```