mittagessen committed on
Commit
45e5021
1 Parent(s): 7129a0b

Update README.md

Files changed (1)
  1. README.md +46 -0
README.md CHANGED
@@ -16,4 +16,50 @@ configs:
  data_files:
  - split: train
    path: data/train-*
+ pretty_name: OSCAR 2023.1 subset
+ license: cc0-1.0
+ multilinguality:
+ - multilingual
+ source_datasets:
+ - oscar-corpus/OSCAR-2301
+ task_categories:
+ - fill-mask
+ - text-generation
+ task_ids:
+ - language-modeling
+ paperswithcode_id: oscar
+ extra_gated_prompt: >-
+   By filling the form below, you understand that only the metadata and the
+   annotations of OSCAR 23.01 have a cc0-1.0 license, and that the rest of the
+   content is crawled data derived from the November/December 2022 snapshot of
+   Common Crawl, for which the authors of OSCAR **do not** hold any copyright
+   whatsoever.
+ extra_gated_fields:
+   Name: text
+   Email: text
+   Affiliation: text
+   Country: text
+   Usecase: text
+   I have explicitly checked with my jurisdiction and I confirm that downloading OSCAR 2301 is legal in the country/region where I am located right now, and for the use case that I have described above: checkbox
+ tags:
+ - oscar
  ---
+
+ This dataset is a subset of [OSCAR
+ 2023.1](https://oscar-project.github.io/documentation/versions/oscar-2301/)
+ obtained by randomly sampling 50% of the documents from the first 30 JSONL
+ files of each language in the parent corpus and truncating each document to
+ its first 2048 Unicode code points. It therefore contains all languages in
+ OSCAR but drastically oversamples the less frequent languages relative to the
+ larger ones.
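+
+ A minimal sketch of this subsetting procedure (not the exact script used to
+ build the dataset; the directory layout, paths, and the `content` field name
+ follow the upstream OSCAR 23.01 JSONL releases and are assumptions here):
+
+ ```python
+ import glob
+ import json
+ import random
+
+ with open("subset.jsonl", "w", encoding="utf-8") as out:
+     for lang_dir in sorted(glob.glob("OSCAR-2301/*/")):
+         # first 30 JSONL shards of this language
+         for shard in sorted(glob.glob(f"{lang_dir}*.jsonl"))[:30]:
+             with open(shard, encoding="utf-8") as f:
+                 for line in f:
+                     if random.random() < 0.5:          # keep ~50% of documents
+                         doc = json.loads(line)
+                         # Python string slicing counts code points, not bytes
+                         doc["content"] = doc["content"][:2048]
+                         out.write(json.dumps(doc, ensure_ascii=False) + "\n")
+ ```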
+
+ ### Languages
+
+ For convenience, the files for all languages are shipped in a single folder
+ and can be loaded together without manually selecting individual languages.
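+
+ For example, with the `datasets` library (the repository id below is a
+ placeholder for this dataset's actual id):
+
+ ```python
+ from datasets import load_dataset
+
+ # All languages live in the single default config and train split,
+ # so no per-language configuration has to be picked.
+ ds = load_dataset("user/oscar-2301-subset", split="train")  # placeholder id
+ print(ds[0])
+ ```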
+
+ ### Supported Tasks
+
+ This dataset is primarily intended for pretraining tiny multilingual language
+ models with a limited context length (~2048 for tokenization-free byte
+ embeddings), such as [ByteLlama](https://github.com/mittagessen/bytellama).
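+
+ As a toy illustration of such byte-level inputs (not code from the ByteLlama
+ repository): each UTF-8 byte becomes one input id, so non-ASCII characters use
+ several ids and a 2048-id context may not cover a full 2048-code-point
+ document.
+
+ ```python
+ text = "Grüße aus dem OSCAR-Korpus"
+ input_ids = list(text.encode("utf-8"))[:2048]  # one id per byte, values 0-255
+ print(len(text), len(input_ids))               # code points vs. byte ids
+ ```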