Commit 8d258c6
Parent(s): 38f87b7
Update README.md

README.md CHANGED
@@ -66,20 +66,63 @@ This dataset card aims to be a base template for new datasets. It has been gener
### Direct Use

To download the full dataset using the `datasets` library, you can do the following:

```python
from datasets import load_dataset

dataset = load_dataset("biglam/europeana_newspapers")
```

You can also access a subset based on language or decade ranges using the following function:

```python
from typing import List, Optional, Literal, Union

from huggingface_hub import hf_hub_url, list_repo_files

LanguageOption = Literal[
    "et",
    "pl",
    "sr",
    "ru",
    "sv",
    "no_language_found",
    "ji",
    "hr",
    "el",
    "uk",
    "fr",
    "fi",
    "de",
    "multi_language",
]


def get_files_for_lang_and_years(
    languages: Union[None, List[LanguageOption]] = None,
    min_year: Optional[int] = None,
    max_year: Optional[int] = None,
):
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]
    if languages:
        # Keep only files whose name contains one of the requested language codes
        parquet_files = [
            f for f in parquet_files if any(lang in f for lang in languages)
        ]
    filtered_files = [
        f
        for f in parquet_files
        if (min_year is None or min_year <= int(f.split("-")[1].split(".")[0]))
        and (max_year is None or int(f.split("-")[1].split(".")[0]) <= max_year)
    ]
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in filtered_files
    ]
```

This function takes a list of language codes and minimum and maximum values for the decades you want to include. You can use it to get the URLs for the files you want to download from the Hub:

```python
ds = load_dataset("parquet", data_files=get_files_for_lang_and_years(["fr"]), num_proc=4)
```
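The year filter in `get_files_for_lang_and_years` parses the decade out of each parquet filename. Assuming the files follow a `<lang>-<decade>.parquet` naming pattern (an assumption inferred from the split logic, and the example filenames below are hypothetical), the parsing works like this:

```python
# Hypothetical filenames illustrating the assumed "<lang>-<decade>.parquet" pattern
example_files = ["fr-1850.parquet", "fr-1860.parquet", "uk-1900.parquet"]


def parse_decade(filename: str) -> int:
    # Same parsing as in get_files_for_lang_and_years:
    # take the segment after "-" and before ".parquet"
    return int(filename.split("-")[1].split(".")[0])


decades = [parse_decade(f) for f in example_files]
print(decades)  # [1850, 1860, 1900]
```

Any decade falling outside the `min_year`/`max_year` bounds is dropped before the URLs are built.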