url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.83B) | node_id (stringlengths 18-32) | number (int64 1-6.09k) | title (stringlengths 1-290) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | milestone (dict) | comments (int64 0-54) | created_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | closed_at (stringlengths 20-20 ⌀) | active_lock_reason (null) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6028/comments | https://api.github.com/repos/huggingface/datasets/issues/6028/events | https://github.com/huggingface/datasets/pull/6028 | 1,803,294,981 | PR_kwDODunzps5Vb3LJ | 6,028 | Use new hffs | [] | closed | false | null | 13 | 2023-07-13T15:41:44Z | 2023-07-17T17:09:39Z | 2023-07-17T17:01:00Z | null | Thanks to @janineguo's work in https://github.com/huggingface/datasets/pull/5919, which was needed to support HfFileSystem.
Switching to `HfFileSystem` will make it easier to implement optimizations in data files resolution.
## Implementation details
I replaced all the `from_hf_repo` and `from_local_or_remote` calls in `data_files.py` with a single new `from_patterns`, which works for any fsspec path, including `hf://` paths, `https://` URLs and local paths. This simplifies the codebase, since there is no longer any duplicated logic for data files resolution.
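For context, here is a minimal sketch of the kind of fsspec-based resolution that `from_patterns` builds on; `HfFileSystem` is provided by `huggingface_hub`, and the repository name below is a placeholder:

```python
# Minimal sketch (placeholder repo name): globbing data files in a Hub dataset repo
# through the fsspec-compatible HfFileSystem, the same way a local glob would work.
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
resolved = fs.glob("datasets/username/my_dataset/**/*.parquet")
print(resolved)
```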
I added `_prepare_path_and_storage_options`, which returns the right `storage_options` to use for a given path and `DownloadConfig`. This is the only place where the logic depends on the type of filesystem being used.
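To illustrate the idea only (this is not the actual code in `data_files.py`), a helper like this could map a path to the right `storage_options`; the `DownloadConfig` attribute names used here (`token`, `storage_options`) are assumptions for the sketch:

```python
# Hypothetical sketch, not the real implementation: choose storage_options per path type.
from typing import Any, Dict, Optional, Tuple


def _prepare_path_and_storage_options(
    urlpath: str, download_config: Optional[Any] = None
) -> Tuple[str, Dict[str, Any]]:
    storage_options: Dict[str, Any] = {}
    if urlpath.startswith("hf://"):
        # hf:// paths may need the user's token to resolve private repos (assumed attribute)
        if download_config is not None and getattr(download_config, "token", None):
            storage_options = {"token": download_config.token}
    elif urlpath.startswith(("http://", "https://")):
        # plain HTTP(S) URLs may carry extra fsspec options from the config (assumed attribute)
        if download_config is not None:
            storage_options = dict(getattr(download_config, "storage_options", {}) or {})
    # local paths need no storage_options
    return urlpath, storage_options
```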
I also removed the `get_metadata_data_files_list` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled through a common interface.
## New features
`hf://` paths are now supported in `data_files`.
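As a usage example (the repository name is a placeholder), an `hf://` path can now be passed directly:

```python
# Sketch: loading a CSV file from the Hub via an hf:// path in data_files (placeholder repo).
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="hf://datasets/username/my_dataset/data/train.csv",
)
print(ds["train"][0])
```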
## Breaking changes
`DataFilesList` and `DataFilesDict`:
- use `str` paths instead of `Union[Path, Url]`
- require Windows paths to be passed in POSIX form (see the sketch below)
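For example, a Windows path can be converted to the required POSIX form with `pathlib` (the path below is only an illustration):

```python
# Sketch: converting a Windows-style path to POSIX form before passing it as a data file.
from pathlib import PureWindowsPath

posix_path = PureWindowsPath(r"C:\data\train.csv").as_posix()
print(posix_path)  # "C:/data/train.csv"
```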
close https://github.com/huggingface/datasets/issues/6017 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6028/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6028/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6028.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6028",
"merged_at": "2023-07-17T17:01:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6028.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6028"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006665 / 0.011353 (-0.004688) | 0.004376 / 0.011008 (-0.006633) | 0.085529 / 0.038508 (0.047021) | 0.076372 / 0.023109 (0.053263) | 0.310019 / 0.275898 (0.034121) | 0.341404 / 0.323480 (0.017924) | 0.005666 / 0.007986 (-0.002320) | 0.003763 / 0.004328 (-0.000566) | 0.064678 / 0.004250 (0.060427) | 0.059283 / 0.037052 (0.022231) | 0.316194 / 0.258489 (0.057704) | 0.349397 / 0.293841 (0.055557) | 0.031199 / 0.128546 (-0.097347) | 0.008724 / 0.075646 (-0.066923) | 0.300236 / 0.419271 (-0.119035) | 0.068872 / 0.043533 (0.025339) | 0.308521 / 0.255139 (0.053382) | 0.331292 / 0.283200 (0.048092) | 0.028236 / 0.141683 (-0.113447) | 1.501365 / 1.452155 (0.049211) | 1.554334 / 1.492716 (0.061618) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238291 / 0.018006 (0.220285) | 0.565069 / 0.000490 (0.564580) | 0.001626 / 0.000200 (0.001426) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029777 / 0.037411 (-0.007634) | 0.082873 / 0.014526 (0.068347) | 0.099619 / 0.176557 (-0.076937) | 0.156572 / 0.737135 (-0.580563) | 0.099887 / 0.296338 (-0.196452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401017 / 0.215209 (0.185808) | 3.827192 / 2.077655 (1.749537) | 1.861554 / 1.504120 (0.357434) | 1.699869 / 1.541195 (0.158674) | 1.720043 / 1.468490 
(0.251553) | 0.486757 / 4.584777 (-4.098020) | 3.638125 / 3.745712 (-0.107587) | 5.844959 / 5.269862 (0.575097) | 3.454901 / 4.565676 (-1.110775) | 0.057650 / 0.424275 (-0.366625) | 0.007341 / 0.007607 (-0.000266) | 0.462698 / 0.226044 (0.236654) | 4.633472 / 2.268929 (2.364544) | 2.287607 / 55.444624 (-53.157017) | 2.057318 / 6.876477 (-4.819159) | 2.203657 / 2.142072 (0.061584) | 0.598136 / 4.805227 (-4.207091) | 0.134012 / 6.500664 (-6.366653) | 0.060824 / 0.075469 (-0.014645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277752 / 1.841788 (-0.564036) | 20.013398 / 8.074308 (11.939089) | 14.372993 / 10.191392 (4.181601) | 0.169991 / 0.680424 (-0.510433) | 0.018344 / 0.534201 (-0.515857) | 0.396985 / 0.579283 (-0.182299) | 0.416289 / 0.434364 (-0.018075) | 0.458658 / 0.540337 (-0.081680) | 0.692980 / 1.386936 (-0.693956) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006689 / 0.011353 (-0.004664) | 0.004393 / 0.011008 (-0.006615) | 0.064069 / 0.038508 (0.025561) | 0.080717 / 0.023109 (0.057607) | 0.370090 / 0.275898 (0.094191) | 0.400432 / 0.323480 (0.076952) | 0.005613 / 0.007986 (-0.002372) | 0.003641 / 0.004328 (-0.000687) | 0.064771 / 0.004250 (0.060520) | 0.057555 / 0.037052 (0.020502) | 0.392156 / 0.258489 (0.133667) | 0.409842 / 0.293841 (0.116001) | 0.031500 / 0.128546 (-0.097047) | 0.008786 / 0.075646 (-0.066860) | 0.070342 / 0.419271 (-0.348929) | 0.048646 / 0.043533 (0.005113) | 0.360914 / 0.255139 (0.105775) | 0.387626 / 0.283200 (0.104426) | 0.022787 / 0.141683 (-0.118896) | 1.508915 / 1.452155 (0.056761) | 1.539719 / 1.492716 (0.047002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257985 / 0.018006 (0.239979) | 0.550990 / 0.000490 (0.550501) | 0.000407 / 0.000200 (0.000207) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030183 / 0.037411 (-0.007228) | 0.086882 / 0.014526 (0.072356) | 0.102382 / 0.176557 (-0.074175) | 0.154745 / 0.737135 (-0.582390) | 0.104008 / 0.296338 (-0.192331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426284 / 0.215209 (0.211075) | 4.240812 / 2.077655 (2.163158) | 2.261240 / 1.504120 (0.757120) | 2.085905 / 1.541195 (0.544710) | 2.160374 / 1.468490 (0.691883) | 0.481126 / 4.584777 (-4.103651) | 3.516234 / 3.745712 (-0.229478) | 3.325322 / 5.269862 (-1.944539) | 2.043307 / 4.565676 (-2.522369) | 0.056663 / 0.424275 (-0.367612) | 0.007786 / 0.007607 (0.000179) | 0.497614 / 0.226044 (0.271570) | 4.974529 / 2.268929 (2.705600) | 2.700018 / 55.444624 (-52.744606) | 2.393778 / 6.876477 (-4.482699) | 2.628202 / 2.142072 (0.486130) | 0.594316 / 4.805227 (-4.210911) | 0.147092 / 6.500664 (-6.353572) | 0.062207 / 0.075469 (-0.013262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.315676 / 1.841788 (-0.526112) | 20.749251 / 8.074308 (12.674943) | 14.371553 / 10.191392 (4.180160) | 0.170249 / 0.680424 (-0.510175) | 0.018478 / 0.534201 (-0.515722) | 0.395710 / 0.579283 (-0.183573) | 0.409706 / 0.434364 (-0.024658) | 0.463454 / 0.540337 (-0.076884) | 0.615657 / 1.386936 (-0.771279) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5a752d8e8ca0a6ed118b024ba03c1b4a2881177 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007224 / 0.011353 (-0.004129) | 0.004506 / 0.011008 (-0.006503) | 0.096729 / 0.038508 (0.058221) | 0.082394 / 0.023109 (0.059284) | 0.390954 / 0.275898 (0.115056) | 0.416647 / 0.323480 (0.093167) | 0.005894 / 0.007986 (-0.002092) | 0.003756 / 0.004328 (-0.000572) | 0.075800 / 0.004250 (0.071549) | 0.062683 / 0.037052 (0.025631) | 0.398959 / 0.258489 (0.140470) | 0.436624 / 0.293841 (0.142783) | 0.034650 / 0.128546 (-0.093896) | 0.009655 / 0.075646 (-0.065991) | 0.315761 / 0.419271 (-0.103511) | 0.060957 / 0.043533 (0.017424) | 0.385649 / 0.255139 (0.130510) | 0.394022 / 0.283200 (0.110822) | 0.024601 / 0.141683 (-0.117082) | 1.729586 / 1.452155 (0.277431) | 1.724153 / 1.492716 (0.231437) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207070 / 0.018006 (0.189063) | 0.466502 / 0.000490 (0.466012) | 0.010739 / 0.000200 (0.010540) | 0.000214 / 0.000054 (0.000160) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031633 / 0.037411 (-0.005779) | 0.095345 / 0.014526 (0.080819) | 0.105399 / 0.176557 (-0.071157) | 0.174173 / 0.737135 (-0.562962) | 0.104207 / 0.296338 (-0.192132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435312 / 0.215209 (0.220103) | 4.265600 / 2.077655 (2.187946) | 2.056500 / 1.504120 (0.552380) | 1.848023 / 1.541195 (0.306828) | 1.946156 / 1.468490 
(0.477666) | 0.557788 / 4.584777 (-4.026989) | 4.070289 / 3.745712 (0.324577) | 3.608027 / 5.269862 (-1.661835) | 2.214556 / 4.565676 (-2.351121) | 0.062623 / 0.424275 (-0.361652) | 0.008083 / 0.007607 (0.000476) | 0.491782 / 0.226044 (0.265738) | 4.989963 / 2.268929 (2.721035) | 2.575867 / 55.444624 (-52.868757) | 2.208045 / 6.876477 (-4.668431) | 2.364184 / 2.142072 (0.222112) | 0.633925 / 4.805227 (-4.171302) | 0.144323 / 6.500664 (-6.356341) | 0.067505 / 0.075469 (-0.007965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.467219 / 1.841788 (-0.374569) | 22.334967 / 8.074308 (14.260659) | 15.715747 / 10.191392 (5.524355) | 0.175443 / 0.680424 (-0.504980) | 0.026165 / 0.534201 (-0.508036) | 0.490675 / 0.579283 (-0.088608) | 0.509211 / 0.434364 (0.074847) | 0.586303 / 0.540337 (0.045965) | 0.785052 / 1.386936 (-0.601884) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007893 / 0.011353 (-0.003460) | 0.004577 / 0.011008 (-0.006431) | 0.075781 / 0.038508 (0.037273) | 0.095492 / 0.023109 (0.072382) | 0.433259 / 0.275898 (0.157361) | 0.469386 / 0.323480 (0.145906) | 0.006317 / 0.007986 (-0.001669) | 0.003708 / 0.004328 (-0.000621) | 0.074417 / 0.004250 (0.070167) | 0.068605 / 0.037052 (0.031552) | 0.448701 / 0.258489 (0.190212) | 0.469131 / 0.293841 (0.175290) | 0.036647 / 0.128546 (-0.091899) | 0.010077 / 0.075646 (-0.065570) | 0.082457 / 0.419271 (-0.336815) | 0.063255 / 0.043533 (0.019722) | 0.428144 / 0.255139 (0.173005) | 0.451872 / 0.283200 (0.168672) | 0.033953 / 0.141683 (-0.107730) | 1.781752 / 1.452155 (0.329597) | 1.869014 / 1.492716 (0.376297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223596 / 0.018006 (0.205590) | 0.470307 / 0.000490 (0.469818) | 0.005059 / 0.000200 (0.004859) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038804 / 0.037411 (0.001393) | 0.117879 / 0.014526 (0.103353) | 0.140701 / 0.176557 (-0.035855) | 0.194672 / 0.737135 (-0.542463) | 0.132806 / 0.296338 (-0.163533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510109 / 0.215209 (0.294900) | 4.729457 / 2.077655 (2.651803) | 2.512113 / 1.504120 (1.007993) | 2.302553 / 1.541195 (0.761358) | 2.420462 / 1.468490 (0.951972) | 0.531682 / 4.584777 (-4.053095) | 4.061208 / 3.745712 (0.315496) | 3.588542 / 5.269862 (-1.681320) | 2.203187 / 4.565676 (-2.362489) | 0.065791 / 0.424275 (-0.358484) | 0.008839 / 0.007607 (0.001232) | 0.562041 / 0.226044 (0.335997) | 5.702340 / 2.268929 (3.433412) | 3.127609 / 55.444624 (-52.317015) | 2.823060 / 6.876477 (-4.053417) | 2.898675 / 2.142072 (0.756603) | 0.659589 / 4.805227 (-4.145638) | 0.148798 / 6.500664 (-6.351866) | 0.070787 / 0.075469 (-0.004682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.478317 / 1.841788 (-0.363471) | 21.995400 / 8.074308 (13.921092) | 16.770729 / 10.191392 (6.579337) | 0.226333 / 0.680424 (-0.454091) | 0.021835 / 0.534201 (-0.512366) | 0.460373 / 0.579283 (-0.118910) | 0.479494 / 0.434364 (0.045130) | 0.529470 / 0.540337 (-0.010868) | 0.718066 / 1.386936 (-0.668870) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9a717b8eb80b0e50b25818127f79a35e0866fb14 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007824 / 0.011353 (-0.003529) | 0.004601 / 0.011008 (-0.006407) | 0.100025 / 0.038508 (0.061517) | 0.096046 / 0.023109 (0.072936) | 0.376226 / 0.275898 (0.100328) | 0.410905 / 0.323480 (0.087425) | 0.006048 / 0.007986 (-0.001938) | 0.003817 / 0.004328 (-0.000511) | 0.076624 / 0.004250 (0.072374) | 0.066390 / 0.037052 (0.029338) | 0.380098 / 0.258489 (0.121609) | 0.413603 / 0.293841 (0.119762) | 0.036546 / 0.128546 (-0.092001) | 0.009881 / 0.075646 (-0.065765) | 0.344338 / 0.419271 (-0.074934) | 0.061882 / 0.043533 (0.018350) | 0.368568 / 0.255139 (0.113429) | 0.397133 / 0.283200 (0.113934) | 0.027255 / 0.141683 (-0.114428) | 1.795099 / 1.452155 (0.342945) | 1.852443 / 1.492716 (0.359727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247436 / 0.018006 (0.229430) | 0.494119 / 0.000490 (0.493629) | 0.004359 / 0.000200 (0.004159) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034765 / 0.037411 (-0.002647) | 0.104541 / 0.014526 (0.090015) | 0.113898 / 0.176557 (-0.062659) | 0.183634 / 0.737135 (-0.553501) | 0.116423 / 0.296338 (-0.179916) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458747 / 0.215209 (0.243538) | 4.555740 / 2.077655 (2.478085) | 2.217240 / 1.504120 (0.713121) | 2.039879 / 1.541195 (0.498684) | 2.088581 / 1.468490 
(0.620091) | 0.588063 / 4.584777 (-3.996714) | 4.238226 / 3.745712 (0.492514) | 4.768060 / 5.269862 (-0.501802) | 2.857117 / 4.565676 (-1.708560) | 0.068742 / 0.424275 (-0.355533) | 0.008667 / 0.007607 (0.001059) | 0.549294 / 0.226044 (0.323249) | 5.464635 / 2.268929 (3.195706) | 2.744435 / 55.444624 (-52.700189) | 2.347660 / 6.876477 (-4.528816) | 2.616816 / 2.142072 (0.474743) | 0.703701 / 4.805227 (-4.101526) | 0.159749 / 6.500664 (-6.340915) | 0.071990 / 0.075469 (-0.003479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.486599 / 1.841788 (-0.355188) | 22.745438 / 8.074308 (14.671130) | 16.822332 / 10.191392 (6.630940) | 0.184730 / 0.680424 (-0.495694) | 0.021267 / 0.534201 (-0.512934) | 0.467108 / 0.579283 (-0.112176) | 0.472674 / 0.434364 (0.038311) | 0.548094 / 0.540337 (0.007756) | 0.735885 / 1.386936 (-0.651051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007746 / 0.011353 (-0.003607) | 0.004585 / 0.011008 (-0.006423) | 0.076943 / 0.038508 (0.038435) | 0.087473 / 0.023109 (0.064363) | 0.480099 / 0.275898 (0.204201) | 0.495271 / 0.323480 (0.171791) | 0.006348 / 0.007986 (-0.001638) | 0.003902 / 0.004328 (-0.000426) | 0.077586 / 0.004250 (0.073335) | 0.066467 / 0.037052 (0.029415) | 0.468741 / 0.258489 (0.210252) | 0.506778 / 0.293841 (0.212937) | 0.036877 / 0.128546 (-0.091669) | 0.010102 / 0.075646 (-0.065545) | 0.084419 / 0.419271 (-0.334852) | 0.058721 / 0.043533 (0.015188) | 0.453633 / 0.255139 (0.198494) | 0.481171 / 0.283200 (0.197971) | 0.028716 / 0.141683 (-0.112967) | 1.853048 / 1.452155 (0.400893) | 1.885847 / 1.492716 (0.393130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.484481 / 0.000490 (0.483991) | 0.002951 / 0.000200 (0.002751) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037949 / 0.037411 (0.000538) | 0.108364 / 0.014526 (0.093838) | 0.119542 / 0.176557 (-0.057014) | 0.188542 / 0.737135 (-0.548593) | 0.122011 / 0.296338 (-0.174327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483135 / 0.215209 (0.267926) | 4.849715 / 2.077655 (2.772060) | 2.497736 / 1.504120 (0.993616) | 2.314243 / 1.541195 (0.773048) | 2.412739 / 1.468490 (0.944249) | 0.564137 / 4.584777 (-4.020639) | 4.242273 / 3.745712 (0.496561) | 6.337843 / 5.269862 (1.067982) | 3.923250 / 4.565676 (-0.642426) | 0.066464 / 0.424275 (-0.357811) | 0.009217 / 0.007607 (0.001610) | 0.575667 / 0.226044 (0.349623) | 5.746187 / 2.268929 (3.477258) | 3.069655 / 55.444624 (-52.374969) | 2.674798 / 6.876477 (-4.201679) | 2.956535 / 2.142072 (0.814463) | 0.701043 / 4.805227 (-4.104185) | 0.157241 / 6.500664 (-6.343423) | 0.073175 / 0.075469 (-0.002294) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609943 / 1.841788 (-0.231844) | 23.478594 / 8.074308 (15.404286) | 17.454437 / 10.191392 (7.263045) | 0.186422 / 0.680424 (-0.494002) | 0.021703 / 0.534201 (-0.512498) | 0.471704 / 0.579283 (-0.107579) | 0.480553 / 0.434364 (0.046189) | 0.552881 / 0.540337 (0.012544) | 0.722515 / 1.386936 (-0.664421) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84645f80049cd00d9e0d4908faf3c3203fdcf21d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007542 / 0.011353 (-0.003811) | 0.004692 / 0.011008 (-0.006316) | 0.099155 / 0.038508 (0.060647) | 0.089365 / 0.023109 (0.066256) | 0.370870 / 0.275898 (0.094972) | 0.422152 / 0.323480 (0.098673) | 0.006223 / 0.007986 (-0.001763) | 0.003852 / 0.004328 (-0.000476) | 0.075438 / 0.004250 (0.071188) | 0.065973 / 0.037052 (0.028921) | 0.381513 / 0.258489 (0.123024) | 0.416196 / 0.293841 (0.122355) | 0.035483 / 0.128546 (-0.093063) | 0.009884 / 0.075646 (-0.065762) | 0.341290 / 0.419271 (-0.077982) | 0.060546 / 0.043533 (0.017014) | 0.365101 / 0.255139 (0.109962) | 0.391058 / 0.283200 (0.107859) | 0.026325 / 0.141683 (-0.115358) | 1.815168 / 1.452155 (0.363013) | 1.834711 / 1.492716 (0.341994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222177 / 0.018006 (0.204171) | 0.501151 / 0.000490 (0.500662) | 0.010202 / 0.000200 (0.010002) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034043 / 0.037411 (-0.003368) | 0.097884 / 0.014526 (0.083358) | 0.114022 / 0.176557 (-0.062534) | 0.186200 / 0.737135 (-0.550935) | 0.115555 / 0.296338 (-0.180783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485857 / 0.215209 (0.270648) | 4.959263 / 2.077655 (2.881608) | 2.501085 / 1.504120 (0.996965) | 2.234660 / 1.541195 (0.693465) | 2.238585 / 1.468490 
(0.770095) | 0.645431 / 4.584777 (-3.939345) | 4.434311 / 3.745712 (0.688599) | 4.771491 / 5.269862 (-0.498371) | 2.778963 / 4.565676 (-1.786714) | 0.075615 / 0.424275 (-0.348660) | 0.009502 / 0.007607 (0.001895) | 0.546539 / 0.226044 (0.320495) | 5.464242 / 2.268929 (3.195314) | 2.894101 / 55.444624 (-52.550524) | 2.513761 / 6.876477 (-4.362715) | 2.719843 / 2.142072 (0.577770) | 0.678828 / 4.805227 (-4.126399) | 0.157839 / 6.500664 (-6.342825) | 0.071305 / 0.075469 (-0.004164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.496879 / 1.841788 (-0.344909) | 22.214452 / 8.074308 (14.140144) | 17.707541 / 10.191392 (7.516149) | 0.197008 / 0.680424 (-0.483416) | 0.024883 / 0.534201 (-0.509318) | 0.493611 / 0.579283 (-0.085672) | 0.500677 / 0.434364 (0.066313) | 0.569381 / 0.540337 (0.029044) | 0.773950 / 1.386936 (-0.612986) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007337 / 0.011353 (-0.004015) | 0.004572 / 0.011008 (-0.006436) | 0.091123 / 0.038508 (0.052615) | 0.079762 / 0.023109 (0.056652) | 0.450527 / 0.275898 (0.174629) | 0.525097 / 0.323480 (0.201617) | 0.005873 / 0.007986 (-0.002112) | 0.003797 / 0.004328 (-0.000532) | 0.076259 / 0.004250 (0.072009) | 0.062745 / 0.037052 (0.025692) | 0.465553 / 0.258489 (0.207064) | 0.546026 / 0.293841 (0.252186) | 0.035638 / 0.128546 (-0.092909) | 0.010086 / 0.075646 (-0.065560) | 0.109269 / 0.419271 (-0.310002) | 0.056765 / 0.043533 (0.013233) | 0.440887 / 0.255139 (0.185748) | 0.513325 / 0.283200 (0.230125) | 0.027206 / 0.141683 (-0.114476) | 1.863564 / 1.452155 (0.411409) | 1.918206 / 1.492716 (0.425490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266479 / 0.018006 (0.248473) | 0.487971 / 0.000490 (0.487481) | 0.012246 / 0.000200 (0.012046) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035281 / 0.037411 (-0.002130) | 0.102991 / 0.014526 (0.088465) | 0.114638 / 0.176557 (-0.061919) | 0.184117 / 0.737135 (-0.553018) | 0.117943 / 0.296338 (-0.178396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.497897 / 0.215209 (0.282688) | 4.973806 / 2.077655 (2.896151) | 2.596146 / 1.504120 (1.092026) | 2.419694 / 1.541195 (0.878499) | 2.525784 / 1.468490 (1.057294) | 0.568021 / 4.584777 (-4.016756) | 4.296431 / 3.745712 (0.550719) | 3.690682 / 5.269862 (-1.579179) | 2.345965 / 4.565676 (-2.219712) | 0.066859 / 0.424275 (-0.357416) | 0.009093 / 0.007607 (0.001486) | 0.582616 / 0.226044 (0.356571) | 5.826528 / 2.268929 (3.557600) | 3.253222 / 55.444624 (-52.191403) | 2.798447 / 6.876477 (-4.078030) | 3.054609 / 2.142072 (0.912537) | 0.678816 / 4.805227 (-4.126411) | 0.157966 / 6.500664 (-6.342698) | 0.073797 / 0.075469 (-0.001672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599480 / 1.841788 (-0.242308) | 23.249738 / 8.074308 (15.175430) | 16.965406 / 10.191392 (6.774014) | 0.171390 / 0.680424 (-0.509034) | 0.021810 / 0.534201 (-0.512391) | 0.483339 / 0.579283 (-0.095944) | 0.496615 / 0.434364 (0.062251) | 0.583786 / 0.540337 (0.043448) | 0.741699 / 1.386936 (-0.645237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7935cd2e564f5d1c66ed1acf731703724ba7a287 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006054 / 0.011353 (-0.005299) | 0.003706 / 0.011008 (-0.007302) | 0.080060 / 0.038508 (0.041552) | 0.061479 / 0.023109 (0.038370) | 0.327981 / 0.275898 (0.052083) | 0.356930 / 0.323480 (0.033450) | 0.004671 / 0.007986 (-0.003315) | 0.002901 / 0.004328 (-0.001428) | 0.062425 / 0.004250 (0.058174) | 0.046310 / 0.037052 (0.009258) | 0.323657 / 0.258489 (0.065168) | 0.370130 / 0.293841 (0.076289) | 0.027151 / 0.128546 (-0.101395) | 0.007850 / 0.075646 (-0.067797) | 0.262300 / 0.419271 (-0.156971) | 0.045456 / 0.043533 (0.001923) | 0.325569 / 0.255139 (0.070430) | 0.352962 / 0.283200 (0.069762) | 0.020156 / 0.141683 (-0.121527) | 1.429404 / 1.452155 (-0.022750) | 1.615032 / 1.492716 (0.122316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187309 / 0.018006 (0.169303) | 0.428848 / 0.000490 (0.428358) | 0.003599 / 0.000200 (0.003399) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023260 / 0.037411 (-0.014151) | 0.072467 / 0.014526 (0.057941) | 0.082398 / 0.176557 (-0.094159) | 0.142573 / 0.737135 (-0.594562) | 0.082570 / 0.296338 (-0.213768) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426503 / 0.215209 (0.211294) | 4.267875 / 2.077655 (2.190220) | 2.189762 / 1.504120 (0.685642) | 2.027992 / 1.541195 (0.486798) | 2.053211 / 1.468490 
(0.584721) | 0.503850 / 4.584777 (-4.080927) | 3.086444 / 3.745712 (-0.659268) | 3.319492 / 5.269862 (-1.950370) | 2.070714 / 4.565676 (-2.494962) | 0.057591 / 0.424275 (-0.366684) | 0.006407 / 0.007607 (-0.001200) | 0.501145 / 0.226044 (0.275100) | 5.017753 / 2.268929 (2.748825) | 2.643145 / 55.444624 (-52.801479) | 2.327440 / 6.876477 (-4.549037) | 2.460250 / 2.142072 (0.318178) | 0.589397 / 4.805227 (-4.215830) | 0.124948 / 6.500664 (-6.375716) | 0.060450 / 0.075469 (-0.015020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279870 / 1.841788 (-0.561918) | 18.115908 / 8.074308 (10.041600) | 13.570032 / 10.191392 (3.378640) | 0.132981 / 0.680424 (-0.547442) | 0.016942 / 0.534201 (-0.517259) | 0.333591 / 0.579283 (-0.245692) | 0.358844 / 0.434364 (-0.075520) | 0.395748 / 0.540337 (-0.144590) | 0.546213 / 1.386936 (-0.840723) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006062 / 0.011353 (-0.005291) | 0.003673 / 0.011008 (-0.007336) | 0.064726 / 0.038508 (0.026218) | 0.061854 / 0.023109 (0.038745) | 0.385343 / 0.275898 (0.109445) | 0.441284 / 0.323480 (0.117805) | 0.004830 / 0.007986 (-0.003156) | 0.002909 / 0.004328 (-0.001420) | 0.063874 / 0.004250 (0.059624) | 0.049331 / 0.037052 (0.012278) | 0.418484 / 0.258489 (0.159995) | 0.451397 / 0.293841 (0.157556) | 0.027665 / 0.128546 (-0.100881) | 0.008088 / 0.075646 (-0.067558) | 0.069625 / 0.419271 (-0.349646) | 0.043437 / 0.043533 (-0.000095) | 0.359789 / 0.255139 (0.104650) | 0.430206 / 0.283200 (0.147007) | 0.022308 / 0.141683 (-0.119375) | 1.461030 / 1.452155 (0.008875) | 1.513683 / 1.492716 (0.020966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230958 / 0.018006 (0.212952) | 0.417553 / 0.000490 (0.417063) | 0.000802 / 0.000200 (0.000602) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025421 / 0.037411 (-0.011991) | 0.077156 / 0.014526 (0.062630) | 0.087533 / 0.176557 (-0.089024) | 0.138048 / 0.737135 (-0.599087) | 0.089358 / 0.296338 (-0.206981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439172 / 0.215209 (0.223963) | 4.409509 / 2.077655 (2.331854) | 2.491270 / 1.504120 (0.987150) | 2.308446 / 1.541195 (0.767252) | 2.378440 / 1.468490 (0.909950) | 0.499834 / 4.584777 (-4.084943) | 3.083168 / 3.745712 (-0.662544) | 2.867543 / 5.269862 (-2.402318) | 1.876354 / 4.565676 (-2.689323) | 0.057092 / 0.424275 (-0.367183) | 0.006955 / 0.007607 (-0.000653) | 0.513799 / 0.226044 (0.287754) | 5.126660 / 2.268929 (2.857731) | 2.917348 / 55.444624 (-52.527277) | 2.508035 / 6.876477 (-4.368441) | 2.698089 / 2.142072 (0.556016) | 0.586828 / 4.805227 (-4.218399) | 0.124740 / 6.500664 (-6.375924) | 0.062276 / 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291624 / 1.841788 (-0.550164) | 18.199968 / 8.074308 (10.125660) | 13.888139 / 10.191392 (3.696747) | 0.162955 / 0.680424 (-0.517469) | 0.017343 / 0.534201 (-0.516858) | 0.334683 / 0.579283 (-0.244600) | 0.352708 / 0.434364 (-0.081656) | 0.400629 / 0.540337 (-0.139708) | 0.539497 / 1.386936 (-0.847439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7976db7fe22c6b93a869488d07b8137ea6a0db4 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007500 / 0.011353 (-0.003853) | 0.004498 / 0.011008 (-0.006510) | 0.100239 / 0.038508 (0.061731) | 0.083424 / 0.023109 (0.060315) | 0.366664 / 0.275898 (0.090766) | 0.406641 / 0.323480 (0.083161) | 0.004577 / 0.007986 (-0.003409) | 0.004809 / 0.004328 (0.000480) | 0.076898 / 0.004250 (0.072647) | 0.064021 / 0.037052 (0.026969) | 0.375836 / 0.258489 (0.117347) | 0.413008 / 0.293841 (0.119167) | 0.036010 / 0.128546 (-0.092537) | 0.009655 / 0.075646 (-0.065991) | 0.342595 / 0.419271 (-0.076677) | 0.061846 / 0.043533 (0.018313) | 0.376543 / 0.255139 (0.121404) | 0.395858 / 0.283200 (0.112659) | 0.026792 / 0.141683 (-0.114891) | 1.775569 / 1.452155 (0.323414) | 1.865077 / 1.492716 (0.372360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221521 / 0.018006 (0.203514) | 0.474604 / 0.000490 (0.474114) | 0.004354 / 0.000200 (0.004154) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032947 / 0.037411 (-0.004464) | 0.100454 / 0.014526 (0.085928) | 0.111955 / 0.176557 (-0.064602) | 0.179752 / 0.737135 (-0.557383) | 0.114282 / 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458261 / 0.215209 (0.243052) | 4.563536 / 2.077655 (2.485881) | 2.231928 / 1.504120 (0.727808) | 2.036751 / 1.541195 (0.495556) | 2.170413 / 1.468490 
(0.701923) | 0.570825 / 4.584777 (-4.013952) | 4.505762 / 3.745712 (0.760050) | 5.033461 / 5.269862 (-0.236401) | 2.704989 / 4.565676 (-1.860687) | 0.067011 / 0.424275 (-0.357264) | 0.008568 / 0.007607 (0.000961) | 0.545151 / 0.226044 (0.319106) | 5.438984 / 2.268929 (3.170055) | 2.771818 / 55.444624 (-52.672806) | 2.393082 / 6.876477 (-4.483395) | 2.467173 / 2.142072 (0.325101) | 0.678849 / 4.805227 (-4.126379) | 0.160480 / 6.500664 (-6.340184) | 0.073681 / 0.075469 (-0.001788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532272 / 1.841788 (-0.309516) | 22.548741 / 8.074308 (14.474433) | 17.091044 / 10.191392 (6.899652) | 0.172100 / 0.680424 (-0.508324) | 0.022220 / 0.534201 (-0.511981) | 0.467871 / 0.579283 (-0.111412) | 0.491135 / 0.434364 (0.056771) | 0.548433 / 0.540337 (0.008096) | 0.733340 / 1.386936 (-0.653596) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007593 / 0.011353 (-0.003760) | 0.004656 / 0.011008 (-0.006352) | 0.076940 / 0.038508 (0.038431) | 0.085183 / 0.023109 (0.062073) | 0.447178 / 0.275898 (0.171280) | 0.469545 / 0.323480 (0.146065) | 0.006023 / 0.007986 (-0.001962) | 0.003808 / 0.004328 (-0.000520) | 0.076767 / 0.004250 (0.072517) | 0.065713 / 0.037052 (0.028661) | 0.445573 / 0.258489 (0.187084) | 0.481689 / 0.293841 (0.187848) | 0.036893 / 0.128546 (-0.091654) | 0.009976 / 0.075646 (-0.065670) | 0.084443 / 0.419271 (-0.334829) | 0.058829 / 0.043533 (0.015297) | 0.429291 / 0.255139 (0.174152) | 0.454016 / 0.283200 (0.170816) | 0.027289 / 0.141683 (-0.114394) | 1.806786 / 1.452155 (0.354632) | 1.887680 / 1.492716 (0.394964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241012 / 0.018006 (0.223006) | 0.470629 / 0.000490 (0.470139) | 0.003213 / 0.000200 (0.003013) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036896 / 0.037411 (-0.000515) | 0.106932 / 0.014526 (0.092406) | 0.120333 / 0.176557 (-0.056223) | 0.186271 / 0.737135 (-0.550865) | 0.121581 / 0.296338 (-0.174758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507782 / 0.215209 (0.292573) | 5.062932 / 2.077655 (2.985278) | 2.689539 / 1.504120 (1.185419) | 2.482978 / 1.541195 (0.941784) | 2.561320 / 1.468490 (1.092830) | 0.570664 / 4.584777 (-4.014113) | 4.346051 / 3.745712 (0.600339) | 6.479374 / 5.269862 (1.209513) | 4.096483 / 4.565676 (-0.469194) | 0.067564 / 0.424275 (-0.356711) | 0.009147 / 0.007607 (0.001540) | 0.596059 / 0.226044 (0.370015) | 5.963223 / 2.268929 (3.694295) | 3.201039 / 55.444624 (-52.243585) | 2.816581 / 6.876477 (-4.059896) | 3.047821 / 2.142072 (0.905748) | 0.687749 / 4.805227 (-4.117478) | 0.158174 / 6.500664 (-6.342490) | 0.073329 / 0.075469 (-0.002140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.601346 / 1.841788 (-0.240441) | 23.712210 / 8.074308 (15.637902) | 16.567272 / 10.191392 (6.375880) | 0.224745 / 0.680424 (-0.455679) | 0.021662 / 0.534201 (-0.512539) | 0.471427 / 0.579283 (-0.107856) | 0.498751 / 0.434364 (0.064387) | 0.572047 / 0.540337 (0.031710) | 0.821868 / 1.386936 (-0.565068) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#34d0c9027c750adc89f3d04a6bf2e9cb95915da4 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006371 / 0.011353 (-0.004981) | 0.003749 / 0.011008 (-0.007259) | 0.084155 / 0.038508 (0.045647) | 0.072450 / 0.023109 (0.049340) | 0.308002 / 0.275898 (0.032104) | 0.340471 / 0.323480 (0.016991) | 0.005054 / 0.007986 (-0.002931) | 0.003176 / 0.004328 (-0.001152) | 0.064867 / 0.004250 (0.060616) | 0.054305 / 0.037052 (0.017252) | 0.321047 / 0.258489 (0.062558) | 0.345999 / 0.293841 (0.052158) | 0.030507 / 0.128546 (-0.098039) | 0.008299 / 0.075646 (-0.067347) | 0.287682 / 0.419271 (-0.131590) | 0.052048 / 0.043533 (0.008515) | 0.308322 / 0.255139 (0.053183) | 0.333220 / 0.283200 (0.050020) | 0.022698 / 0.141683 (-0.118985) | 1.474033 / 1.452155 (0.021879) | 1.544790 / 1.492716 (0.052074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200612 / 0.018006 (0.182606) | 0.450934 / 0.000490 (0.450445) | 0.005383 / 0.000200 (0.005183) | 0.000200 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027759 / 0.037411 (-0.009652) | 0.080935 / 0.014526 (0.066409) | 0.093041 / 0.176557 (-0.083516) | 0.148643 / 0.737135 (-0.588492) | 0.093463 / 0.296338 (-0.202876) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381653 / 0.215209 (0.166444) | 3.810699 / 2.077655 (1.733044) | 1.866858 / 1.504120 (0.362738) | 1.716985 / 1.541195 (0.175790) | 1.788071 / 1.468490 
(0.319581) | 0.481130 / 4.584777 (-4.103647) | 3.529798 / 3.745712 (-0.215914) | 3.982037 / 5.269862 (-1.287824) | 2.324866 / 4.565676 (-2.240811) | 0.056767 / 0.424275 (-0.367508) | 0.007306 / 0.007607 (-0.000301) | 0.459472 / 0.226044 (0.233428) | 4.602808 / 2.268929 (2.333879) | 2.332014 / 55.444624 (-53.112610) | 2.044858 / 6.876477 (-4.831619) | 2.204165 / 2.142072 (0.062093) | 0.577946 / 4.805227 (-4.227281) | 0.130900 / 6.500664 (-6.369764) | 0.059054 / 0.075469 (-0.016415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245211 / 1.841788 (-0.596576) | 19.176397 / 8.074308 (11.102089) | 13.995280 / 10.191392 (3.803888) | 0.171743 / 0.680424 (-0.508681) | 0.018038 / 0.534201 (-0.516163) | 0.392338 / 0.579283 (-0.186945) | 0.419370 / 0.434364 (-0.014994) | 0.477829 / 0.540337 (-0.062508) | 0.677409 / 1.386936 (-0.709527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006513 / 0.011353 (-0.004840) | 0.003984 / 0.011008 (-0.007024) | 0.064516 / 0.038508 (0.026008) | 0.070504 / 0.023109 (0.047395) | 0.384509 / 0.275898 (0.108611) | 0.410564 / 0.323480 (0.087084) | 0.005310 / 0.007986 (-0.002675) | 0.003268 / 0.004328 (-0.001061) | 0.064684 / 0.004250 (0.060433) | 0.055367 / 0.037052 (0.018315) | 0.399108 / 0.258489 (0.140619) | 0.422740 / 0.293841 (0.128900) | 0.031624 / 0.128546 (-0.096922) | 0.008617 / 0.075646 (-0.067030) | 0.070929 / 0.419271 (-0.348342) | 0.049146 / 0.043533 (0.005613) | 0.385492 / 0.255139 (0.130353) | 0.407434 / 0.283200 (0.124234) | 0.021972 / 0.141683 (-0.119711) | 1.496135 / 1.452155 (0.043980) | 1.533739 / 1.492716 (0.041023) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226218 / 0.018006 (0.208211) | 0.443176 / 0.000490 (0.442686) | 0.000376 / 0.000200 (0.000176) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030315 / 0.037411 (-0.007097) | 0.086416 / 0.014526 (0.071890) | 0.097725 / 0.176557 (-0.078831) | 0.150407 / 0.737135 (-0.586728) | 0.099914 / 0.296338 (-0.196424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409807 / 0.215209 (0.194598) | 4.099086 / 2.077655 (2.021431) | 2.103160 / 1.504120 (0.599040) | 1.927927 / 1.541195 (0.386733) | 1.977751 / 1.468490 (0.509261) | 0.476995 / 4.584777 (-4.107781) | 3.521835 / 3.745712 (-0.223877) | 3.237695 / 5.269862 (-2.032167) | 1.995953 / 4.565676 (-2.569724) | 0.056208 / 0.424275 (-0.368068) | 0.007660 / 0.007607 (0.000053) | 0.483537 / 0.226044 (0.257492) | 4.833974 / 2.268929 (2.565046) | 2.589115 / 55.444624 (-52.855510) | 2.228076 / 6.876477 (-4.648401) | 2.395271 / 2.142072 (0.253198) | 0.577534 / 4.805227 (-4.227694) | 0.131432 / 6.500664 (-6.369232) | 0.060999 / 0.075469 (-0.014471) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356043 / 1.841788 (-0.485745) | 19.470401 / 8.074308 (11.396093) | 14.091266 / 10.191392 (3.899874) | 0.166809 / 0.680424 (-0.513615) | 0.018782 / 0.534201 (-0.515419) | 0.394916 / 0.579283 (-0.184367) | 0.411378 / 0.434364 (-0.022986) | 0.466886 / 0.540337 (-0.073451) | 0.617369 / 1.386936 (-0.769567) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#601ae6c7baff33a600fd10b12940966024fd2221 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007590 / 0.011353 (-0.003762) | 0.004068 / 0.011008 (-0.006941) | 0.105479 / 0.038508 (0.066971) | 0.085614 / 0.023109 (0.062505) | 0.384325 / 0.275898 (0.108427) | 0.467867 / 0.323480 (0.144387) | 0.004652 / 0.007986 (-0.003333) | 0.005445 / 0.004328 (0.001117) | 0.079604 / 0.004250 (0.075353) | 0.066031 / 0.037052 (0.028978) | 0.426184 / 0.258489 (0.167695) | 0.480712 / 0.293841 (0.186871) | 0.037837 / 0.128546 (-0.090709) | 0.009765 / 0.075646 (-0.065882) | 0.351316 / 0.419271 (-0.067955) | 0.063634 / 0.043533 (0.020101) | 0.420297 / 0.255139 (0.165158) | 0.449169 / 0.283200 (0.165969) | 0.030947 / 0.141683 (-0.110736) | 1.840184 / 1.452155 (0.388029) | 1.934074 / 1.492716 (0.441357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223483 / 0.018006 (0.205477) | 0.521086 / 0.000490 (0.520596) | 0.000379 / 0.000200 (0.000179) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032011 / 0.037411 (-0.005400) | 0.101474 / 0.014526 (0.086948) | 0.108652 / 0.176557 (-0.067904) | 0.173340 / 0.737135 (-0.563796) | 0.114186 / 0.296338 (-0.182153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478020 / 0.215209 (0.262811) | 4.645400 / 2.077655 (2.567746) | 2.590763 / 1.504120 (1.086643) | 2.383002 / 1.541195 (0.841807) | 2.482550 / 1.468490 
(1.014060) | 0.572417 / 4.584777 (-4.012360) | 4.233436 / 3.745712 (0.487724) | 4.858823 / 5.269862 (-0.411038) | 2.838913 / 4.565676 (-1.726764) | 0.070010 / 0.424275 (-0.354265) | 0.009602 / 0.007607 (0.001995) | 0.538735 / 0.226044 (0.312691) | 5.534340 / 2.268929 (3.265411) | 2.915006 / 55.444624 (-52.529619) | 2.625132 / 6.876477 (-4.251345) | 2.537838 / 2.142072 (0.395766) | 0.667870 / 4.805227 (-4.137357) | 0.146330 / 6.500664 (-6.354334) | 0.071631 / 0.075469 (-0.003838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594686 / 1.841788 (-0.247101) | 22.311113 / 8.074308 (14.236804) | 17.603983 / 10.191392 (7.412591) | 0.195995 / 0.680424 (-0.484428) | 0.022254 / 0.534201 (-0.511947) | 0.479661 / 0.579283 (-0.099622) | 0.463626 / 0.434364 (0.029262) | 0.483465 / 0.540337 (-0.056873) | 0.676141 / 1.386936 (-0.710795) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006146 / 0.011353 (-0.005207) | 0.004856 / 0.011008 (-0.006152) | 0.067506 / 0.038508 (0.028998) | 0.073968 / 0.023109 (0.050859) | 0.470013 / 0.275898 (0.194115) | 0.479022 / 0.323480 (0.155542) | 0.005972 / 0.007986 (-0.002014) | 0.003846 / 0.004328 (-0.000483) | 0.075141 / 0.004250 (0.070890) | 0.058597 / 0.037052 (0.021544) | 0.481454 / 0.258489 (0.222965) | 0.515634 / 0.293841 (0.221793) | 0.034979 / 0.128546 (-0.093567) | 0.010385 / 0.075646 (-0.065261) | 0.072649 / 0.419271 (-0.346622) | 0.058183 / 0.043533 (0.014650) | 0.462138 / 0.255139 (0.206999) | 0.476093 / 0.283200 (0.192893) | 0.032918 / 0.141683 (-0.108765) | 1.820530 / 1.452155 (0.368375) | 1.626360 / 1.492716 (0.133644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208970 / 0.018006 (0.190964) | 0.492478 / 0.000490 (0.491988) | 0.005487 / 0.000200 (0.005287) | 0.000140 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037896 / 0.037411 (0.000484) | 0.089752 / 0.014526 (0.075227) | 0.107445 / 0.176557 (-0.069111) | 0.181260 / 0.737135 (-0.555876) | 0.105700 / 0.296338 (-0.190639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495031 / 0.215209 (0.279821) | 4.806939 / 2.077655 (2.729284) | 2.227928 / 1.504120 (0.723808) | 2.067117 / 1.541195 (0.525922) | 2.348982 / 1.468490 (0.880492) | 0.567201 / 4.584777 (-4.017576) | 4.166592 / 3.745712 (0.420880) | 3.654329 / 5.269862 (-1.615533) | 2.331092 / 4.565676 (-2.234584) | 0.062212 / 0.424275 (-0.362063) | 0.008775 / 0.007607 (0.001168) | 0.515413 / 0.226044 (0.289369) | 5.449300 / 2.268929 (3.180371) | 3.206574 / 55.444624 (-52.238050) | 2.600455 / 6.876477 (-4.276022) | 3.041162 / 2.142072 (0.899089) | 0.681899 / 4.805227 (-4.123328) | 0.155400 / 6.500664 (-6.345265) | 0.073933 / 0.075469 (-0.001537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.572329 / 1.841788 (-0.269459) | 23.638519 / 8.074308 (15.564211) | 17.145663 / 10.191392 (6.954271) | 0.232690 / 0.680424 (-0.447734) | 0.028620 / 0.534201 (-0.505581) | 0.488105 / 0.579283 (-0.091178) | 0.490365 / 0.434364 (0.056001) | 0.599501 / 0.540337 (0.059164) | 0.708101 / 1.386936 (-0.678835) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4a761315900880a25b347ad19b78bd567cfce1f0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005947 / 0.011353 (-0.005406) | 0.003577 / 0.011008 (-0.007431) | 0.081631 / 0.038508 (0.043122) | 0.058651 / 0.023109 (0.035541) | 0.342742 / 0.275898 (0.066843) | 0.384130 / 0.323480 (0.060650) | 0.004620 / 0.007986 (-0.003366) | 0.002885 / 0.004328 (-0.001444) | 0.063698 / 0.004250 (0.059448) | 0.048953 / 0.037052 (0.011901) | 0.367880 / 0.258489 (0.109391) | 0.407050 / 0.293841 (0.113209) | 0.027242 / 0.128546 (-0.101305) | 0.007914 / 0.075646 (-0.067733) | 0.262156 / 0.419271 (-0.157116) | 0.044750 / 0.043533 (0.001218) | 0.351613 / 0.255139 (0.096474) | 0.380284 / 0.283200 (0.097084) | 0.020080 / 0.141683 (-0.121603) | 1.498101 / 1.452155 (0.045946) | 1.543608 / 1.492716 (0.050892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180014 / 0.018006 (0.162008) | 0.436172 / 0.000490 (0.435682) | 0.003694 / 0.000200 (0.003494) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024389 / 0.037411 (-0.013022) | 0.072874 / 0.014526 (0.058348) | 0.083469 / 0.176557 (-0.093088) | 0.144600 / 0.737135 (-0.592536) | 0.084229 / 0.296338 (-0.212110) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391636 / 0.215209 (0.176427) | 3.906941 / 2.077655 (1.829286) | 1.901944 / 1.504120 (0.397825) | 1.762702 / 1.541195 (0.221507) | 1.817970 / 1.468490 
(0.349480) | 0.500345 / 4.584777 (-4.084432) | 3.011351 / 3.745712 (-0.734361) | 4.417763 / 5.269862 (-0.852098) | 2.689744 / 4.565676 (-1.875933) | 0.057765 / 0.424275 (-0.366511) | 0.006412 / 0.007607 (-0.001195) | 0.468156 / 0.226044 (0.242112) | 4.664975 / 2.268929 (2.396047) | 2.323355 / 55.444624 (-53.121270) | 1.984280 / 6.876477 (-4.892197) | 2.165215 / 2.142072 (0.023142) | 0.586950 / 4.805227 (-4.218278) | 0.124363 / 6.500664 (-6.376301) | 0.060702 / 0.075469 (-0.014767) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238870 / 1.841788 (-0.602917) | 18.587360 / 8.074308 (10.513052) | 13.831674 / 10.191392 (3.640282) | 0.143542 / 0.680424 (-0.536882) | 0.016913 / 0.534201 (-0.517288) | 0.332314 / 0.579283 (-0.246969) | 0.345419 / 0.434364 (-0.088945) | 0.381257 / 0.540337 (-0.159081) | 0.537844 / 1.386936 (-0.849092) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006294 / 0.011353 (-0.005059) | 0.003714 / 0.011008 (-0.007294) | 0.062684 / 0.038508 (0.024176) | 0.063520 / 0.023109 (0.040411) | 0.389591 / 0.275898 (0.113693) | 0.444278 / 0.323480 (0.120798) | 0.004825 / 0.007986 (-0.003160) | 0.003010 / 0.004328 (-0.001318) | 0.062767 / 0.004250 (0.058517) | 0.051739 / 0.037052 (0.014686) | 0.434299 / 0.258489 (0.175810) | 0.452003 / 0.293841 (0.158162) | 0.027375 / 0.128546 (-0.101171) | 0.008135 / 0.075646 (-0.067511) | 0.067401 / 0.419271 (-0.351871) | 0.042752 / 0.043533 (-0.000780) | 0.367633 / 0.255139 (0.112494) | 0.433039 / 0.283200 (0.149840) | 0.021086 / 0.141683 (-0.120597) | 1.488024 / 1.452155 (0.035870) | 1.507767 / 1.492716 (0.015050) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230046 / 0.018006 (0.212040) | 0.428085 / 0.000490 (0.427595) | 0.002188 / 0.000200 (0.001988) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026705 / 0.037411 (-0.010706) | 0.082466 / 0.014526 (0.067940) | 0.089378 / 0.176557 (-0.087179) | 0.147287 / 0.737135 (-0.589849) | 0.090426 / 0.296338 (-0.205913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430882 / 0.215209 (0.215672) | 4.296224 / 2.077655 (2.218569) | 2.229982 / 1.504120 (0.725862) | 2.048506 / 1.541195 (0.507311) | 2.129514 / 1.468490 (0.661024) | 0.502964 / 4.584777 (-4.081813) | 3.048125 / 3.745712 (-0.697587) | 4.208636 / 5.269862 (-1.061226) | 2.594015 / 4.565676 (-1.971661) | 0.057967 / 0.424275 (-0.366308) | 0.006875 / 0.007607 (-0.000732) | 0.513872 / 0.226044 (0.287828) | 5.126435 / 2.268929 (2.857506) | 2.691278 / 55.444624 (-52.753346) | 2.361723 / 6.876477 (-4.514754) | 2.511213 / 2.142072 (0.369141) | 0.593558 / 4.805227 (-4.211670) | 0.129332 / 6.500664 (-6.371332) | 0.064051 / 0.075469 (-0.011418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289049 / 1.841788 (-0.552739) | 18.912363 / 8.074308 (10.838055) | 14.226500 / 10.191392 (4.035108) | 0.131392 / 0.680424 (-0.549032) | 0.016750 / 0.534201 (-0.517451) | 0.330078 / 0.579283 (-0.249205) | 0.347588 / 0.434364 (-0.086776) | 0.383234 / 0.540337 (-0.157103) | 0.510967 / 1.386936 (-0.875969) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d7892beb30bab0633b84398c5ea43d7e69fe38cc \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005379) | 0.003691 / 0.011008 (-0.007317) | 0.079410 / 0.038508 (0.040902) | 0.061769 / 0.023109 (0.038660) | 0.323310 / 0.275898 (0.047412) | 0.354325 / 0.323480 (0.030845) | 0.004794 / 0.007986 (-0.003191) | 0.002899 / 0.004328 (-0.001430) | 0.062104 / 0.004250 (0.057854) | 0.048973 / 0.037052 (0.011921) | 0.326497 / 0.258489 (0.068008) | 0.361347 / 0.293841 (0.067506) | 0.026741 / 0.128546 (-0.101805) | 0.007936 / 0.075646 (-0.067710) | 0.259168 / 0.419271 (-0.160104) | 0.044859 / 0.043533 (0.001327) | 0.319342 / 0.255139 (0.064203) | 0.343711 / 0.283200 (0.060511) | 0.022298 / 0.141683 (-0.119384) | 1.451595 / 1.452155 (-0.000560) | 1.573730 / 1.492716 (0.081014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.173086 / 0.018006 (0.155080) | 0.432400 / 0.000490 (0.431910) | 0.003739 / 0.000200 (0.003539) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024477 / 0.037411 (-0.012934) | 0.073463 / 0.014526 (0.058937) | 0.083410 / 0.176557 (-0.093146) | 0.144760 / 0.737135 (-0.592376) | 0.084199 / 0.296338 (-0.212140) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388251 / 0.215209 (0.173042) | 3.875375 / 2.077655 (1.797720) | 1.875515 / 1.504120 (0.371395) | 1.729282 / 1.541195 (0.188087) | 1.784732 / 1.468490 
(0.316242) | 0.496985 / 4.584777 (-4.087792) | 3.030276 / 3.745712 (-0.715436) | 2.813192 / 5.269862 (-2.456669) | 1.868647 / 4.565676 (-2.697030) | 0.057376 / 0.424275 (-0.366899) | 0.006463 / 0.007607 (-0.001144) | 0.462153 / 0.226044 (0.236108) | 4.586583 / 2.268929 (2.317654) | 2.287730 / 55.444624 (-53.156894) | 1.972177 / 6.876477 (-4.904299) | 2.151592 / 2.142072 (0.009520) | 0.587169 / 4.805227 (-4.218058) | 0.127063 / 6.500664 (-6.373601) | 0.060297 / 0.075469 (-0.015172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267651 / 1.841788 (-0.574136) | 18.426011 / 8.074308 (10.351703) | 14.050470 / 10.191392 (3.859078) | 0.148063 / 0.680424 (-0.532361) | 0.017112 / 0.534201 (-0.517089) | 0.330051 / 0.579283 (-0.249232) | 0.358730 / 0.434364 (-0.075634) | 0.392365 / 0.540337 (-0.147972) | 0.534650 / 1.386936 (-0.852286) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005936 / 0.011353 (-0.005417) | 0.003652 / 0.011008 (-0.007356) | 0.063066 / 0.038508 (0.024558) | 0.060617 / 0.023109 (0.037507) | 0.388293 / 0.275898 (0.112395) | 0.411422 / 0.323480 (0.087942) | 0.004691 / 0.007986 (-0.003295) | 0.002857 / 0.004328 (-0.001472) | 0.064198 / 0.004250 (0.059947) | 0.049124 / 0.037052 (0.012071) | 0.403601 / 0.258489 (0.145112) | 0.413619 / 0.293841 (0.119778) | 0.027279 / 0.128546 (-0.101267) | 0.008072 / 0.075646 (-0.067575) | 0.067890 / 0.419271 (-0.351381) | 0.041866 / 0.043533 (-0.001667) | 0.393438 / 0.255139 (0.138299) | 0.402865 / 0.283200 (0.119666) | 0.023381 / 0.141683 (-0.118302) | 1.496324 / 1.452155 (0.044170) | 1.538080 / 1.492716 (0.045364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212065 / 0.018006 (0.194059) | 0.410511 / 0.000490 (0.410021) | 0.001236 / 0.000200 (0.001036) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026012 / 0.037411 (-0.011399) | 0.076592 / 0.014526 (0.062066) | 0.085963 / 0.176557 (-0.090594) | 0.137803 / 0.737135 (-0.599332) | 0.087594 / 0.296338 (-0.208745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434283 / 0.215209 (0.219074) | 4.345478 / 2.077655 (2.267824) | 2.400954 / 1.504120 (0.896834) | 2.282024 / 1.541195 (0.740829) | 2.414247 / 1.468490 (0.945757) | 0.501855 / 4.584777 (-4.082922) | 3.059433 / 3.745712 (-0.686279) | 2.811288 / 5.269862 (-2.458574) | 1.856839 / 4.565676 (-2.708838) | 0.058017 / 0.424275 (-0.366258) | 0.006844 / 0.007607 (-0.000763) | 0.515376 / 0.226044 (0.289332) | 5.148775 / 2.268929 (2.879847) | 2.930807 / 55.444624 (-52.513817) | 2.520532 / 6.876477 (-4.355944) | 2.746299 / 2.142072 (0.604227) | 0.590102 / 4.805227 (-4.215125) | 0.125747 / 6.500664 (-6.374917) | 0.061873 / 0.075469 (-0.013597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306247 / 1.841788 (-0.535541) | 18.366048 / 8.074308 (10.291740) | 13.855617 / 10.191392 (3.664225) | 0.150124 / 0.680424 (-0.530300) | 0.017189 / 0.534201 (-0.517012) | 0.336285 / 0.579283 (-0.242998) | 0.344985 / 0.434364 (-0.089379) | 0.397973 / 0.540337 (-0.142364) | 0.536142 / 1.386936 (-0.850794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ae24cf12054b4a512f198979b1ca7707bb99d56 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006401 / 0.011353 (-0.004952) | 0.003789 / 0.011008 (-0.007219) | 0.079516 / 0.038508 (0.041008) | 0.068279 / 0.023109 (0.045170) | 0.295691 / 0.275898 (0.019793) | 0.327208 / 0.323480 (0.003728) | 0.005070 / 0.007986 (-0.002915) | 0.003044 / 0.004328 (-0.001285) | 0.061411 / 0.004250 (0.057161) | 0.053227 / 0.037052 (0.016175) | 0.297368 / 0.258489 (0.038879) | 0.334740 / 0.293841 (0.040899) | 0.029459 / 0.128546 (-0.099087) | 0.008080 / 0.075646 (-0.067566) | 0.267344 / 0.419271 (-0.151927) | 0.049877 / 0.043533 (0.006344) | 0.293853 / 0.255139 (0.038714) | 0.319819 / 0.283200 (0.036620) | 0.022593 / 0.141683 (-0.119089) | 1.459054 / 1.452155 (0.006900) | 1.471250 / 1.492716 (-0.021466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194326 / 0.018006 (0.176320) | 0.443565 / 0.000490 (0.443075) | 0.003745 / 0.000200 (0.003545) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026640 / 0.037411 (-0.010772) | 0.077630 / 0.014526 (0.063104) | 0.089364 / 0.176557 (-0.087192) | 0.147327 / 0.737135 (-0.589809) | 0.089603 / 0.296338 (-0.206735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.373758 / 0.215209 (0.158549) | 3.746778 / 2.077655 (1.669123) | 1.814991 / 1.504120 (0.310871) | 1.645650 / 1.541195 (0.104455) | 1.690752 / 1.468490 
(0.222262) | 0.472117 / 4.584777 (-4.112660) | 3.457346 / 3.745712 (-0.288367) | 3.138869 / 5.269862 (-2.130993) | 1.934924 / 4.565676 (-2.630753) | 0.055709 / 0.424275 (-0.368566) | 0.006680 / 0.007607 (-0.000927) | 0.446874 / 0.226044 (0.220829) | 4.458409 / 2.268929 (2.189480) | 2.253932 / 55.444624 (-53.190693) | 2.007240 / 6.876477 (-4.869237) | 2.081687 / 2.142072 (-0.060386) | 0.563379 / 4.805227 (-4.241848) | 0.128694 / 6.500664 (-6.371970) | 0.057409 / 0.075469 (-0.018060) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212231 / 1.841788 (-0.629556) | 18.519121 / 8.074308 (10.444813) | 13.582243 / 10.191392 (3.390851) | 0.142488 / 0.680424 (-0.537936) | 0.017421 / 0.534201 (-0.516780) | 0.366864 / 0.579283 (-0.212419) | 0.401467 / 0.434364 (-0.032897) | 0.443659 / 0.540337 (-0.096679) | 0.618854 / 1.386936 (-0.768082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003690 / 0.011008 (-0.007318) | 0.060340 / 0.038508 (0.021832) | 0.067215 / 0.023109 (0.044106) | 0.382846 / 0.275898 (0.106948) | 0.415774 / 0.323480 (0.092294) | 0.004868 / 0.007986 (-0.003118) | 0.003108 / 0.004328 (-0.001221) | 0.060572 / 0.004250 (0.056321) | 0.050453 / 0.037052 (0.013401) | 0.400494 / 0.258489 (0.142005) | 0.424368 / 0.293841 (0.130527) | 0.030279 / 0.128546 (-0.098267) | 0.008151 / 0.075646 (-0.067495) | 0.066707 / 0.419271 (-0.352564) | 0.046118 / 0.043533 (0.002585) | 0.386697 / 0.255139 (0.131558) | 0.410156 / 0.283200 (0.126957) | 0.020688 / 0.141683 (-0.120995) | 1.418162 / 1.452155 (-0.033993) | 1.463057 / 1.492716 (-0.029659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216081 / 0.018006 (0.198075) | 0.440541 / 0.000490 (0.440051) | 0.000371 / 0.000200 (0.000171) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027763 / 0.037411 (-0.009648) | 0.082316 / 0.014526 (0.067791) | 0.094086 / 0.176557 (-0.082471) | 0.144738 / 0.737135 (-0.592398) | 0.094837 / 0.296338 (-0.201501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396277 / 0.215209 (0.181068) | 3.958791 / 2.077655 (1.881136) | 2.021367 / 1.504120 (0.517247) | 1.860112 / 1.541195 (0.318917) | 1.886032 / 1.468490 (0.417541) | 0.468536 / 4.584777 (-4.116241) | 3.417950 / 3.745712 (-0.327762) | 4.849991 / 5.269862 (-0.419871) | 2.773935 / 4.565676 (-1.791742) | 0.055813 / 0.424275 (-0.368462) | 0.007053 / 0.007607 (-0.000554) | 0.470167 / 0.226044 (0.244122) | 4.702969 / 2.268929 (2.434041) | 2.474161 / 55.444624 (-52.970464) | 2.171256 / 6.876477 (-4.705220) | 2.315373 / 2.142072 (0.173301) | 0.589195 / 4.805227 (-4.216032) | 0.128237 / 6.500664 (-6.372427) | 0.058641 / 0.075469 (-0.016828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292947 / 1.841788 (-0.548841) | 18.851300 / 8.074308 (10.776992) | 14.089764 / 10.191392 (3.898372) | 0.164853 / 0.680424 (-0.515571) | 0.017281 / 0.534201 (-0.516920) | 0.359112 / 0.579283 (-0.220171) | 0.386696 / 0.434364 (-0.047668) | 0.428222 / 0.540337 (-0.112115) | 0.568659 / 1.386936 (-0.818277) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#563864ded894b468e2ba3f677ef79c5ab3fe65df \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006051 / 0.011353 (-0.005301) | 0.003654 / 0.011008 (-0.007355) | 0.080081 / 0.038508 (0.041572) | 0.062925 / 0.023109 (0.039815) | 0.358097 / 0.275898 (0.082199) | 0.405728 / 0.323480 (0.082248) | 0.005359 / 0.007986 (-0.002627) | 0.002820 / 0.004328 (-0.001508) | 0.063108 / 0.004250 (0.058858) | 0.049627 / 0.037052 (0.012575) | 0.397870 / 0.258489 (0.139381) | 0.437157 / 0.293841 (0.143316) | 0.027707 / 0.128546 (-0.100839) | 0.007911 / 0.075646 (-0.067735) | 0.260991 / 0.419271 (-0.158280) | 0.044771 / 0.043533 (0.001238) | 0.340230 / 0.255139 (0.085091) | 0.384925 / 0.283200 (0.101725) | 0.021369 / 0.141683 (-0.120314) | 1.431439 / 1.452155 (-0.020715) | 1.478794 / 1.492716 (-0.013922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182626 / 0.018006 (0.164620) | 0.435551 / 0.000490 (0.435061) | 0.003015 / 0.000200 (0.002815) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024703 / 0.037411 (-0.012708) | 0.073640 / 0.014526 (0.059114) | 0.084598 / 0.176557 (-0.091959) | 0.145810 / 0.737135 (-0.591325) | 0.085125 / 0.296338 (-0.211213) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394539 / 0.215209 (0.179330) | 3.945882 / 2.077655 (1.868227) | 1.947166 / 1.504120 (0.443046) | 1.763305 / 1.541195 (0.222111) | 1.816208 / 1.468490 
(0.347718) | 0.498880 / 4.584777 (-4.085897) | 3.098283 / 3.745712 (-0.647429) | 2.823474 / 5.269862 (-2.446388) | 1.873993 / 4.565676 (-2.691684) | 0.058097 / 0.424275 (-0.366179) | 0.006488 / 0.007607 (-0.001119) | 0.466711 / 0.226044 (0.240667) | 4.671520 / 2.268929 (2.402592) | 2.363381 / 55.444624 (-53.081243) | 2.052092 / 6.876477 (-4.824385) | 2.209212 / 2.142072 (0.067140) | 0.594650 / 4.805227 (-4.210577) | 0.125604 / 6.500664 (-6.375060) | 0.061511 / 0.075469 (-0.013958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226564 / 1.841788 (-0.615224) | 18.583605 / 8.074308 (10.509297) | 13.993091 / 10.191392 (3.801699) | 0.146185 / 0.680424 (-0.534239) | 0.016839 / 0.534201 (-0.517362) | 0.334116 / 0.579283 (-0.245167) | 0.360780 / 0.434364 (-0.073584) | 0.386008 / 0.540337 (-0.154329) | 0.643278 / 1.386936 (-0.743658) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.003658 / 0.011008 (-0.007350) | 0.063250 / 0.038508 (0.024742) | 0.063542 / 0.023109 (0.040433) | 0.366845 / 0.275898 (0.090947) | 0.409794 / 0.323480 (0.086314) | 0.005678 / 0.007986 (-0.002308) | 0.003061 / 0.004328 (-0.001268) | 0.063561 / 0.004250 (0.059311) | 0.052648 / 0.037052 (0.015596) | 0.378096 / 0.258489 (0.119607) | 0.410706 / 0.293841 (0.116865) | 0.027668 / 0.128546 (-0.100878) | 0.008045 / 0.075646 (-0.067601) | 0.068290 / 0.419271 (-0.350981) | 0.042602 / 0.043533 (-0.000930) | 0.364976 / 0.255139 (0.109837) | 0.395599 / 0.283200 (0.112400) | 0.022733 / 0.141683 (-0.118950) | 1.522473 / 1.452155 (0.070319) | 1.515891 / 1.492716 (0.023175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232554 / 0.018006 (0.214547) | 0.420702 / 0.000490 (0.420213) | 0.002161 / 0.000200 (0.001961) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026276 / 0.037411 (-0.011135) | 0.078504 / 0.014526 (0.063978) | 0.088989 / 0.176557 (-0.087567) | 0.144044 / 0.737135 (-0.593091) | 0.091074 / 0.296338 (-0.205265) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420189 / 0.215209 (0.204980) | 4.189596 / 2.077655 (2.111941) | 2.316425 / 1.504120 (0.812305) | 2.186877 / 1.541195 (0.645682) | 2.259065 / 1.468490 (0.790575) | 0.502827 / 4.584777 (-4.081950) | 3.135266 / 3.745712 (-0.610446) | 2.838808 / 5.269862 (-2.431053) | 1.876519 / 4.565676 (-2.689158) | 0.057802 / 0.424275 (-0.366473) | 0.006824 / 0.007607 (-0.000784) | 0.500213 / 0.226044 (0.274168) | 4.999798 / 2.268929 (2.730869) | 2.627713 / 55.444624 (-52.816911) | 2.344263 / 6.876477 (-4.532214) | 2.415449 / 2.142072 (0.273376) | 0.593082 / 4.805227 (-4.212145) | 0.125787 / 6.500664 (-6.374877) | 0.062699 / 0.075469 (-0.012770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308219 / 1.841788 (-0.533569) | 18.703099 / 8.074308 (10.628791) | 13.976234 / 10.191392 (3.784842) | 0.144037 / 0.680424 (-0.536387) | 0.016592 / 0.534201 (-0.517609) | 0.333078 / 0.579283 (-0.246206) | 0.342317 / 0.434364 (-0.092047) | 0.396837 / 0.540337 (-0.143500) | 0.532641 / 1.386936 (-0.854295) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#14f6edd9222e577dccb962ed5338b79b73502fa5 \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4551/comments | https://api.github.com/repos/huggingface/datasets/issues/4551/events | https://github.com/huggingface/datasets/pull/4551 | 1,282,534,807 | PR_kwDODunzps46QAV- | 4,551 | Perform hidden file check on relative data file path | [] | closed | false | null | 5 | 2022-06-23T14:49:11Z | 2022-06-30T14:49:20Z | 2022-06-30T14:38:18Z | null | Fix #4549 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4551/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4551/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4551",
"merged_at": "2022-06-30T14:38:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4551"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https://github.com/huggingface/datasets/issues/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they seem to work (will push them on Monday; btw they don't break any of fsspec's tests, so maybe we can contribute this as an enhancement to them). Also, perhaps we should include the files starting with `__` in the results again (we hadn't had issues with this pattern before). WDYT?",
"I see. Feel free to merge this one if it's good for you btw :)\r\n\r\n> Also, perhaps we should include the files starting with __ in the results again (we hadn't had issues with this pattern before)\r\n\r\nThe point was mainly to ignore `__pycache__` directories for example. Also also for consistency with the iter_files/iter_archive which are already ignoring them",
"Very elegant solution! Feel free to merge if the CI is green after adding the tests.",
"CI failure is unrelated to this PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/1269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1269/comments | https://api.github.com/repos/huggingface/datasets/issues/1269/events | https://github.com/huggingface/datasets/pull/1269 | 758,886,174 | MDExOlB1bGxSZXF1ZXN0NTMzOTc3MTE2 | 1,269 | Adding OneStopEnglish corpus dataset | [] | closed | false | null | 1 | 2020-12-07T22:05:11Z | 2020-12-09T18:43:38Z | 2020-12-09T15:33:53Z | null | This PR adds OneStopEnglish Corpus containing texts classified into reading levels (elementary, intermediate, advanced) for automatic readability assessment and text simplification.
Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1269/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1269.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1269",
"merged_at": "2020-12-09T15:33:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1269.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1269"
} | true | [
"Hi @lhoestq, thanks for the review.\r\nI have made all the changes, PTAL! :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/3186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3186/comments | https://api.github.com/repos/huggingface/datasets/issues/3186/events | https://github.com/huggingface/datasets/issues/3186 | 1,040,369,397 | I_kwDODunzps4-Asb1 | 3,186 | Dataset viewer for nli_tr | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 6 | 2021-10-31T03:56:33Z | 2022-09-12T09:15:34Z | 2022-09-12T08:43:09Z | null | ## Dataset viewer issue for '*nli_tr*'
**Link:** https://huggingface.co/datasets/nli_tr
Hello,
Thank you for the new dataset preview feature, which will help users view datasets online.
We just noticed that the dataset viewer widget for the `nli_tr` dataset shows the error below. The error seems to be due to a temporary problem that blocked access to the dataset through the dataset viewer, but the dataset is currently accessible through the link in the error message. May we kindly ask if it would be possible to rerun the job so that the dataset viewer can access the dataset?
Thank you.
Emrah
------------------------------------------
Server Error
Status code: 404
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_train.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip
------------------------------------------
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3186/timeline | null | completed | null | null | false | [
"It's an issue with the streaming mode:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/nli_tr/c2ddd0c0a70caddac6a81c2dae5ca7939f00060d517d08f1983927818dba6521/nli_tr.py\", line 155, in _generate_examples\r\n with codecs.open(filepath, encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/codecs.py\", line 905, in open\r\n file = builtins.open(filename, mode, buffering)\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_test.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip'\r\n```\r\n\r\nNote that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. `nli_tr` is above the limit, hence the error.",
"cc @huggingface/datasets ",
"Apparently there is an issue with the data source URLs: Server Not Found\r\n- https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip\r\n\r\nWe are contacting the authors to ask them: \r\n@e-budur you are one of the authors: are you aware of the issue with the URLs of your data ?",
"Reported to their repo:\r\n- https://github.com/boun-tabi/NLI-TR/issues/9",
"The server issue was temporary and is now resolved.",
"Once we have implemented support for streaming, the viewer works: https://huggingface.co/datasets/nli_tr"
] |
https://api.github.com/repos/huggingface/datasets/issues/4565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4565/comments | https://api.github.com/repos/huggingface/datasets/issues/4565/events | https://github.com/huggingface/datasets/issues/4565 | 1,284,141,666 | I_kwDODunzps5MinJi | 4,565 | Add UFSC OCPap dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2022-06-24T20:07:54Z | 2022-07-06T19:03:02Z | 2022-07-06T19:03:02Z | null | ## Adding a Dataset
- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)
- **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer diagnosed and 3 healthy of oral brush samples, from distinct patients.
- **Paper:** https://dx.doi.org/10.2139/ssrn.4119212
- **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1
- **Motivation:** real data of pap stained oral cytology samples
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4565/timeline | null | completed | null | null | false | [
"I will add this directly on the hub (same as #4486)—in https://huggingface.co/lapix"
] |
https://api.github.com/repos/huggingface/datasets/issues/5661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5661/comments | https://api.github.com/repos/huggingface/datasets/issues/5661/events | https://github.com/huggingface/datasets/issues/5661 | 1,637,129,445 | I_kwDODunzps5hlJzl | 5,661 | CI is broken: Unnecessary `dict` comprehension | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2023-03-23T09:13:01Z | 2023-03-23T09:37:51Z | 2023-03-23T09:37:51Z | null | CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5661/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4543/comments | https://api.github.com/repos/huggingface/datasets/issues/4543/events | https://github.com/huggingface/datasets/pull/4543 | 1,280,379,781 | PR_kwDODunzps46IiEp | 4,543 | [CI] Fix upstream hub test url | [] | closed | false | null | 2 | 2022-06-22T15:34:27Z | 2022-06-22T16:37:40Z | 2022-06-22T16:27:37Z | null | Some tests were still using moon-staging instead of hub-ci.
I also updated the token to use one dedicated to `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4543/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4543/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4543.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4543",
"merged_at": "2022-06-22T16:27:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4543.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4543"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Remaining CI failures are unrelated to this fix, merging"
] |
https://api.github.com/repos/huggingface/datasets/issues/858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/858/comments | https://api.github.com/repos/huggingface/datasets/issues/858/events | https://github.com/huggingface/datasets/pull/858 | 743,904,516 | MDExOlB1bGxSZXF1ZXN0NTIxNzE3ODQ4 | 858 | Add SemEval-2010 task 8 | [] | closed | false | null | 1 | 2020-11-16T14:57:57Z | 2020-11-26T17:28:55Z | 2020-11-26T17:28:55Z | null | Hi,
I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it.
Cheers,
Joel | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/858/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/858/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/858",
"merged_at": "2020-11-26T17:28:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/858"
} | true | [
"Added dummy data and encoding to open(). Now everything should be fine, hopefully :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3882/comments | https://api.github.com/repos/huggingface/datasets/issues/3882/events | https://github.com/huggingface/datasets/pull/3882 | 1,164,595,388 | PR_kwDODunzps40NKz7 | 3,882 | Image process doc | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-03-10T00:32:10Z | 2022-03-15T15:24:16Z | 2022-03-15T15:24:09Z | null | This PR is a first draft of how to process image data. It adds:
- Load an image dataset with `image` and `path` (adds tip about `decode=False` param to access the path and bytes, thanks to @mariosasko).
- Load an image using the `ImageFolder` builder. I know there is an [example](https://huggingface.co/docs/datasets/master/en/loading#image-folders) of this already, but I also wanted to add it here so users don't miss it. This doc seems important for centralizing all of the image-related things so far. Datasets has grown so quickly 🚀 now that I think maybe splitting up the How-to guides by modality may be better since working with vision/audio data is slightly different from what users have seen up until now. This way we can continue to scale the docs to better accommodate vision/audio things.
- Add a data augmentation with `set_transform`. There is only one example here so far, but we can certainly add more (a minimal sketch of such an example follows below).
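A minimal sketch of what such an augmentation example could look like (the `path/to/images` directory and the torchvision transform are illustrative assumptions, not necessarily the exact ones used in the doc):
```python
from datasets import load_dataset
from torchvision.transforms import ColorJitter, Compose, ToTensor

# "path/to/images" is a placeholder folder laid out for the ImageFolder builder.
dataset = load_dataset("imagefolder", data_dir="path/to/images", split="train")

_augment = Compose([ColorJitter(brightness=0.5, hue=0.5), ToTensor()])

def transforms(examples):
    # "image" is the column created by the ImageFolder builder; it holds PIL images.
    examples["pixel_values"] = [_augment(img.convert("RGB")) for img in examples["image"]]
    return examples

# set_transform applies the augmentation lazily, each time examples are accessed.
dataset.set_transform(transforms)
print(dataset[0]["pixel_values"].shape)
```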
Todo:
- [x] Couldn't figure out why my augmentation function works with `set_transform` but not `map` 🥲. Working with @mariosasko on this! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3882/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3882.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3882",
"merged_at": "2022-03-15T15:24:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3882.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3882"
} | true | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3882). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/1318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1318/comments | https://api.github.com/repos/huggingface/datasets/issues/1318/events | https://github.com/huggingface/datasets/pull/1318 | 759,565,629 | MDExOlB1bGxSZXF1ZXN0NTM0NTQ5NjE3 | 1,318 | ethos first commit | [] | closed | false | null | 3 | 2020-12-08T15:59:47Z | 2020-12-10T14:45:57Z | 2020-12-10T14:45:57Z | null | Ethos passed all the tests except for this one:
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>
with this error:
E OSError: Cannot find data file.
E Original error:
E [Errno 2] No such file or directory: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1318/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1318/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1318",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1318"
} | true | [
"> Nice thanks !\r\n> \r\n> I left a few comments\r\n> \r\n> Also it looks like this PR includes changes about other files than the ones for ethos\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\n@lhoestq Should I close this PR? The new one is the: #1453",
"You can create another PR and close this one if you don't mind",
"> You can create another PR and close this one if you don't mind\r\n\r\nPerfect! You should see the #1453 PR for the fixed version! Thanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/93 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/93/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/93/comments | https://api.github.com/repos/huggingface/datasets/issues/93/events | https://github.com/huggingface/datasets/pull/93 | 617,522,029 | MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy | 93 | Cleanup notebooks and various fixes | [] | closed | false | null | 0 | 2020-05-13T14:58:58Z | 2020-05-13T15:01:48Z | 2020-05-13T15:01:47Z | null | Fixes on dataset (more flexible), metrics (fix), and general clean-ups | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/93/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/93/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/93.diff",
"html_url": "https://github.com/huggingface/datasets/pull/93",
"merged_at": "2020-05-13T15:01:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/93.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/93"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5942/comments | https://api.github.com/repos/huggingface/datasets/issues/5942/events | https://github.com/huggingface/datasets/pull/5942 | 1,752,021,681 | PR_kwDODunzps5Su-V4 | 5,942 | Pass datasets-cli additional args as kwargs to DatasetBuilder in `run_beam.py` | [] | open | false | null | 0 | 2023-06-12T06:50:50Z | 2023-06-30T09:15:00Z | null | null | Hi,
Following this <https://discuss.huggingface.co/t/how-to-preprocess-a-wikipedia-dataset-using-dataflowrunner/41991/3>, here is a simple PR to pass any additional args to datasets-cli as kwargs in the DatasetBuilder in `run_beam.py`.
I also took the liberty to add missing setup steps to the `beam.mdx` docs in order to help everyone.
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5942/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5942.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5942",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5942.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5942"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4476/comments | https://api.github.com/repos/huggingface/datasets/issues/4476/events | https://github.com/huggingface/datasets/issues/4476 | 1,267,987,499 | I_kwDODunzps5Lk_Qr | 4,476 | `to_pandas` doesn't take into account format. | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 4 | 2022-06-10T20:25:31Z | 2022-06-15T17:41:41Z | 2022-06-15T17:41:41Z | null | **Is your feature request related to a problem? Please describe.**
I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`.
**Describe the solution you'd like**
```python
from datasets import Dataset
ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})
pandas_df = ds.with_format(columns=['a', 'b']).to_pandas()
# I would expect `pandas_df` to only include a and b as columns.
```
**Describe alternatives you've considered**
I could remove all of the columns that I don't want (a rough sketch of this is shown below), but I don't know all of them in advance.
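A rough sketch of that alternative, reusing the toy dataset from the example above (the unwanted columns are computed from `column_names` rather than being known in advance):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [5, 6, 7], "c": [8, 9, 10]})
keep = ["a", "b"]
# Drop every column that is not in `keep`, then convert the rest to pandas.
pandas_df = ds.remove_columns([col for col in ds.column_names if col not in keep]).to_pandas()
print(pandas_df.columns.tolist())  # ['a', 'b']
```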
**Additional context**
I can probably make a PR with some pointers.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4476/timeline | null | completed | null | null | false | [
"Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`",
"Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.",
"Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```",
"Ahhhh Thank you!\r\n\r\nclosing then :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3457/comments | https://api.github.com/repos/huggingface/datasets/issues/3457/events | https://github.com/huggingface/datasets/issues/3457 | 1,084,862,121 | I_kwDODunzps5Aqa6p | 3,457 | Add CMU Graphics Lab Motion Capture dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 3 | 2021-12-20T14:34:39Z | 2022-03-16T16:53:09Z | null | null | ## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3457/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3457/timeline | null | null | null | null | false | [
"This dataset has files in ASF/AMC format. [ The skeleton file is the ASF file (Acclaim Skeleton File). The motion file is the AMC file (Acclaim Motion Capture data). ] \r\n\r\nSome questions : \r\n1. How do we go about representing these features using datasets.Features and generate examples ?\r\n2. The dataset download link for ASF/AMC files does not have metadata information, for eg : category and subcategory information. We will need to crawl the website for this information. The authors mention \"Please don't crawl this database for all motions.\" Can we mail the authors for this information ?\r\nThe dataset structure is as follows : \r\n```\r\nsubjects\r\n\t- 01\r\n\t\t- 01_01.amc\r\n\t\t- 01_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 01.asf\r\n\t- 02\r\n\t\t- 02_01.amc\r\n\t\t- 02_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 02.asf\r\n```\r\nThere is no metadata regarding the category, sub-category and motion description.\r\n\r\nNeed your inputs. @mariosasko / @lhoestq \r\nThank you.\r\n",
"Hi @dnaveenr! Thanks for working on this!\r\n\r\n1. We can use the `Sequence(Value(\"string\"))` feature type for the subject's AMC files and `Value(\"string\")` for the subject's ASF file (`Value(\"string\")` represents the file paths) + the types for categories/subcategories and descriptions.\r\n2. We can use this URL to download the motion descriptions: http://mocap.cs.cmu.edu/search.php?subjectnumber=<subject_number>&motion=%%%&maincat=%&subcat=%&subtext=yes where `subject_number` is the number between 1 and 144. And to get categories/subcategories, feel free to contact the authors (they state in the FAQ they are happy to help) and ask them if they can provide the mapping from categories/subcategories to the AMC files to avoid crawling. You can also mention that your goal is to make their dataset more accessible by adding its loading script to the Hub.\r\n\r\nThe AMC files are also available in the tvd, c3d, mpg and avi formats (the links are in the [FAQ](http://mocap.cs.cmu.edu/faqs.php) section), so it would be nice to have one config for each of these additional formats. \r\n\r\nAnd additionally, we can add a `Data Preprocessing` section to the card where we explain how to load/process the files. I can help with that.",
"Hi @mariosasko ,\r\n\r\n1. Thanks for this, so we can add the file paths.\r\n2. Yes, I had already mailed the authors a couple of days back actually, asking for the metadata details[ i.e category, sub-category and motion description] . They are yet to respond though, I will wait for a couple of days and try to follow up with them again. :) Else we can use the workaround solution.\r\n\r\nYes. Supporting all the formats would be helpful. \r\n\r\n> And additionally, we can add a Data Preprocessing section to the card where we explain how to load/process the files. I can help with that.\r\n\r\nOkay. Got it."
] |
https://api.github.com/repos/huggingface/datasets/issues/2313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2313/comments | https://api.github.com/repos/huggingface/datasets/issues/2313/events | https://github.com/huggingface/datasets/pull/2313 | 875,475,367 | MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4 | 2,313 | Remove unused head_hf_s3 function | [] | closed | false | null | 0 | 2021-05-04T13:42:06Z | 2021-05-07T09:31:42Z | 2021-05-07T09:31:42Z | null | Currently, the function `head_hf_s3` is not used:
- neither its returned result is used
- nor does it raise any exception, as exceptions are caught and returned (not raised)
This PR removes it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2313/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2313",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2313"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1933/comments | https://api.github.com/repos/huggingface/datasets/issues/1933/events | https://github.com/huggingface/datasets/pull/1933 | 814,335,846 | MDExOlB1bGxSZXF1ZXN0NTc4MzQwMzk3 | 1,933 | Use arrow ipc file format | [] | open | false | null | 0 | 2021-02-23T10:38:24Z | 2022-07-06T15:19:48Z | null | null | According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample:
> We define a “file format” supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. See File.fbs for the precise details of the file footer.
Since it stores more metadata regarding the positions of the examples in the file, it should enable better example retrieval performance. However, from the discussion in https://github.com/huggingface/datasets/issues/1803, it looks like that's unfortunately not the case. Maybe in the future this will allow speed gains.
I think it's still a good idea to start using it anyway, for these reasons (a minimal pyarrow read/write sketch follows the list):
- in the future we may have speed gains
- it contains the arrow streaming format data
- it's compatible with the pyarrow Dataset implementation (it allows loading remote dataframes, for example) if we want to use it in the future
- it's also the format used by arrow feather if we want to use it in the future
- it's roughly the same size as the streaming format
- it's easy to have backward compatibility with the streaming format
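A minimal sketch of the practical difference between the two formats, using plain pyarrow (the file names and the toy table are illustrative):
```python
import pyarrow as pa

table = pa.table({"ids": [1, 2, 3]})

# IPC *file* format: a footer stores the schema plus record batch offsets, enabling random access.
with pa.OSFile("data.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

with pa.memory_map("data.arrow", "r") as source:
    reader = pa.ipc.open_file(source)
    first_batch = reader.get_batch(0)  # jump straight to any record batch

# IPC *streaming* format: the same record batches, but no footer, so reads are sequential.
with pa.OSFile("data.stream", "wb") as sink:
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)

with pa.OSFile("data.stream", "rb") as source:
    for batch in pa.ipc.open_stream(source):
        pass  # iterate batch by batch
```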
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1933/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1933/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1933.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1933",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1933.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1933"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1906/comments | https://api.github.com/repos/huggingface/datasets/issues/1906/events | https://github.com/huggingface/datasets/issues/1906 | 811,405,274 | MDU6SXNzdWU4MTE0MDUyNzQ= | 1,906 | Feature Request: Support for Pandas `Categorical` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | 3 | 2021-02-18T19:46:05Z | 2021-02-23T14:38:50Z | null | null | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
```
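For reference, a hedged sketch of one possible workaround with the existing `ClassLabel` feature (untested here; the `label` column name is illustrative):
```python
import pandas as pd
from datasets import ClassLabel, Dataset, Features

s = pd.Series(["a", "b", "c", "a"], dtype="category")
# Store the integer category codes and keep the category names in a ClassLabel feature.
features = Features({"label": ClassLabel(names=list(s.cat.categories))})
ds = Dataset.from_pandas(pd.DataFrame({"label": s.cat.codes}), features=features)
print(ds.features["label"].int2str(ds[0]["label"]))  # "a"
```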
I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`?
e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept:
```
index_type = generate_from_arrow_type(pa_type.index_type)
value_type = generate_from_arrow_type(pa_type.value_type)
```
and then additional code points to modify:
- FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694
- A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719
- I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755
- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775
I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1906/timeline | null | null | null | null | false | [
"We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corresponds to `pa.int64()` in pyarrow and `dtype('int64')` in pandas (so the label names are lost during conversions).\r\n\r\nWhat do you think ?",
"Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?\r\n\r\nOther thoughts:\r\n\r\n- changing the resulting patype on ClassLabel might be backward-incompatible? I'm not totally sure if users of the `datasets` library tend to directly access the `patype` attribute (I don't think we really do, but we haven't been using it for very long yet).\r\n- would ClassLabel's dtype change to `dict[int64, string]`? It seems like in practice a ClassLabel (when not explicitly specified) would be constructed from the DictionaryType branch of `generate_from_arrow_type`, so it's not totally clear to me that anyone ever actually accesses/uses that dtype?\r\n- I don't quite know how `.int2str` and `.str2int` are used in practice - would those be kept? Perhaps the implementation might actually be substantially smaller if we can just delegate to pyarrow's dict methods?\r\n\r\nAnother idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nIn practice, I don't think this would be backward-incompatible in a way anyone would care about since the current behavior just throws an exception, and this way, we could support *reading* a pandas Categorical into a `Dataset` as a ClassLabel. I *think* from there, while it would require some custom glue it wouldn't be too hard to convert the ClassLabel into a pandas Category if we want to go back - I think this would improve on the current behavior without risking changing the behavior of ClassLabel in a backward-incompat way.\r\n\r\nThoughts? I'm not sure if this is overly cautious. Whichever approach you think is better, I'd be happy to take it on!\r\n",
"I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.\r\n\r\nIn this scope, I really like the idea of checking for the dictionary type:\r\n\r\n> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nThis looks like a great start.\r\n\r\nThen as you said we'd have to add the conversion from classlabel to the correct arrow dictionary type. Arrow is already able to convert from arrow Dictionary to pandas Categorical so it should be enough.\r\n\r\nI can see two things that we must take case of to make this change backward compatible:\r\n- first we must still be able to load an arrow file with arrow int64 dtype and `datasets` ClassLabel type without crashing. This can be fixed by casting the arrow int64 array to an arrow Dictionary array on-the-fly when loading the table in the ArrowReader.\r\n- then we still have to return integers when accessing examples from a ClassLabel column. Currently it would return the strings values since it's based on the pandas behavior for converting from pandas to python/numpy. To do so we just have to adapt the python/numpy extractors in formatting.py (it takes care of converting an arrow table to a dictionary of python objects by doing arrow table -> pandas dataframe -> python dictionary)\r\n\r\nAny help on this matter is very much welcome :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4679/comments | https://api.github.com/repos/huggingface/datasets/issues/4679/events | https://github.com/huggingface/datasets/pull/4679 | 1,303,980,648 | PR_kwDODunzps47XX67 | 4,679 | Added method to remove excess nesting in a DatasetDict | [] | closed | false | null | 11 | 2022-07-13T21:49:37Z | 2022-07-21T15:55:26Z | 2022-07-21T10:55:02Z | null | Added the ability for a DatasetDict to remove additional nested layers within its features to avoid conflicts when collating. It is meant to accompany [this PR](https://github.com/huggingface/transformers/pull/18119) to resolve the same issue [#15505](https://github.com/huggingface/transformers/issues/15505).
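For context, a rough sketch of the `map`-based alternative that comes up in the discussion below (the helper name and its single-level un-nesting rule are illustrative assumptions, not the actual implementation of this PR):
```python
from datasets import Dataset, DatasetDict

dataset_dict = DatasetDict({"train": Dataset.from_dict({"input_ids": [[[1, 2, 3]], [[4, 5]]]})})

def remove_excess_nesting(example):
    # Unwrap values like [[1, 2, 3]] -> [1, 2, 3], as produced by tokenizing a list of one string.
    return {
        key: value[0] if isinstance(value, list) and len(value) == 1 and isinstance(value[0], list) else value
        for key, value in example.items()
    }

flat_datasets = dataset_dict.map(remove_excess_nesting)
print(flat_datasets["train"][0]["input_ids"])  # [1, 2, 3]
```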
@stas00 @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4679/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4679.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4679",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4679.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4679"
} | true | [
"Hi ! I think the issue you linked is closed and suggests to use `remove_columns`.\r\n\r\nMoreover if you end up with a dataset with an unnecessarily nested data, please modify your processing functions to not output nested data, or use `map(..., batched=True)` if you function take batches as input",
"Hi @lhoestq , you are right about the issues this pull has steered beyond that issue. I created this [colab notebook](https://colab.research.google.com/drive/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing) to present the error. I tried using batch and that won't resolve it either. I'm looking into that error right now.",
"I think you just need to pass one example at a time to your tokenizer, this way you don't end up with nested data:\r\n```python\r\n\r\ndef preprocessFunction(row):\r\n collatedContext = tokenizer.eos_token.join([row[\"context\"+str(i+1)] for i in range(int(AMT_OF_CONTEXT))])\r\n response = row[\"response\"]\r\n tokenizedContext = tokenizer(\r\n collatedContext, max_length=max_context_length, truncation=True # don't pass as a list here\r\n )\r\n with tokenizer.as_target_tokenizer():\r\n tokenized_response = tokenizer(\r\n response, max_length=max_response_length, truncation=True # don't pass a a list here\r\n )\r\n tokenizedContext[\"labels\"] = tokenized_response[\"input_ids\"]\r\n return tokenizedContext\r\n```",
"Yes that is correct, the purpose of this pull is to advise of a more general solution like with `def remove_excess_nesting(self)` or maybe automate the solution (stas00 advised not to automate it as it could \"not be backwards compatible\").",
"I'm not sure I understand how having `remove_excess_nesting` would make more sense than just fixing the preprocessFunction to simply not return nested samples, can you elaborate ?",
"Figuring out the issue can be a bit difficult to figure out. Only until I added batch does it make a little more sense with the error\r\n\r\n> sequence item 0: expected str instance, list found\r\n\r\nbut batch was never intended.\r\n\r\nWhen you run the colab you will notice that only until collating do you learn there is this error. So i figured it would be better to address it during at the `DatasetDict` level.\r\nI think it would be ideal if the user could be notified at the preprocess function.",
"I'm not arguing that `remove_excess_nesting` is the right solution but what I aim to address is dealing with unnecessary nesting as early as possible.",
"> When you run the colab you will notice that only until collating do you learn there is this error.\r\n\r\nI think users can just check the `dataset.features` and they would notice that the data are nested\r\n```python\r\n{\r\n 'input_ids': Sequence(Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), length=-1, id=None)\r\n ...\r\n}\r\n```\r\n\r\nSometime nested data are intentional, so you can't know in advance if it's a user's mistake or something planned.",
"Yes, I understand, it could be intentional and only the collator has problems with it. So, it is not worth handling it any differently in any other non-erroneous data. \r\n\r\nThat being said do you think there is any use for the `remove_excess_nesting` method? Or maybe it should be applied in a different way? If not feel free to close this PR. ",
"I think users can write it and use `map` themselves if needed, it is pretty straightforward to implement.\r\n\r\nI'm closing this PR if you don't mind, and thank you for the discussion :)",
"No problem @lhoestq , thanks for walking me through it."
] |
https://api.github.com/repos/huggingface/datasets/issues/5458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5458/comments | https://api.github.com/repos/huggingface/datasets/issues/5458/events | https://github.com/huggingface/datasets/issues/5458 | 1,555,054,737 | I_kwDODunzps5csECR | 5,458 | slice split while streaming | [] | closed | false | null | 2 | 2023-01-24T14:08:17Z | 2023-01-24T15:11:47Z | 2023-01-24T15:11:47Z | null | ### Describe the bug
When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported.
Did I miss this in the documentation?
### Steps to reproduce the bug
`load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")`
causes ValueError: Bad split: train[:3]. Available splits: ['train', 'test'] in builder.py, line 1213, in as_streaming_dataset
### Expected behavior
The first 3 entries of the dataset as a stream
### Environment info
- `datasets` version: 2.8.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5458/timeline | null | completed | null | null | false | [
"Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train\").take(3)`\r\n\r\n\r\n",
"Thank you for your quick response!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4895/comments | https://api.github.com/repos/huggingface/datasets/issues/4895/events | https://github.com/huggingface/datasets/issues/4895 | 1,350,798,527 | I_kwDODunzps5Qg4y_ | 4,895 | load_dataset method returns Unknown split "validation" even if this dir exists | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 17 | 2022-08-25T12:11:00Z | 2022-10-06T17:49:28Z | 2022-09-29T08:07:50Z | null | ## Describe the bug
The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
|_ 1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ 234.png
|_ metadata.jsonl
...
test_data2
|_ train
|_ train_1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ val_234.png
|_ metadata.jsonl
...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended, i.e.
`train_1012.png, val_234.png`, while the images in `test_data1` do not have the split names prepended to the image names, i.e. `1012.png, 234.png`.
I actually saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take the split from the parent directory name, i.e. `val` should become part of the validation split?
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
```
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4895/timeline | null | completed | null | null | false | [
"I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n",
"@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get splits names from filenames in this case, ignoring directories names, but `val` wasn't in keywords at that time, but it was fixed recently in this PR https://github.com/huggingface/datasets/pull/4844. ",
"I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)",
"@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~/.cache/huggingface/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.",
"This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ",
"> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !",
"Looks like the `val/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.",
"Thanks for the reply\r\n\r\nI've created a separate [issue](https://github.com/huggingface/datasets/issues/4982#issue-1375604693) for my problem.",
"> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! opened a PR: https://github.com/huggingface/datasets/pull/4985",
"Hi there @polinaeterna @mariosasko ! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!",
"hi @shaneacton ! could you please show your dataset structure?",
"Hi there @polinaeterna . My local CSV files are stored as follows:\r\nbinding:\r\n---------- tune.csv\r\n---------- public_data:\r\n--------------------------- train.csv\r\n\r\n`self.list_shards(split)` sucessfully finds the relevant data files",
"@shaneacton do you have `validation.csv`/`val.csv`/`valid.csv`/`dev.csv` file in your data folder? I can't find it in the structure you provided",
"@polinaeterna no, does the name of the split need to match the name of the file exactly?\r\n\r\nBut my train file is not actually named 'train.py' its called 'XXXXXXXXX_train_XXXXXXXX.csv'\r\nAnd the code works fine for train, but fails for validation.\r\nDoes the file name need to _contain_ the split name?",
"@shaneacton what files do you expect to be included in \"validation\" split? yes, you should somehow indicate that a file belongs to a certain split - either by including split name in a filename or by putting it into a folder with split name, you can also check out [this documentation page](https://huggingface.co/docs/datasets/main/en/repository_structure) :)\r\nby default all the data goes to a single `train` split",
"@polinaeterna I have specified my train/test/tune files via the `split_to_filepattern` argument when initialising my `FileDataSource` class. This is how `list_shards` is able to find the right files.\r\nAfter your last message, I have tried renaminig my data files to simply `train.csv` and `validation.csv`, however I am still getting the same error: `Unknown split \"validation\". Should be one of ['train']`",
"@polinaeterna I have solved the issue. The solution was to call:\r\n`load_dataset(\"csv\", data_files={split: files}, split=split)`"
] |
https://api.github.com/repos/huggingface/datasets/issues/646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/646/comments | https://api.github.com/repos/huggingface/datasets/issues/646/events | https://github.com/huggingface/datasets/pull/646 | 704,607,371 | MDExOlB1bGxSZXF1ZXN0NDg5NTAyMTM3 | 646 | Fix docs typos | [] | closed | false | null | 0 | 2020-09-18T19:32:27Z | 2020-09-21T16:30:54Z | 2020-09-21T16:14:12Z | null | This PR fixes few typos in the docs and the error in the code snippet in the set_format section in docs/source/torch_tensorflow.rst. `torch.utils.data.Dataloader` expects padded batches so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add the `truncation=True, padding='max_length'` arguments to the tokenizer before passing data to Dataloader, we can easily fix the issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/646/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/646",
"merged_at": "2020-09-21T16:14:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/646"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1393/comments | https://api.github.com/repos/huggingface/datasets/issues/1393/events | https://github.com/huggingface/datasets/pull/1393 | 760,436,267 | MDExOlB1bGxSZXF1ZXN0NTM1MjY4MjUx | 1,393 | Add script_version suggestion when dataset/metric not found | [] | closed | false | null | 0 | 2020-12-09T15:37:38Z | 2020-12-10T18:17:05Z | 2020-12-10T18:17:05Z | null | Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like:
> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1/metrics/blah/blah.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics/blah/blah.py.
> If the dataset was added recently, you may need to pass script_version="master" to find the loading script on the master branch. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1393/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1393.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1393",
"merged_at": "2020-12-10T18:17:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1393.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1393"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1701/comments | https://api.github.com/repos/huggingface/datasets/issues/1701/events | https://github.com/huggingface/datasets/issues/1701 | 781,345,717 | MDU6SXNzdWU3ODEzNDU3MTc= | 1,701 | Some datasets miss dataset_infos.json or dummy_data.zip | [] | closed | false | null | 2 | 2021-01-07T14:17:13Z | 2022-11-04T15:11:16Z | 2022-11-04T15:06:00Z | null | While working on a dataset README generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets are missing a dataset_infos.json:
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip:
```
kor_nli
math_dataset
mlqa
ms_marco
newsgroup
qa4mre
qangaroo
reddit_tifu
super_glue
trivia_qa
web_of_science
wmt14
wmt15
wmt16
wmt17
wmt18
wmt19
xtreme
```
But it seems that some of the latter do have a "dummy" directory.
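For reference, a minimal sketch of how such an audit could be scripted over a local clone of the repository (the `datasets/` path and the exact dummy-data layout are assumptions, not part of the original report):

```python
# Hypothetical sketch: list dataset folders that lack dataset_infos.json or a dummy/ folder.
from pathlib import Path

datasets_root = Path("datasets")  # assumed location of the datasets/ folder in a local clone

missing_infos, missing_dummy = [], []
for dataset_dir in sorted(p for p in datasets_root.iterdir() if p.is_dir()):
    if not (dataset_dir / "dataset_infos.json").exists():
        missing_infos.append(dataset_dir.name)
    if not (dataset_dir / "dummy").is_dir():  # dummy_data.zip normally sits under dummy/<config>/<version>/
        missing_dummy.append(dataset_dir.name)

print("missing dataset_infos.json:", missing_infos)
print("missing dummy data:", missing_dummy)
```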
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1701/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1701/timeline | null | completed | null | null | false | [
"Thanks for reporting.\r\nWe should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n",
"Closing since the dummy data generation is deprecated now (and the issue with missing metadata seems to be addressed)."
] |
https://api.github.com/repos/huggingface/datasets/issues/2319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2319/comments | https://api.github.com/repos/huggingface/datasets/issues/2319/events | https://github.com/huggingface/datasets/issues/2319 | 876,251,376 | MDU6SXNzdWU4NzYyNTEzNzY= | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-05-05T09:22:52Z | 2021-05-05T10:57:31Z | 2021-05-05T10:50:55Z | null | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```
## Expected results
Anything but an error, really.
## Actual results
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
Downloading: 14.7kB [00:00, 4.91MB/s]
Downloading: 3.07MB [00:00, 32.6MB/s]
Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split
for key, record in utils.tqdm(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples
for line in f:
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined>
```
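For context, the decode fails because Python falls back to the platform-dependent default codec (`cp1252` on Windows, as the last frames above show). A minimal illustration of the kind of fix, reading the downloaded text files with an explicit UTF-8 encoding, is sketched below; this is not the actual `oscar.py` code:

```python
# Sketch only: force UTF-8 instead of the platform default when reading the extracted files,
# so Windows (cp1252 by default) behaves like Linux (utf-8 by default).
filepath = "path/to/extracted_oscar_file.txt"  # placeholder path

with open(filepath, encoding="utf-8") as f:
    for line in f:
        ...
```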
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
- Datasets: 1.6.2
- Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
- Platform: Windows-10-10.0.19041-SP0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2319/timeline | null | completed | null | null | false | [
"Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.",
"Awesome, thank you. 😃 ",
"@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`."
] |
https://api.github.com/repos/huggingface/datasets/issues/4240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4240/comments | https://api.github.com/repos/huggingface/datasets/issues/4240/events | https://github.com/huggingface/datasets/pull/4240 | 1,217,287,594 | PR_kwDODunzps423xRl | 4,240 | Fix yield for crd3 | [] | closed | false | null | 2 | 2022-04-27T12:31:36Z | 2022-04-29T12:41:41Z | 2022-04-29T12:41:41Z | null | Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example
Modified the features accordingly
```
"turns": [
{
"names": datasets.features.Sequence(datasets.Value("string")),
"utterances": datasets.features.Sequence(datasets.Value("string")),
"number": datasets.Value("int32"),
}
],
}
```
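For readers, a rough sketch of the described change (not the actual PR diff; the upper-case key names are assumptions based on the CRD3 JSON files referenced in the discussion below):

```python
# Sketch: yield one example per chunk, collecting every turn that belongs to it.
def _examples_from_chunks(chunks):
    for id_, row in enumerate(chunks):
        turns = [
            {
                "names": turn["NAMES"],
                "utterances": turn["UTTERANCES"],
                "number": turn["NUMBER"],
            }
            for turn in row["TURNS"]
        ]
        yield id_, {"turns": turns}
```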
I wasn't able to run `datasets-cli dummy_data datasets` command. Is there a workaround for this? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4240/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4240/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4240.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4240",
"merged_at": "2022-04-29T12:41:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4240.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4240"
} | true | [
"I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str\r\n```\r\n\r\nDo you know what could cause this ? If I understand correctly, `turn` is supposed to be a list of dictionaries right ?",
"> ``` \r\n> \r\n> Do you know what could cause this ? If I understand correctly, turn is supposed to be a list of dictionaries right ?\r\n> ```\r\n\r\nThis is strange. Let me look into this. As per https://github.com/RevanthRameshkumar/CRD3/blob/master/data/aligned%20data/c%3D2/C1E001_2_0.json TURNS is a list of dictionaries. So when we iterate over `row[\"TURNS]\"` each `turn` is essentially a dictionary. Not sure why it's being considered a tuple here."
] |
https://api.github.com/repos/huggingface/datasets/issues/4309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4309/comments | https://api.github.com/repos/huggingface/datasets/issues/4309/events | https://github.com/huggingface/datasets/pull/4309 | 1,231,232,935 | PR_kwDODunzps43lKpm | 4,309 | [WIP] Add TEDLIUM dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | 11 | 2022-05-10T14:12:47Z | 2022-06-17T12:54:40Z | 2022-06-17T11:44:01Z | null | Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3
TODO:
- [x] Port `tedlium.py` from TF datasets using the `convert_dataset.sh` script
- [x] Make `load_dataset` work
- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~
- [ ] ~~Create dummy data for continuous testing~~
- [ ] ~~Dummy data tests~~
- [ ] ~~Real data tests~~
- [ ] Create the metadata JSON
- [ ] Close PR and add directly to the Hub under LIUM org | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4309/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4309",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4309"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium/release1 to /home/sanchitgandhi/cache/tedlium/release1/1.0.1/5a9fcb97b4b52d5a1c9dc7bde4b1d5994cd89c4a3425ea36c789bf6096fee4f0...\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/load.py\", line 1703, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 1240, in _download_and_prepare\r\n raise MissingBeamOptions(\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n `load_dataset('tedlium', 'release1', beam_runner='DirectRunner')`\r\n```\r\nSpecifying the `beam_runner='DirectRunner'` works:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache', beam_runner='DirectRunner')\r\n```",
"Extra Python imports/Linux packages:\r\n```\r\npip install pydub\r\nsudo apt install ffmpeg\r\n```",
"Script heavily inspired by the TF datasets script at: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/tedlium.py\r\n\r\nThe TF datasets script uses the module AudioSegment from the package `pydub` (https://github.com/jiaaro/pydub), which is used to to open the audio files (stored in .sph format):\r\nhttps://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L167-L170\r\nThis package requires the pip install of `pydub` and the system installation of `ffmpeg`: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nThe TF datasets script also uses `_build_pcollection`:\r\nhttps://github.com/huggingface/datasets/blob/8afbbb6fe66b40d05574e2e72e65e974c72ae769/datasets/tedlium/tedlium.py#L200-L206\r\nHowever, I was advised against using `beam` logic. Thus, I have reverted to generating the examples file-by-file: https://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L112-L138\r\n\r\nI am now able to generate examples by running the `load_dataset` command:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\nHere, generating examples is **extremely** slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of paralleling this to make it faster?",
"> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nIt's ok, windows users will have have a bad time but I'm not sure we can do much about it.\r\n\r\n> Here, generating examples is extremely slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of paralleling this to make it faster?\r\n\r\nNot at the moment. For such cases we advise hosting the dataset ourselves in a processed format. The license doesn't allow this since the license is \"NoDerivatives\". Currently the only way to parallelize it is by keeping is as a beam dataset and let users pay Google Dataflow to process it (or use spark or whatever).",
"Thanks for your super speedy reply @lhoestq!\r\n\r\nI’ve uploaded the script and README.md to the org here: https://huggingface.co/datasets/LIUM/tedlium\r\nIs any modification of the script required to be able to use it from the Hub? When I run:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntedlium = load_dataset(\"LIUM/tedlium\", \"release1\") # for Release 1\r\n```\r\nI get the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 load_dataset(\"LIUM/tedlium\", \"release1\")\r\n\r\nFile ~/datasets/src/datasets/load.py:1676, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1673 ignore_verifications = ignore_verifications or save_infos\r\n 1675 # Create a dataset builder\r\n-> 1676 builder_instance = load_dataset_builder(\r\n 1677 path=path,\r\n 1678 name=name,\r\n 1679 data_dir=data_dir,\r\n 1680 data_files=data_files,\r\n 1681 cache_dir=cache_dir,\r\n 1682 features=features,\r\n 1683 download_config=download_config,\r\n 1684 download_mode=download_mode,\r\n 1685 revision=revision,\r\n 1686 use_auth_token=use_auth_token,\r\n 1687 **config_kwargs,\r\n 1688 )\r\n 1690 # Return iterable dataset in case of streaming\r\n 1691 if streaming:\r\n\r\nFile ~/datasets/src/datasets/load.py:1502, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1500 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1501 download_config.use_auth_token = use_auth_token\r\n-> 1502 dataset_module = dataset_module_factory(\r\n 1503 path,\r\n 1504 revision=revision,\r\n 1505 download_config=download_config,\r\n 1506 download_mode=download_mode,\r\n 1507 data_dir=data_dir,\r\n 1508 data_files=data_files,\r\n 1509 )\r\n 1511 # Get dataset builder class from the processing script\r\n 1512 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:1254, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1249 if isinstance(e1, FileNotFoundError):\r\n 1250 raise FileNotFoundError(\r\n 1251 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. 
\"\r\n 1252 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1253 ) from None\r\n-> 1254 raise e1 from None\r\n 1255 else:\r\n 1256 raise FileNotFoundError(\r\n 1257 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory.\"\r\n 1258 )\r\n\r\nFile ~/datasets/src/datasets/load.py:1227, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1225 raise e\r\n 1226 if filename in [sibling.rfilename for sibling in dataset_info.siblings]:\r\n-> 1227 return HubDatasetModuleFactoryWithScript(\r\n 1228 path,\r\n 1229 revision=revision,\r\n 1230 download_config=download_config,\r\n 1231 download_mode=download_mode,\r\n 1232 dynamic_modules_path=dynamic_modules_path,\r\n 1233 ).get_module()\r\n 1234 else:\r\n 1235 return HubDatasetModuleFactoryWithoutScript(\r\n 1236 path,\r\n 1237 revision=revision,\r\n (...)\r\n 1241 download_mode=download_mode,\r\n 1242 ).get_module()\r\n\r\nFile ~/datasets/src/datasets/load.py:940, in HubDatasetModuleFactoryWithScript.get_module(self)\r\n 938 def get_module(self) -> DatasetModule:\r\n 939 # get script and other files\r\n--> 940 local_path = self.download_loading_script()\r\n 941 dataset_infos_path = self.download_dataset_infos_file()\r\n 942 imports = get_imports(local_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:918, in HubDatasetModuleFactoryWithScript.download_loading_script(self)\r\n 917 def download_loading_script(self) -> str:\r\n--> 918 file_path = hf_hub_url(path=self.name, name=self.name.split(\"/\")[1] + \".py\", revision=self.revision)\r\n 919 download_config = self.download_config.copy()\r\n 920 if download_config.download_desc is None:\r\n\r\nTypeError: hf_hub_url() got an unexpected keyword argument 'name'\r\n```\r\n\r\nNote that I am able to load the dataset from the `datasets` repo with the following lines of code:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```",
"What version of `datasets` do you have ?\r\nUpdating `datasets` should fix the error ;)\r\n",
"> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\n`soundfile`, which is a required audio dependency, should also work with `.sph` files, no?",
"> `soundfile`, which is a required audio dependency, should also work with `.sph` files, no?\r\n\r\nAwesome, thanks for the pointer @mariosasko! Switched `pydub` to `soundfile`, and having specifying the `dtype` argument in `soundfile.read` as `np.int16`, the arrays match with those from `pydub` ✅\r\n\r\nI also did some heavy optimising of the script with the processing of the `.stm` and `.sph` files - it now runs 2000x faster than before, so there probably isn't a need to upload the data to the Hub @lhoestq. The total processing time is just ~2mins now 🚀\r\n",
"TEDLIUM completed and uploaded to the HF Hub: https://huggingface.co/datasets/LIUM/tedlium",
"Awesome !"
] |
https://api.github.com/repos/huggingface/datasets/issues/5243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5243/comments | https://api.github.com/repos/huggingface/datasets/issues/5243/events | https://github.com/huggingface/datasets/issues/5243 | 1,449,523,962 | I_kwDODunzps5WZfr6 | 5,243 | Download only split data | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 5 | 2022-11-15T10:15:54Z | 2023-05-02T09:27:51Z | null | null | ### Feature request
Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space as it seems to download the entire dataset, instead of only the part needed.
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test",
cache_dir="cache/path...",
use_auth_token=True,
download_config=DownloadConfig(delete_extracted='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')
)
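Aside (not part of the original request): a possible partial workaround is to stream the split rather than download it, which avoids filling local disk at the cost of iterating over the data. A sketch, assuming the same dataset:

```python
# Sketch of a partial workaround: stream the split instead of materializing it on disk,
# then keep only as many examples as needed.
from datasets import load_dataset

cv_test = load_dataset(
    "mozilla-foundation/common_voice_11_0", "en", split="test",
    streaming=True, use_auth_token=True,
)
subset = list(cv_test.take(1000))  # first 1000 examples only
```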
### Motivation
efficiency improvement
### Your contribution
n/a | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5243/timeline | null | null | null | null | false | [
"Hi @capsabogdan! Unfortunately, it's hard to implement because quite often datasets data is being hosted in a single archive for all splits :( So we have to download the whole archive to split it into splits. This is the case for CommonVoice too. \r\n\r\nHowever, for cases when data is distributed in separate archives ащк different splits I suppose it can (and will) be implemented someday. \r\n\r\n\r\nBtw for quick check of the dataset you can use [streaming](https://huggingface.co/docs/datasets/stream):\r\n```python\r\ncv = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\", split=\"test\", streaming=True)\r\ncv = iter(cv)\r\nprint(next(cv))\r\n\r\n>> {'client_id': 'a07b17f8234ded5e847443ea6f423cef745cbbc7537fb637d58326000aa751e829a21c4fd0a35fc17fb833aa7e95ebafce5efd19beeb8d843887b85e4eb35f5b',\r\n>> 'path': None,\r\n>> 'audio': {'path': 'cv-corpus-11.0-2022-09-21/en/clips/common_voice_en_100363.mp3',\r\n>> 'array': array([ 0.0000000e+00, 1.1748125e-14, 1.5450088e-14, ...,\r\n>> 1.3011958e-06, -6.3548953e-08, -9.9098514e-08], dtype=float32),\r\n>> ...}\r\n\r\n```",
"thank you for the answer but am not sure if this will not be helpful, as we\nneed maybe just 10% of the datasets for some experiment\n\ncan we get just a portion of the dataset with stream?\n\n\nis there really no solution? :(\n\nAm Di., 15. Nov. 2022 um 16:55 Uhr schrieb Polina Kazakova <\n***@***.***>:\n\n> Hi @capsabogdan <https://github.com/capsabogdan>! Unfortunately, it's\n> hard to implement because quite often datasets data is being hosted in a\n> single archive for all splits :( So we have to download the whole archive\n> to split it into splits. This is the case for CommonVoice too.\n>\n> However, for cases when data is distributed in separate archives in\n> different splits I suppose it can be implemented someday.\n>\n> Btw for quick check of the dataset you can use streaming\n> <https://huggingface.co/docs/datasets/stream>:\n>\n> cv = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\", split=\"test\", streaming=True)cv = iter(cv)print(next(cv))\n> >> {'client_id': 'a07b17f8234ded5e847443ea6f423cef745cbbc7537fb637d58326000aa751e829a21c4fd0a35fc17fb833aa7e95ebafce5efd19beeb8d843887b85e4eb35f5b',>> 'path': None,>> 'audio': {'path': 'cv-corpus-11.0-2022-09-21/en/clips/common_voice_en_100363.mp3',>> 'array': array([ 0.0000000e+00, 1.1748125e-14, 1.5450088e-14, ...,>> 1.3011958e-06, -6.3548953e-08, -9.9098514e-08], dtype=float32),>> ...}\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5243#issuecomment-1315512887>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOC3JYRCTH54OBRUJULWIOW6PANCNFSM6AAAAAASAYO2LY>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"maybe it would be nice if you guys ould do some sort of shard before\nloading the dataset, so users can download just chunks of data :)\n\nI think this would be very helpful\n\nAm Di., 15. Nov. 2022 um 19:24 Uhr schrieb Bogdan Capsa <\n***@***.***>:\n\n> thank you for the answer but am not sure if this will not be helpful, as\n> we need maybe just 10% of the datasets for some experiment\n>\n> can we get just a portion of the dataset with stream?\n>\n>\n> is there really no solution? :(\n>\n> Am Di., 15. Nov. 2022 um 16:55 Uhr schrieb Polina Kazakova <\n> ***@***.***>:\n>\n>> Hi @capsabogdan <https://github.com/capsabogdan>! Unfortunately, it's\n>> hard to implement because quite often datasets data is being hosted in a\n>> single archive for all splits :( So we have to download the whole archive\n>> to split it into splits. This is the case for CommonVoice too.\n>>\n>> However, for cases when data is distributed in separate archives in\n>> different splits I suppose it can be implemented someday.\n>>\n>> Btw for quick check of the dataset you can use streaming\n>> <https://huggingface.co/docs/datasets/stream>:\n>>\n>> cv = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\", split=\"test\", streaming=True)cv = iter(cv)print(next(cv))\n>> >> {'client_id': 'a07b17f8234ded5e847443ea6f423cef745cbbc7537fb637d58326000aa751e829a21c4fd0a35fc17fb833aa7e95ebafce5efd19beeb8d843887b85e4eb35f5b',>> 'path': None,>> 'audio': {'path': 'cv-corpus-11.0-2022-09-21/en/clips/common_voice_en_100363.mp3',>> 'array': array([ 0.0000000e+00, 1.1748125e-14, 1.5450088e-14, ...,>> 1.3011958e-06, -6.3548953e-08, -9.9098514e-08], dtype=float32),>> ...}\n>>\n>> —\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/datasets/issues/5243#issuecomment-1315512887>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ALSIFOC3JYRCTH54OBRUJULWIOW6PANCNFSM6AAAAAASAYO2LY>\n>> .\n>> You are receiving this because you were mentioned.Message ID:\n>> ***@***.***>\n>>\n>\n",
"+1 on this feature request - I am running into the same problem, where I only need the test set for a dataset that has a huge training set",
"Hey, I'm also interested in that as a feature. I'm having the same problem with Common Voice 13.0. The dataset is super big but I only want the test data to benchmark multilingual models, but I don't have much Terabytes to store all the dataset..."
] |
https://api.github.com/repos/huggingface/datasets/issues/1015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1015/comments | https://api.github.com/repos/huggingface/datasets/issues/1015/events | https://github.com/huggingface/datasets/pull/1015 | 755,508,841 | MDExOlB1bGxSZXF1ZXN0NTMxMjA2MTgy | 1,015 | add hard dataset | [] | closed | false | null | 1 | 2020-12-02T18:27:36Z | 2020-12-03T15:03:54Z | 2020-12-03T15:03:54Z | null | Hotel Reviews in Arabic language. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1015/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1015",
"merged_at": "2020-12-03T15:03:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1015"
} | true | [
"Thanks @sumanthd17 that fixed it. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1898/comments | https://api.github.com/repos/huggingface/datasets/issues/1898/events | https://github.com/huggingface/datasets/issues/1898 | 810,157,251 | MDU6SXNzdWU4MTAxNTcyNTE= | 1,898 | ALT dataset has repeating instances in all splits | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 4 | 2021-02-17T12:51:42Z | 2021-02-19T06:18:46Z | 2021-02-19T06:18:46Z | null | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized, and has all splits.
Would be great if this could be fixed :)
Added a snapshot of the contents from the `explore-dataset` feature, for quick reference.
![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1898/timeline | null | completed | null | null | false | [
"Thanks for reporting. This looks like a very bad issue. I'm looking into it",
"I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch",
"Thanks!!! works perfectly in the bleading edge master version",
"Closed by #1899"
] |
https://api.github.com/repos/huggingface/datasets/issues/4787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4787/comments | https://api.github.com/repos/huggingface/datasets/issues/4787/events | https://github.com/huggingface/datasets/issues/4787 | 1,328,243,911 | I_kwDODunzps5PK2TH | 4,787 | NonMatchingChecksumError in mbpp dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-08-04T08:15:51Z | 2022-08-04T17:21:01Z | 2022-08-04T17:21:01Z | null | ## Describe the bug
As reported on the Hub in [Fix Checksum Mismatch](https://huggingface.co/datasets/mbpp/discussions/1), there is a `NonMatchingChecksumError` when loading the mbpp dataset.
## Steps to reproduce the bug
```python
ds = load_dataset("mbpp", "full")
```
## Expected results
Loading of the dataset without any exception raised.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-1-a3fbdd3ed82e> in <module>
----> 1 ds = load_dataset("mbpp", "full")
.../huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1791
1792 # Download and prepare data
-> 1793 builder_instance.download_and_prepare(
1794 download_config=download_config,
1795 download_mode=download_mode,
.../huggingface/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
--> 775 verify_checksums(
776 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
777 )
.../huggingface/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://raw.githubusercontent.com/google-research/google-research/master/mbpp/mbpp.jsonl']
```
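Until the recorded checksums are regenerated, a possible workaround (a sketch, not a fix of the underlying metadata) is to skip the verification step; the `ignore_verifications` argument is visible in the `load_dataset` signature shown in the traceback above:

```python
# Possible workaround sketch: skip checksum verification while the recorded checksums are stale.
from datasets import load_dataset

ds = load_dataset("mbpp", "full", ignore_verifications=True)
```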
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4787/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4014/comments | https://api.github.com/repos/huggingface/datasets/issues/4014/events | https://github.com/huggingface/datasets/pull/4014 | 1,180,481,229 | PR_kwDODunzps41AGBu | 4,014 | Support streaming id_clickbait dataset | [] | closed | false | null | 1 | 2022-03-25T08:18:28Z | 2022-03-25T08:58:31Z | 2022-03-25T08:53:32Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4014/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4014.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4014",
"merged_at": "2022-03-25T08:53:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4014.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4014"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2369/comments | https://api.github.com/repos/huggingface/datasets/issues/2369/events | https://github.com/huggingface/datasets/pull/2369 | 893,554,153 | MDExOlB1bGxSZXF1ZXN0NjQ2MDQ5NDM1 | 2,369 | correct labels of conll2003 | [] | closed | false | null | 0 | 2021-05-17T17:37:54Z | 2021-05-18T08:27:42Z | 2021-05-18T08:27:42Z | null | # What does this PR
It fixes/extends the `ner_tags` for conll2003 so that all of the dataset's tag types are included.
Paper reference https://arxiv.org/pdf/cs/0306050v1.pdf
Model reference https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
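A quick way to inspect the resulting label set after this change (a sketch using the standard `datasets` features API):

```python
# Sketch: print the ner_tags label names exposed by the loaded dataset.
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
print(ds.features["ner_tags"].feature.names)
# expected to list "O" plus the B-/I- variants of PER, ORG, LOC and MISC
```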
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2369/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2369",
"merged_at": "2021-05-18T08:27:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2369"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2924/comments | https://api.github.com/repos/huggingface/datasets/issues/2924/events | https://github.com/huggingface/datasets/issues/2924 | 997,378,113 | I_kwDODunzps47cshB | 2,924 | "File name too long" error for file locks | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 10 | 2021-09-15T18:16:50Z | 2023-03-28T06:50:18Z | 2021-10-29T09:42:24Z | null | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
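For reference, the relevant limit is the maximum file name length of the filesystem hosting the cache (encrypted home directories, for example, often cap it well below 255 characters). It can be checked directly, as also suggested later in this thread:

```python
# Check the maximum file name length allowed by the filesystem hosting the datasets cache.
import os

import datasets

print(os.statvfs(datasets.config.HF_DATASETS_CACHE).f_namemax)
```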
| {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2924/timeline | null | completed | null | null | false | [
"Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135",
"Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info.",
"Snap, encountered when trying to run [this example from PyTorch Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html):\r\n\r\n```py\r\nimport torch\r\n\r\nimport flash\r\nfrom flash.audio import SpeechRecognition, SpeechRecognitionData\r\nfrom flash.core.data.utils import download_data\r\n\r\n# 1. Create the DataModule\r\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/timit_data.zip\", \"./data\")\r\n\r\ndatamodule = SpeechRecognitionData.from_json(\r\n input_fields=\"file\",\r\n target_fields=\"text\",\r\n train_file=\"data/timit/train.json\",\r\n test_file=\"data/timit/test.json\",\r\n)\r\n```\r\n\r\nGave this traceback:\r\n\r\n```py\r\nTraceback (most recent call last):\r\n File \"lf_ft.py\", line 10, in <module>\r\n datamodule = SpeechRecognitionData.from_json(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 1005, in from_json\r\n return cls.from_data_source(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 571, in from_data_source\r\n train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 307, in to_datasets\r\n train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 344, in generate_dataset\r\n data = load_data(data, mock_dataset)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py\", line 103, in load_data\r\n dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1599, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1457, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py\", line 285, in __init__\r\n with FileLock(lock_path):\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 403, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'\r\n```\r\n\r\nMy home directory is encrypted, therefore the maximum length is 143 ([source 1](https://github.com/ray-project/ray/issues/1463#issuecomment-425674521), [source 2](https://stackoverflow.com/a/6571568/2668831))\r\n\r\nFrom what I've read I think the error is in reference to the file name (just the final part of the path) which is 145 chars long:\r\n\r\n```py\r\n>>> 
len(\"_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock\")\r\n145\r\n```\r\n\r\nI also have a file in this directory (i.e. whose length is not a problem):\r\n\r\n```py\r\n>>> len(\"_home_louis_.cache_huggingface_datasets_librispeech_asr_clean_2.1.0_468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1.lock\")\r\n137\r\n```",
"Perhaps this could be exposed as a config setting so you could change it manually?\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135\r\n\r\nRather than hard-code 255, default it to 255, and allow it to be changed, the same way is done for `datasets.config.IN_MEMORY_MAX_SIZE`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L171-L173\r\n\r\nIn fact there already appears to be an existing variable to do so:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L187\r\n\r\nIt's used here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/efe89edd36e4ffa562fc3eebaf07a5fec26e6dac/src/datasets/builder.py#L163-L165\r\n\r\nPerhaps it could be set based on a test (trying to create a 255 char length named lock file and seeing if it fails)",
"Just fixed it, sending a PR :smile:",
"Hi @lmmx @gar1t ,\r\n\r\nit would be helpful if you could run the following code and copy-paste the output here:\r\n```python\r\nimport datasets\r\nimport os\r\nos.statvfs(datasets.config.HF_DATASETS_CACHE)\r\n```",
"`os.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=240046344, f_bfree=96427610, f_bavail=84216487, f_files=61038592, f_ffree=58216027, f_favail=58216027, f_flag=4102, f_namemax=143)`",
"Hi @lmmx,\r\n\r\nThanks for providing the result of the command. I've opened a PR, and it would be great if you could verify that the fix works on your system. To install the version of the datasets with the fix, please run the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-2924\r\n```\r\n\r\nBtw, I saw your PR, and I appreciate your effort. However, my approach is a bit simpler for the end-user, so that's why I decided to fix the issue myself.",
"No problem Mario I didn't know that was where that value was recorded so I learnt something :smiley: I just wanted to get a local version working, of course you should implement whatever fix is best for HF. Yes can confirm this fixes it too. Thanks!",
"Hello @mariosasko \r\n\r\nHas this fix shown up in the 2.10.1 version of huggingface datasets?"
] |
https://api.github.com/repos/huggingface/datasets/issues/4254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4254/comments | https://api.github.com/repos/huggingface/datasets/issues/4254/events | https://github.com/huggingface/datasets/pull/4254 | 1,220,204,395 | PR_kwDODunzps43Bwnj | 4,254 | Replace data URL in SAMSum dataset and support streaming | [] | closed | false | null | 1 | 2022-04-29T08:21:43Z | 2022-05-06T08:38:16Z | 2022-04-29T16:26:09Z | null | This PR replaces data URL in SAMSum dataset:
- original host (arxiv.org) does not allow HTTP Range requests
- we have hosted the data on the Hub (license: CC BY-NC-ND 4.0)
Moreover, it implements support for streaming.
Fix #4146.
Related to: #4236.
CC: @severo | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4254/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4254",
"merged_at": "2022-04-29T16:26:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4254"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4935/comments | https://api.github.com/repos/huggingface/datasets/issues/4935/events | https://github.com/huggingface/datasets/issues/4935 | 1,363,226,736 | I_kwDODunzps5RQTBw | 4,935 | Dataset Viewer issue for ubuntu_dialogs_corpus | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-09-06T12:41:50Z | 2022-09-06T12:51:25Z | 2022-09-06T12:51:25Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4935/timeline | null | completed | null | null | false | [
"The dataset maintainers (https://huggingface.co/datasets/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download), and the dataset viewer respects this.\r\nWe will try to improve the error display though. Thanks for reporting."
] |
https://api.github.com/repos/huggingface/datasets/issues/3785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3785/comments | https://api.github.com/repos/huggingface/datasets/issues/3785/events | https://github.com/huggingface/datasets/pull/3785 | 1,150,069,801 | PR_kwDODunzps4zciES | 3,785 | Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset) | [] | closed | false | null | 8 | 2022-02-25T05:48:57Z | 2022-03-03T16:43:47Z | 2022-03-03T14:03:37Z | null | This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets.
So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ
The new link now looks like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t
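As a rough sketch of the change (the helper below is made up for illustration and is not part of this commit):
```python
# Hypothetical helper: append the parameter that skips Google Drive's
# "can't scan this file for viruses" confirmation page.
def add_confirm_param(url: str) -> str:
    return url + ("&" if "?" in url else "?") + "confirm=t"

print(add_confirm_param(
    "https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ"
))
```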
Fixes #3784 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3785/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3785/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3785",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3785"
} | true | [
"Thank you, @albertvillanova!",
"Got it. Thanks for explaining this, @albertvillanova!\r\n\r\n> On the other hand, the tests are not passing because the dummy data should also be fixed. Once done, this PR will be able to be merged into master.\r\n\r\nWill do this 👍",
"Hi ! I think we need to fix the issue for every dataset. This can be done simply by fixing how we handle Google Drive links, see my comment https://github.com/huggingface/datasets/pull/3775#issuecomment-1050970157",
"Hi @lhoestq! I think @albertvillanova has already fixed this in #3787",
"Cool ! I missed this one :) thanks",
"No problem!",
"Hi, @AngadSethi, I think that once:\r\n- #3787 \r\n\r\nwas merged, issue:\r\n- #3784 \r\n\r\nwas also fixed.\r\n\r\nTherefore, I think this PR is no longer necessary. I'm closing it. Let me know if you agree.",
"Yes, absolutely @albertvillanova! I agree :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2630/comments | https://api.github.com/repos/huggingface/datasets/issues/2630/events | https://github.com/huggingface/datasets/issues/2630 | 942,102,956 | MDU6SXNzdWU5NDIxMDI5NTY= | 2,630 | Progress bars are not properly rendered in Jupyter notebook | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-12T14:07:13Z | 2022-02-03T15:55:33Z | 2022-02-03T15:55:33Z | null | ## Describe the bug
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress bars.
## Actual results
Simple plain progress bars.
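For context, the difference presumably comes down to which tqdm flavour gets used (plain tqdm below, not `datasets` internals; this is only an assumption about the cause):
```python
from tqdm import tqdm                     # plain text bar, what currently shows up
from tqdm.notebook import tqdm as ntqdm   # Jupyter widget bar (needs ipywidgets)

for _ in tqdm(range(3)):
    pass
for _ in ntqdm(range(3)):
    pass
```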
cc: Reported by @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2630/timeline | null | completed | null | null | false | [
"To add my experience when trying to debug this issue:\r\n\r\nSeems like previously the workaround given [here](https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308) worked around this issue. But with the latest version of jupyter/tqdm I still get terminal warnings that IPython tried to send a message from a forked process.",
"Hi @mludv, thanks for the hint!!! :) \r\n\r\nWe will definitely take it into account to try to fix this issue... It seems somehow related to `multiprocessing` and `tqdm`..."
] |
https://api.github.com/repos/huggingface/datasets/issues/4083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4083/comments | https://api.github.com/repos/huggingface/datasets/issues/4083/events | https://github.com/huggingface/datasets/pull/4083 | 1,190,025,878 | PR_kwDODunzps41gEbu | 4,083 | Add SacreBLEU Metric Card | [] | closed | false | null | 1 | 2022-04-01T16:24:56Z | 2022-04-12T20:45:00Z | 2022-04-12T20:38:40Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4083/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4083",
"merged_at": "2022-04-12T20:38:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4083"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4853/comments | https://api.github.com/repos/huggingface/datasets/issues/4853/events | https://github.com/huggingface/datasets/pull/4853 | 1,339,456,490 | PR_kwDODunzps49NFNL | 4,853 | Fix bug and checksums in exams dataset | [] | closed | false | null | 1 | 2022-08-15T20:17:57Z | 2022-08-16T06:43:57Z | 2022-08-16T06:29:06Z | null | Fix #4852. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4853/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4853",
"merged_at": "2022-08-16T06:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4853"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2075/comments | https://api.github.com/repos/huggingface/datasets/issues/2075/events | https://github.com/huggingface/datasets/issues/2075 | 834,301,246 | MDU6SXNzdWU4MzQzMDEyNDY= | 2,075 | ConnectionError: Couldn't reach common_voice.py | [] | closed | false | null | 2 | 2021-03-18T01:19:06Z | 2021-03-20T10:29:41Z | 2021-03-20T10:29:41Z | null | When I run:
```python
from datasets import load_dataset, load_metric

common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
```
Got:
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py
```
Version: 1.4.1
Thanks! @lhoestq @LysandreJik @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2075/timeline | null | completed | null | null | false | [
"Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?",
"@albertvillanova Thanks! It works well now. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1421/comments | https://api.github.com/repos/huggingface/datasets/issues/1421/events | https://github.com/huggingface/datasets/pull/1421 | 760,706,851 | MDExOlB1bGxSZXF1ZXN0NTM1NDkzMzU4 | 1,421 | adding fake-news-english-2 | [] | closed | false | null | 0 | 2020-12-09T22:05:13Z | 2020-12-13T00:48:49Z | 2020-12-13T00:48:49Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1421/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1421",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1421"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/4092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4092/comments | https://api.github.com/repos/huggingface/datasets/issues/4092/events | https://github.com/huggingface/datasets/pull/4092 | 1,192,499,903 | PR_kwDODunzps41n40R | 4,092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022 | [] | closed | false | null | 2 | 2022-04-05T01:39:45Z | 2022-04-08T12:35:41Z | 2022-04-08T12:29:31Z | null | Fixes #4048 by running `dataset-cli test` to reprocess data and regenerate metadata. Additionally I've updated the README to include up-to-date counts for the subsets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4092/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4092.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4092",
"merged_at": "2022-04-08T12:29:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4092.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4092"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"cc: @albertvillanova just FYI"
] |
https://api.github.com/repos/huggingface/datasets/issues/1411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1411/comments | https://api.github.com/repos/huggingface/datasets/issues/1411/events | https://github.com/huggingface/datasets/pull/1411 | 760,606,290 | MDExOlB1bGxSZXF1ZXN0NTM1NDEwNjU3 | 1,411 | 2 typos | [] | closed | false | null | 0 | 2020-12-09T19:24:34Z | 2020-12-11T10:39:05Z | 2020-12-11T10:39:05Z | null | Corrected 2 typos | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1411/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1411",
"merged_at": "2020-12-11T10:39:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1411"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4453/comments | https://api.github.com/repos/huggingface/datasets/issues/4453/events | https://github.com/huggingface/datasets/issues/4453 | 1,262,674,105 | I_kwDODunzps5LQuC5 | 4,453 | Dataset Viewer issue for Yaxin/SemEval2015 | [] | closed | false | null | 3 | 2022-06-07T03:30:08Z | 2022-06-09T08:34:16Z | 2022-06-09T08:34:16Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4453/timeline | null | completed | null | null | false | [
"I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```",
"`xml.dom.minidom.parse` is not supported in streaming mode. I opened a PR here to fix it:\r\nhttps://huggingface.co/datasets/Yaxin/SemEval2015/discussions/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works !",
"Additionally, I'm also patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`."
] |
https://api.github.com/repos/huggingface/datasets/issues/4460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4460/comments | https://api.github.com/repos/huggingface/datasets/issues/4460/events | https://github.com/huggingface/datasets/pull/4460 | 1,264,644,205 | PR_kwDODunzps45UHIs | 4,460 | Drop Python 3.6 support | [] | closed | false | null | 5 | 2022-06-08T12:10:18Z | 2022-07-26T19:16:39Z | 2022-07-26T19:04:21Z | null | Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4460/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4460.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4460",
"merged_at": "2022-07-26T19:04:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4460.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4460"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I've disabled the `test_dummy_dataset_serialize_s3` tests in the Linux CI to avoid the failures (these tests only fail on Windows in 3.6). These failures are unrelated to this PR's changes, and I would like to address this in a new PR.",
"[This comment](https://github.com/pytorch/audio/issues/2363#issuecomment-1179089175) explains the issue with MP3 decoding in `torchaudio` in the latest release (supports Python 3.7+). I fixed CI by pinning `torchaudio` to `<0.12.0`. Another way to fix this issue is by installing `ffmpeg` with conda or using the unofficial GH action. But I don't think it's worth making CI more complex, considering we can wait for the soundfile release, which should bring MP3 decoding, and drop the `torchaudio` dependency then.",
"Yay for dropping Python 3.6!",
"I think we can merge in this state. Also, if an env has Python version < 3.7 installed, we raise a warning, so I don't think we even need to create (and pin) an issue to notify the contributors of this change."
] |
https://api.github.com/repos/huggingface/datasets/issues/1808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1808/comments | https://api.github.com/repos/huggingface/datasets/issues/1808/events | https://github.com/huggingface/datasets/issues/1808 | 798,879,180 | MDU6SXNzdWU3OTg4NzkxODA= | 1,808 | writing Datasets in a human readable format | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | 3 | 2021-02-02T02:55:40Z | 2022-06-01T15:38:13Z | 2022-06-01T15:38:13Z | null | Hi
I see there is a `save_to_disk` function to save data, but it does not produce a human-readable format. Is there a way I could save a `Dataset` object in a human-readable format, such as JSON, to a file? Thanks @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1808/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1808/timeline | null | completed | null | null | false | [
"AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\r\nYou can convert the Arrow table to a pandas dataframe to save the data as csv as follows:\r\n```python\r\narrow_table = dataset.data\r\ndataframe = arrow_table.to_pandas()\r\ndataframe.to_csv(\"/path/to/file.csv\")\r\n```\r\n\r\nSimilarly, you can convert the dataset to a Python dict and save it as JSON:\r\n```python\r\nimport json\r\narrow_table = dataset.data\r\npy_dict = arrow_table.to_pydict()\r\nwith open(\"/path/to/file.json\", \"w+\") as f:\r\n json.dump(py_dict, f)\r\n```",
"Indeed this works as long as you have enough memory.\r\nIt would be amazing to have export options like csv, json etc. !\r\n\r\nIt should be doable to implement something that iterates through the dataset batch by batch to write to csv for example.\r\nThere is already an `export` method but currently the only export type that is supported is `tfrecords`.",
"Hi! `datasets` now supports `Dataset.to_csv` and `Dataset.to_json` for saving data in a human readable format."
] |
https://api.github.com/repos/huggingface/datasets/issues/99 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/99/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/99/comments | https://api.github.com/repos/huggingface/datasets/issues/99/events | https://github.com/huggingface/datasets/pull/99 | 618,026,700 | MDExOlB1bGxSZXF1ZXN0NDE3ODMxNjky | 99 | [Cmrc 2018] fix cmrc2018 | [] | closed | false | null | 0 | 2020-05-14T08:22:03Z | 2020-05-14T08:49:42Z | 2020-05-14T08:49:41Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/99/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/99/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/99.diff",
"html_url": "https://github.com/huggingface/datasets/pull/99",
"merged_at": "2020-05-14T08:49:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/99.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/99"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3427/comments | https://api.github.com/repos/huggingface/datasets/issues/3427/events | https://github.com/huggingface/datasets/pull/3427 | 1,078,782,159 | PR_kwDODunzps4vxb_y | 3,427 | Add The Pile Enron Emails subset | [] | closed | false | null | 0 | 2021-12-13T17:14:16Z | 2021-12-14T17:30:59Z | 2021-12-14T17:30:57Z | null | Add:
- Enron Emails subset of The Pile: "enron_emails" config
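For reference, a minimal usage sketch (the config name is the one added here; the dataset id and everything else are assumptions):
```python
from datasets import load_dataset

# Load only the Enron Emails subset of The Pile.
enron = load_dataset("the_pile", "enron_emails")
print(enron)
```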
Close bigscience-workshop/data_tooling#310.
CC: @StellaAthena | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3427/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3427",
"merged_at": "2021-12-14T17:30:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3427"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5714/comments | https://api.github.com/repos/huggingface/datasets/issues/5714/events | https://github.com/huggingface/datasets/pull/5714 | 1,657,388,033 | PR_kwDODunzps5NxIOc | 5,714 | Fix xnumpy_load for .npz files | [] | closed | false | null | 2 | 2023-04-06T13:01:45Z | 2023-04-07T09:23:54Z | 2023-04-07T09:16:57Z | null | PR:
- #5626
implemented support for streaming `.npy` files by using `numpy.load`.
However, it introduced a bug when used with `.npz` files, within a context manager:
```
ValueError: seek of closed file
```
or in streaming mode:
```
ValueError: I/O operation on closed file.
```
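For context, a minimal repro of why `.npz` behaves differently from `.npy` here (plain NumPy, independent of `datasets`; the file name is made up):
```python
import numpy as np

np.savez("example.npz", a=np.arange(3))

with open("example.npz", "rb") as f:
    arrs = np.load(f)  # NpzFile is lazy: members are only read on access

arrs["a"]  # raises ValueError, the underlying file was already closed
```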
This PR fixes the bug and tests for both `.npy` and `.npz` files.
Fix #5711. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5714/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5714",
"merged_at": "2023-04-07T09:16:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5714"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.004406 / 0.011008 (-0.006602) | 0.097136 / 0.038508 (0.058628) | 0.027711 / 0.023109 (0.004601) | 0.303092 / 0.275898 (0.027194) | 0.336804 / 0.323480 (0.013324) | 0.004838 / 0.007986 (-0.003148) | 0.004533 / 0.004328 (0.000204) | 0.075062 / 0.004250 (0.070812) | 0.035105 / 0.037052 (-0.001947) | 0.310245 / 0.258489 (0.051756) | 0.347086 / 0.293841 (0.053245) | 0.030867 / 0.128546 (-0.097679) | 0.011436 / 0.075646 (-0.064211) | 0.320728 / 0.419271 (-0.098544) | 0.042303 / 0.043533 (-0.001230) | 0.308177 / 0.255139 (0.053038) | 0.333673 / 0.283200 (0.050473) | 0.084736 / 0.141683 (-0.056947) | 1.477391 / 1.452155 (0.025237) | 1.530399 / 1.492716 (0.037682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212698 / 0.018006 (0.194692) | 0.409098 / 0.000490 (0.408608) | 0.004202 / 0.000200 (0.004002) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022725 / 0.037411 (-0.014686) | 0.095866 / 0.014526 (0.081340) | 0.104153 / 0.176557 (-0.072404) | 0.162964 / 0.737135 (-0.574171) | 0.106505 / 0.296338 (-0.189834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431336 / 0.215209 (0.216127) | 4.283290 / 2.077655 (2.205635) | 1.982418 / 1.504120 (0.478298) | 1.762104 / 1.541195 (0.220909) | 1.807528 / 1.468490 
(0.339038) | 0.695507 / 4.584777 (-3.889270) | 3.376299 / 3.745712 (-0.369413) | 1.856642 / 5.269862 (-3.413219) | 1.154258 / 4.565676 (-3.411419) | 0.082749 / 0.424275 (-0.341526) | 0.012289 / 0.007607 (0.004682) | 0.525842 / 0.226044 (0.299798) | 5.285764 / 2.268929 (3.016835) | 2.389926 / 55.444624 (-53.054698) | 2.021830 / 6.876477 (-4.854646) | 2.107460 / 2.142072 (-0.034612) | 0.808118 / 4.805227 (-3.997109) | 0.150791 / 6.500664 (-6.349873) | 0.065825 / 0.075469 (-0.009644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206939 / 1.841788 (-0.634849) | 13.795902 / 8.074308 (5.721594) | 14.107950 / 10.191392 (3.916558) | 0.144300 / 0.680424 (-0.536124) | 0.016478 / 0.534201 (-0.517723) | 0.379395 / 0.579283 (-0.199888) | 0.388437 / 0.434364 (-0.045927) | 0.451443 / 0.540337 (-0.088894) | 0.523142 / 1.386936 (-0.863794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006503 / 0.011353 (-0.004850) | 0.004578 / 0.011008 (-0.006430) | 0.076278 / 0.038508 (0.037770) | 0.028052 / 0.023109 (0.004943) | 0.337873 / 0.275898 (0.061975) | 0.371368 / 0.323480 (0.047888) | 0.005086 / 0.007986 (-0.002899) | 0.003354 / 0.004328 (-0.000975) | 0.076876 / 0.004250 (0.072625) | 0.039146 / 0.037052 (0.002093) | 0.340299 / 0.258489 (0.081810) | 0.381209 / 0.293841 (0.087368) | 0.031771 / 0.128546 (-0.096775) | 0.011670 / 0.075646 (-0.063976) | 0.085156 / 0.419271 (-0.334116) | 0.041990 / 0.043533 (-0.001543) | 0.338644 / 0.255139 (0.083505) | 0.362461 / 0.283200 (0.079262) | 0.089772 / 0.141683 (-0.051911) | 1.480341 / 1.452155 (0.028187) | 1.562815 / 1.492716 (0.070099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205700 / 0.018006 (0.187694) | 0.402206 / 0.000490 (0.401716) | 0.001212 / 0.000200 (0.001012) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025172 / 0.037411 (-0.012240) | 0.100959 / 0.014526 (0.086433) | 0.108464 / 0.176557 (-0.068093) | 0.161321 / 0.737135 (-0.575814) | 0.114245 / 0.296338 (-0.182093) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437425 / 0.215209 (0.222216) | 4.362212 / 2.077655 (2.284557) | 2.068815 / 1.504120 (0.564695) | 1.864089 / 1.541195 (0.322894) | 1.909038 / 1.468490 (0.440548) | 0.696097 / 4.584777 (-3.888680) | 3.358628 / 3.745712 (-0.387084) | 2.999085 / 5.269862 (-2.270777) | 1.533917 / 4.565676 (-3.031760) | 0.083010 / 0.424275 (-0.341266) | 0.012372 / 0.007607 (0.004765) | 0.539926 / 0.226044 (0.313882) | 5.438326 / 2.268929 (3.169397) | 2.498581 / 55.444624 (-52.946043) | 2.153359 / 6.876477 (-4.723117) | 2.177891 / 2.142072 (0.035819) | 0.803169 / 4.805227 (-4.002059) | 0.151079 / 6.500664 (-6.349585) | 0.065981 / 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336682 / 1.841788 (-0.505106) | 14.133055 / 8.074308 (6.058747) | 14.033972 / 10.191392 (3.842580) | 0.152109 / 0.680424 (-0.528315) | 0.016475 / 0.534201 (-0.517726) | 0.387808 / 0.579283 (-0.191475) | 0.378347 / 0.434364 (-0.056017) | 0.484732 / 0.540337 (-0.055606) | 0.569907 / 1.386936 (-0.817029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1c4ec00511868bd881e84a6f7e0333648d833b8e \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/147/comments | https://api.github.com/repos/huggingface/datasets/issues/147/events | https://github.com/huggingface/datasets/issues/147 | 619,581,907 | MDU6SXNzdWU2MTk1ODE5MDc= | 147 | Error with sklearn train_test_split | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2020-05-17T00:28:24Z | 2020-06-18T16:23:23Z | 2020-06-18T16:23:23Z | null | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)
```
throws:
```
ValueError: Can only get row(s) (int or slice) or columns (string).
```
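A hedged sketch of a workaround with the splitter that was later built into the library (assumes a recent `datasets` release rather than the `nlp` version above):
```python
from datasets import load_dataset

data = load_dataset("imdb")
splits = data["train"].train_test_split(test_size=0.5, seed=42)
f_half, s_half = splits["train"], splits["test"]
```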
It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/147/timeline | null | completed | null | null | false | [
"Indeed. Probably we will want to have a similar method directly in the library",
"Related: #166 "
] |
https://api.github.com/repos/huggingface/datasets/issues/2122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2122/comments | https://api.github.com/repos/huggingface/datasets/issues/2122/events | https://github.com/huggingface/datasets/pull/2122 | 842,194,588 | MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0 | 2,122 | Fast table queries with interpolation search | [] | closed | false | null | 0 | 2021-03-26T18:09:20Z | 2021-08-04T18:11:59Z | 2021-04-06T14:33:01Z | null | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search, which is quite effective since datasets usually satisfy its assumption of evenly distributed chunks (the default chunk size is fixed).
## Benchmark
Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows):
for the current implementation
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.018ms
Avg access time key=74004227 : 0.215ms
Avg access time key=range(74003204, 74004228) : 1.416ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.187ms
Avg access time key=74004227 : 6.642ms
Avg access time key=range(74003204, 74004228) : 90.941ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms
```
for the new one using interpolation search:
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.076ms
Avg access time key=74004227 : 0.056ms
Avg access time key=range(74003204, 74004228) : 1.807ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.061ms
Avg access time key=74004227 : 0.058ms
Avg access time key=range(74003204, 74004228) : 22.166ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms
```
The RandIter class is just an iterable of 1024 random indices from 0 to 74004228.
Here is also a plot showing the speed improvement depending on the dataset size:
![image](https://user-images.githubusercontent.com/42851186/112673587-32335c80-8e65-11eb-9a0c-58ad774abaec.png)
## Implementation details:
- `datasets.table.Table` objects implement interpolation search for the `slice` method
- The interpolation search requires storing the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized (a simplified sketch of the search is shown after this list).
- `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search
- `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary.
- Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table`
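A simplified, standalone sketch of the idea (illustrative only, not the actual `datasets.table.Table` code): given the cumulative chunk offsets, guess the chunk from where the queried index falls proportionally, then narrow the window.
```python
def find_chunk(offsets, i):
    """Return j such that offsets[j] <= i < offsets[j + 1].

    `offsets` are cumulative chunk lengths, e.g. [0, 1000, 2000, 3000].
    Because chunks are roughly the same size, the first proportional guess
    is usually right, so lookups are ~O(1) instead of O(num_chunks).
    """
    lo, hi = 0, len(offsets) - 2
    while lo <= hi:
        # proportional guess of the chunk index inside the current window
        j = lo + (i - offsets[lo]) * (hi - lo + 1) // (offsets[hi + 1] - offsets[lo])
        if offsets[j] <= i < offsets[j + 1]:
            return j
        elif i < offsets[j]:
            hi = j - 1
        else:
            lo = j + 1
    raise IndexError(f"index {i} is out of range")


offsets = [0, 1000, 2000, 3000, 4000]  # 4 chunks of 1000 rows each
assert find_chunk(offsets, 2500) == 2
```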
## Checklist:
- [x] implement interpolation search
- [x] use `datasets.table.Table` in `Dataset` objects
- [x] update current tests
- [x] add tests for interpolation search
- [x] comments and docstring
- [x] add the benchmark to the CI
Fix #1803. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2122/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"merged_at": "2021-04-06T14:33:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2122"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2939/comments | https://api.github.com/repos/huggingface/datasets/issues/2939/events | https://github.com/huggingface/datasets/pull/2939 | 999,639,630 | PR_kwDODunzps4r58Gu | 2,939 | MENYO-20k repo has moved, updating URL | [] | closed | false | null | 0 | 2021-09-17T19:01:54Z | 2021-09-21T15:31:37Z | 2021-09-21T15:31:36Z | null | Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match.
https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2939/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2939",
"merged_at": "2021-09-21T15:31:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2939"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/344/comments | https://api.github.com/repos/huggingface/datasets/issues/344/events | https://github.com/huggingface/datasets/pull/344 | 651,495,246 | MDExOlB1bGxSZXF1ZXN0NDQ0NzQwMTIw | 344 | Search qa | [] | closed | false | null | 1 | 2020-07-06T12:23:16Z | 2020-07-16T08:58:16Z | 2020-07-16T08:58:16Z | null | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config name:
- raw_jeopardy: raw data
- train_test_val: the split version (see the usage sketch below)
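A minimal usage sketch with the two configs (config names are from this PR; everything else is assumed):
```python
from datasets import load_dataset

raw = load_dataset("search_qa", "raw_jeopardy")
splits = load_dataset("search_qa", "train_test_val")
```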
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/344/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/344",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/344"
} | true | [
"Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5075/comments | https://api.github.com/repos/huggingface/datasets/issues/5075/events | https://github.com/huggingface/datasets/issues/5075 | 1,397,865,501 | I_kwDODunzps5TUbwd | 5,075 | Throw EnvironmentError when token is not present | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] | closed | false | null | 1 | 2022-10-05T14:14:18Z | 2022-10-07T14:33:28Z | 2022-10-07T14:33:28Z | null | Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5075/timeline | null | completed | null | null | false | [
"@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks."
] |
https://api.github.com/repos/huggingface/datasets/issues/6076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6076/comments | https://api.github.com/repos/huggingface/datasets/issues/6076/events | https://github.com/huggingface/datasets/pull/6076 | 1,822,345,597 | PR_kwDODunzps5WcGVR | 6,076 | No gzip encoding from github | [] | closed | false | null | 3 | 2023-07-26T12:46:07Z | 2023-07-27T16:15:11Z | 2023-07-27T16:14:40Z | null | Don't accept gzip encoding from github, otherwise some files are not streamable + seekable.
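Roughly, the idea is to ask GitHub for the raw bytes instead of a gzip-compressed response, so that the content length is meaningful and range requests / seeking keep working. A hedged sketch with plain `requests` (not the actual `datasets` patch):
```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/main/README.md"

# "identity" asks the server not to compress the payload on the fly.
resp = requests.get(url, headers={"Accept-Encoding": "identity"}, stream=True)
print(resp.headers.get("Content-Encoding"), resp.headers.get("Content-Length"))
```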
fix https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84
and making sure https://github.com/huggingface/datasets/issues/2918 works as well | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6076/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6076",
"merged_at": "2023-07-27T16:14:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6076"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008191 / 0.011353 (-0.003162) | 0.004669 / 0.011008 (-0.006339) | 0.101315 / 0.038508 (0.062807) | 0.090235 / 0.023109 (0.067126) | 0.381265 / 0.275898 (0.105367) | 0.418266 / 0.323480 (0.094786) | 0.006292 / 0.007986 (-0.001693) | 0.003979 / 0.004328 (-0.000349) | 0.075946 / 0.004250 (0.071696) | 0.070678 / 0.037052 (0.033625) | 0.378006 / 0.258489 (0.119517) | 0.425825 / 0.293841 (0.131984) | 0.036325 / 0.128546 (-0.092221) | 0.009814 / 0.075646 (-0.065832) | 0.345687 / 0.419271 (-0.073584) | 0.063846 / 0.043533 (0.020313) | 0.386003 / 0.255139 (0.130864) | 0.400875 / 0.283200 (0.117675) | 0.027806 / 0.141683 (-0.113877) | 1.814810 / 1.452155 (0.362655) | 1.879897 / 1.492716 (0.387180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218684 / 0.018006 (0.200677) | 0.501715 / 0.000490 (0.501225) | 0.004808 / 0.000200 (0.004608) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035494 / 0.037411 (-0.001917) | 0.100949 / 0.014526 (0.086423) | 0.114639 / 0.176557 (-0.061917) | 0.188908 / 0.737135 (-0.548227) | 0.115794 / 0.296338 (-0.180545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462537 / 0.215209 (0.247328) | 4.612469 / 2.077655 (2.534814) | 2.298065 / 1.504120 (0.793945) | 2.088738 / 1.541195 (0.547543) | 2.188072 / 1.468490 
(0.719582) | 0.565412 / 4.584777 (-4.019364) | 4.180394 / 3.745712 (0.434681) | 3.848696 / 5.269862 (-1.421165) | 2.391381 / 4.565676 (-2.174296) | 0.067647 / 0.424275 (-0.356628) | 0.008847 / 0.007607 (0.001240) | 0.553288 / 0.226044 (0.327243) | 5.517962 / 2.268929 (3.249033) | 2.866622 / 55.444624 (-52.578002) | 2.439025 / 6.876477 (-4.437452) | 2.740156 / 2.142072 (0.598084) | 0.694796 / 4.805227 (-4.110431) | 0.159022 / 6.500664 (-6.341642) | 0.074471 / 0.075469 (-0.000998) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.534979 / 1.841788 (-0.306808) | 23.297273 / 8.074308 (15.222965) | 16.859178 / 10.191392 (6.667786) | 0.207594 / 0.680424 (-0.472830) | 0.021990 / 0.534201 (-0.512211) | 0.472059 / 0.579283 (-0.107224) | 0.497632 / 0.434364 (0.063268) | 0.565672 / 0.540337 (0.025335) | 0.772485 / 1.386936 (-0.614451) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007777 / 0.011353 (-0.003576) | 0.004679 / 0.011008 (-0.006329) | 0.077317 / 0.038508 (0.038809) | 0.087433 / 0.023109 (0.064324) | 0.437389 / 0.275898 (0.161491) | 0.479562 / 0.323480 (0.156082) | 0.006137 / 0.007986 (-0.001849) | 0.003938 / 0.004328 (-0.000390) | 0.074769 / 0.004250 (0.070518) | 0.066605 / 0.037052 (0.029553) | 0.454865 / 0.258489 (0.196376) | 0.485103 / 0.293841 (0.191262) | 0.036540 / 0.128546 (-0.092006) | 0.009983 / 0.075646 (-0.065664) | 0.083566 / 0.419271 (-0.335706) | 0.059527 / 0.043533 (0.015994) | 0.449154 / 0.255139 (0.194015) | 0.462542 / 0.283200 (0.179342) | 0.027581 / 0.141683 (-0.114102) | 1.776720 / 1.452155 (0.324565) | 1.847920 / 1.492716 (0.355204) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246792 / 0.018006 (0.228786) | 0.494513 / 0.000490 (0.494024) | 0.004376 / 0.000200 (0.004176) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037837 / 0.037411 (0.000426) | 0.112752 / 0.014526 (0.098226) | 0.121742 / 0.176557 (-0.054815) | 0.189365 / 0.737135 (-0.547770) | 0.124366 / 0.296338 (-0.171973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492890 / 0.215209 (0.277681) | 4.920270 / 2.077655 (2.842615) | 2.565350 / 1.504120 (1.061230) | 2.378679 / 1.541195 (0.837484) | 2.483794 / 1.468490 (1.015304) | 0.579623 / 4.584777 (-4.005154) | 4.195924 / 3.745712 (0.450212) | 3.903382 / 5.269862 (-1.366479) | 2.466884 / 4.565676 (-2.098793) | 0.064145 / 0.424275 (-0.360130) | 0.008695 / 0.007607 (0.001088) | 0.579300 / 0.226044 (0.353256) | 5.809064 / 2.268929 (3.540136) | 3.145393 / 55.444624 (-52.299232) | 2.832760 / 6.876477 (-4.043717) | 3.020460 / 2.142072 (0.878388) | 0.700235 / 4.805227 (-4.104992) | 0.161262 / 6.500664 (-6.339402) | 0.076484 / 0.075469 (0.001015) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.606504 / 1.841788 (-0.235284) | 23.747863 / 8.074308 (15.673555) | 17.281712 / 10.191392 (7.090320) | 0.203874 / 0.680424 (-0.476550) | 0.021839 / 0.534201 (-0.512362) | 0.472365 / 0.579283 (-0.106918) | 0.475150 / 0.434364 (0.040786) | 0.571713 / 0.540337 (0.031376) | 0.759210 / 1.386936 (-0.627726) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3a7fc003b1d181d8e8ece24d5ebd442ec5d6519 \"CML watermark\")\n",
"> Some questions: won't this have an impact on downloading time, once we do not longer compress the payload? What is the advantage of this approach over the one with block_size: 0?\r\n\r\nSurely, but this prevents random access which is needed at multiple places in the code (eg to check the compression type).\r\nGithub isn't a good place for big files anyway so we should be fine"
] |
https://api.github.com/repos/huggingface/datasets/issues/740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/740/comments | https://api.github.com/repos/huggingface/datasets/issues/740/events | https://github.com/huggingface/datasets/pull/740 | 723,047,958 | MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0 | 740 | Fix TREC urls | [] | closed | false | null | 0 | 2020-10-16T09:11:28Z | 2020-10-19T08:54:37Z | 2020-10-19T08:54:36Z | null | The old TREC urls are now redirections.
I updated the urls to the new ones, since we don't support redirections for downloads.
Fix #737 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/740/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/740.diff",
"html_url": "https://github.com/huggingface/datasets/pull/740",
"merged_at": "2020-10-19T08:54:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/740.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/740"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3071/comments | https://api.github.com/repos/huggingface/datasets/issues/3071/events | https://github.com/huggingface/datasets/issues/3071 | 1,024,893,493 | I_kwDODunzps49FqI1 | 3,071 | Custom plain text dataset, plain json dataset and plain csv dataset are remove from datasets template folder | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2021-10-13T07:32:10Z | 2021-10-13T08:27:04Z | 2021-10-13T08:27:03Z | null | ## Adding a Dataset
- **Name:** text, json, csv
- **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and the only dataset loading template I could find that handles this case is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3071/timeline | null | completed | null | null | false | [
"Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/5158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5158/comments | https://api.github.com/repos/huggingface/datasets/issues/5158/events | https://github.com/huggingface/datasets/issues/5158 | 1,422,059,287 | I_kwDODunzps5UwucX | 5,158 | Fix language and license tag names in all Hub datasets | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 6 | 2022-10-25T08:19:29Z | 2022-10-25T11:27:26Z | 2022-10-25T10:42:19Z | null | While working on this:
- #5137
we realized there are still many datasets with deprecated "languages" and "licenses" tag names (instead of "language" and "license").
This is a blocking issue: no subsequent PR can be opened to modify their metadata, because a ValueError will be thrown.
We should fix the "language" and "license" tag names in all Hub datasets.
TODO:
- [x] Fix language and license tag names in 402 Hub datasets
CC: @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5158/timeline | null | completed | null | null | false | [
"There are currently 402 datasets with deprecated \"languages\" or \"licenses\".",
"hey @albertvillanova ,i would love to work on this issue if you like.",
"Hi @ayushthe1, thanks for your offer.\r\n\r\nBut as you can see, I self-assigned this issue.\r\n\r\nI have already fixed 200 out of the 402 datasets. My script is still running and fixing the rest.\r\n\r\nFor example: https://huggingface.co/datasets/fhamborg/news_sentiment_newsmtsc/discussions/2/files",
"Thanks for your time. Will try next time. 😇",
"@ayushthe1 feel free to take one of the non-assigned open issues: https://github.com/huggingface/datasets/issues",
"This is done."
] |
https://api.github.com/repos/huggingface/datasets/issues/2175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2175/comments | https://api.github.com/repos/huggingface/datasets/issues/2175/events | https://github.com/huggingface/datasets/issues/2175 | 851,836,096 | MDU6SXNzdWU4NTE4MzYwOTY= | 2,175 | dataset.search_batch() function outputs all -1 indices sometime. | [] | closed | false | null | 6 | 2021-04-06T21:50:49Z | 2021-04-16T12:21:16Z | 2021-04-16T12:21:15Z | null | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker.
![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png)
Here, my retrieval batch size is 2 and n_docs is 5. I can work around this at the np.stack call, but I want to ask why we get an output index of -1. Do you have any idea :) ?
Is this a problem with the index, where Faiss can't find any similar vector?
Is there documentation on the output index being -1?
@lhoestq
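For reference, a rough sketch of such a workaround (hypothetical names: `dataset` has a Faiss index called "embeddings" and `question_embeddings` is a 2-D array of query vectors), simply dropping the -1 ids before fetching documents:
```python
# Sketch only: ignore the -1 ids Faiss returns when a query has fewer than
# k valid neighbours, then fetch only the remaining documents.
scores, ids = dataset.search_batch("embeddings", question_embeddings, k=5)
valid_ids = [[int(i) for i in row if i != -1] for row in ids]
retrieved_docs = [dataset[row] for row in valid_ids]
```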
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2175/timeline | null | completed | null | null | false | [
"Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.",
"@lhoestq @patrickvonplaten \r\n\r\nI also found another short bug in the retrieval part. Especially, when retrieving documents. If Faiss returns the -1 as the index, the retriever will always use the last element in the dataset.\r\n\r\nplease check [def get_doc_dicts function](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L222)\r\n\r\n\r\nDoes the use of the HNSW guarantee to retrieve valid indexes always? \r\n\r\n",
"Hi !\r\nNo it happens sometimes to return -1, especially if your dataset is small.\r\nIf your dataset is big enough it shouldn't happen in my experience.\r\n\r\nIdeally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code ",
"I also checked with some indexes it returns more -1s. Specially with IVF\nwhen nprobr is very low. It doesn't happen when using HNSW though. But at\nthe moment if it happens, dataset will always return the last element.\nMaybe we should change it to repeat the most last valid retrieved doc id.\nWhat do you think?\n\nOn Wed, Apr 7, 2021, 21:09 Quentin Lhoest ***@***.***> wrote:\n\n> Hi !\n> No it happens sometimes to return -1, especially if your dataset is small.\n> If your dataset is big enough it shouldn't happen.\n>\n> Ideally we should ignore all the -1 that are returned. It should be\n> possible to change that in RAG's code\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814746509>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTENOTLBEZTXEO2RS3THQOMPANCNFSM42PRVYDA>\n> .\n>\n",
"That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :)",
"Sure. Will push everything together with RAG end to end. :) thanks a lot.\n\nOn Wed, Apr 7, 2021, 21:16 Quentin Lhoest ***@***.***> wrote:\n\n> That would be an easy way to workaround this issue. Feel free to open a PR\n> on transformers and ping me ! :)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814752589>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWLROCGARKN7WOJYSTTHQPH5ANCNFSM42PRVYDA>\n> .\n>\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/835/comments | https://api.github.com/repos/huggingface/datasets/issues/835/events | https://github.com/huggingface/datasets/issues/835 | 740,102,210 | MDU6SXNzdWU3NDAxMDIyMTA= | 835 | Wikipedia postprocessing | [] | closed | false | null | 3 | 2020-11-10T17:26:38Z | 2020-11-10T18:23:20Z | 2020-11-10T17:49:21Z | null | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930.
Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World.
Politische Biografie
Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde.
mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917
[...]
```
so some markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model, or is this a known imperfection of parsing Wiki markup?
Apologies if this has been asked before. | {
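If this is indeed a known imperfection, one option on my side would be a small post-processing step; a rough sketch (hypothetical helper, not part of the library) that drops leftover thumbnail lines such as `mini|…`:
```python
import re

# Hypothetical helper: drop leftover "xxx|caption" thumbnail lines that the
# WikiMedia parser sometimes leaves behind (e.g. "mini|", "thumb|", "miniatur|").
def strip_residual_markup(text: str) -> str:
    kept = [
        line for line in text.splitlines()
        if not re.match(r"\s*(mini|thumb|miniatur)\|", line)
    ]
    return "\n".join(kept)

wikipedia = wikipedia.map(lambda x: {"text": strip_residual_markup(x["text"])})
```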
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/835/timeline | null | completed | null | null | false | [
"Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool",
"Ok, thanks! I'll try the Wiki40b dataset.",
"If anyone else is concerned about this, `wiki40b` does indeed seem very well cleaned."
] |
https://api.github.com/repos/huggingface/datasets/issues/3545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3545/comments | https://api.github.com/repos/huggingface/datasets/issues/3545/events | https://github.com/huggingface/datasets/pull/3545 | 1,096,189,889 | PR_kwDODunzps4wpziv | 3,545 | fix: 🐛 pass token when retrieving the split names | [] | closed | false | null | 3 | 2022-01-07T10:29:22Z | 2022-01-10T10:51:47Z | 2022-01-10T10:51:46Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3545/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3545",
"merged_at": "2022-01-10T10:51:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3545"
} | true | [
"Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and the authentication is required to be use `huggingface-cli login`.\r\nIn my use case (dataset viewer), I'd prefer to use a specific \"User Token Access\", with only the \"read\" role (https://huggingface.co/settings/token).\r\n\r\nSee https://github.com/huggingface/datasets-preview-backend/issues/74#issuecomment-1007316853 for the context",
"> Simply passing download_config is ok :)\r\n\r\nhmm, I prefer only passing use_auth_token. But the question is more: is it correct, in the (convoluted) case if `download_config.use_auth_token` exists and is different from `use_auth_token`? Which one should be used?",
"If both are passed, `use_auth_token` should have the priority (more specific parameters have the higher priority)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3459/comments | https://api.github.com/repos/huggingface/datasets/issues/3459/events | https://github.com/huggingface/datasets/issues/3459 | 1,084,969,672 | I_kwDODunzps5Aq1LI | 3,459 | dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-12-20T16:16:49Z | 2021-12-20T16:34:57Z | 2021-12-20T16:34:57Z | null | ## Describe the bug
When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset.
The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is.
However, if you then use a dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner.
https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter
Effectively, it looks like the original set of _indices was discarded and overwritten by the set created during the filter operation.
I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices that reflects the map transformation applied to the starting _indices.
## Steps to reproduce the bug
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print("initial 10 elements")
print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
print("filtered 10 elements looking for label 0")
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1]
```
## Actual results
```
$ python indices_bug.py
initial 10 elements
[1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
filtered 10 elements looking for label 0
[1, 1, 1, 1, 1, 1]
```
This code block first shuffles the dataset (to get a mix of label 0 and label 1).
Then it selects just the first 10 elements (the number of elements does not matter, 10 is just easy to visualize). The important part is that you select some subset of the dataset.
Finally, a filter is applied to pull out just the elements with `label == 0`.
The bug is that you cannot combine any dataset operation which sets the dataset._indices with filter.
In this case I have 2, shuffle and subset.
If you just use a single dataset._indices operation (in this case shuffle) the bug still shows up.
The shuffle sets the dataset._indices and then filter uses those indices in the map, then overwrites dataset._indices with the filter results.
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
## Expected results
In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set.
If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected.
## Environment info
Here are the commands required to rebuild the conda environment from scratch.
```
# create a virtual environment
conda create -n dataset_indices python=3.8 -y
# activate the virtual environment
conda activate dataset_indices
# install huggingface datasets
conda install datasets
```
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 3.0.0
### Full Conda Environment
```
$ conda env export
name: dasaset_indices
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- abseil-cpp=20210324.2=h2531618_0
- aiohttp=3.8.1=py38h7f8727e_0
- aiosignal=1.2.0=pyhd3eb1b0_0
- arrow-cpp=3.0.0=py38h6b21186_4
- attrs=21.2.0=pyhd3eb1b0_0
- aws-c-common=0.4.57=he6710b0_1
- aws-c-event-stream=0.1.6=h2531618_5
- aws-checksums=0.1.9=he6710b0_0
- aws-sdk-cpp=1.8.185=hce553d0_0
- bcj-cffi=0.5.1=py38h295c915_0
- blas=1.0=mkl
- boost-cpp=1.73.0=h27cfd23_11
- bottleneck=1.3.2=py38heb32a55_1
- brotli=1.0.9=he6710b0_2
- brotli-python=1.0.9=py38heb0550a_2
- brotlicffi=1.0.9.2=py38h295c915_0
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2021.10.26=h06a4308_2
- certifi=2021.10.8=py38h06a4308_0
- cffi=1.14.6=py38h400218f_0
- conllu=4.4.1=pyhd3eb1b0_0
- cryptography=36.0.0=py38h9ce1e76_0
- dataclasses=0.8=pyh6d0b6a4_7
- dill=0.3.4=pyhd3eb1b0_0
- double-conversion=3.1.5=he6710b0_1
- et_xmlfile=1.1.0=py38h06a4308_0
- filelock=3.4.0=pyhd3eb1b0_0
- frozenlist=1.2.0=py38h7f8727e_0
- gflags=2.2.2=he6710b0_0
- glog=0.5.0=h2531618_0
- gmp=6.2.1=h2531618_2
- grpc-cpp=1.39.0=hae934f6_5
- huggingface_hub=0.0.17=pyhd3eb1b0_0
- icu=58.2=he6710b0_3
- idna=3.3=pyhd3eb1b0_0
- importlib-metadata=4.8.2=py38h06a4308_0
- importlib_metadata=4.8.2=hd3eb1b0_0
- intel-openmp=2021.4.0=h06a4308_3561
- krb5=1.19.2=hac12032_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libboost=1.73.0=h3ff78a5_11
- libcurl=7.80.0=h0b77cf5_0
- libedit=3.1.20210910=h7f8727e_0
- libev=4.33=h7f8727e_1
- libevent=2.1.8=h1ba5d50_1
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libnghttp2=1.46.0=hce63b2e_0
- libprotobuf=3.17.2=h4ff587b_1
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libthrift=0.14.2=hcc01f38_0
- libxml2=2.9.12=h03d6c58_0
- libxslt=1.1.34=hc22bd24_0
- lxml=4.6.3=py38h9120a33_0
- lz4-c=1.9.3=h295c915_1
- mkl=2021.4.0=h06a4308_640
- mkl-service=2.4.0=py38h7f8727e_0
- mkl_fft=1.3.1=py38hd3c417c_0
- mkl_random=1.2.2=py38h51133e4_0
- multiprocess=0.70.12.2=py38h7f8727e_0
- multivolumefile=0.2.3=pyhd3eb1b0_0
- ncurses=6.3=h7f8727e_2
- numexpr=2.7.3=py38h22e1b3c_1
- numpy=1.21.2=py38h20f2e39_0
- numpy-base=1.21.2=py38h79a1101_0
- openpyxl=3.0.9=pyhd3eb1b0_0
- openssl=1.1.1l=h7f8727e_0
- orc=1.6.9=ha97a36c_3
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.4=py38h06a4308_0
- py7zr=0.16.1=pyhd3eb1b0_1
- pycparser=2.21=pyhd3eb1b0_0
- pycryptodomex=3.10.1=py38h27cfd23_1
- pyopenssl=21.0.0=pyhd3eb1b0_1
- pyparsing=3.0.4=pyhd3eb1b0_0
- pyppmd=0.16.1=py38h295c915_0
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.12=h12debd9_0
- python-dateutil=2.8.2=pyhd3eb1b0_0
- python-xxhash=2.0.2=py38h7f8727e_0
- pyzstd=0.14.4=py38h7f8727e_3
- re2=2020.11.01=h2531618_1
- readline=8.1=h27cfd23_0
- requests=2.26.0=pyhd3eb1b0_0
- setuptools=58.0.4=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- snappy=1.1.8=he6710b0_0
- sqlite=3.36.0=hc218d9a_0
- texttable=1.6.4=pyhd3eb1b0_0
- tk=8.6.11=h1ccaba5_0
- typing_extensions=3.10.0.2=pyh06a4308_0
- uriparser=0.9.3=he6710b0_1
- utf8proc=2.6.1=h27cfd23_0
- wheel=0.37.0=pyhd3eb1b0_1
- xxhash=0.8.0=h7f8727e_3
- xz=5.2.5=h7b6447c_0
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.11=h7f8727e_4
- zstd=1.4.9=haebb681_0
- pip:
- async-timeout==4.0.2
- charset-normalizer==2.0.9
- datasets==1.16.1
- fsspec==2021.11.1
- huggingface-hub==0.2.1
- multidict==5.2.0
- pandas==1.3.5
- pyarrow==6.0.1
- pytz==2021.3
- pyyaml==6.0
- tqdm==4.62.3
- typing-extensions==4.0.1
- urllib3==1.26.7
- yarl==1.7.2
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3459/timeline | null | completed | null | null | false | [
"I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?",
"Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed."
] |
https://api.github.com/repos/huggingface/datasets/issues/1752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1752/comments | https://api.github.com/repos/huggingface/datasets/issues/1752/events | https://github.com/huggingface/datasets/pull/1752 | 789,822,459 | MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5 | 1,752 | COMET metric citation | [] | closed | false | null | 1 | 2021-01-20T09:54:43Z | 2021-01-20T10:27:07Z | 2021-01-20T10:25:02Z | null | In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that, they were not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c85fdac2938.png">
This pull request is only intended to fix that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1752/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1752",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1752"
} | true | [
"I think its better to create a new branch with this fix. I forgot I was still using the old branch."
] |