| column | type | range / classes |
|---|---|---|
| url | stringlengths | 61 - 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75 - 75 |
| comments_url | stringlengths | 70 - 70 |
| events_url | stringlengths | 68 - 68 |
| html_url | stringlengths | 49 - 51 |
| id | int64 | 1.42B - 1.84B |
| node_id | stringlengths | 18 - 19 |
| number | int64 | 5.16k - 6.14k |
| title | stringlengths | 1 - 290 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 3 - 33.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70 - 70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6138/comments
https://api.github.com/repos/huggingface/datasets/issues/6138/events
https://github.com/huggingface/datasets/pull/6138
1,844,952,496
PR_kwDODunzps5XoH2V
6,138
Ignore CI lint rule violation in Pickler.memoize
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.003890 / 0.011008 (-0.007118) | 0.084044 / 0.038508 (0.045536) | 0.071893 / 0.023109 (0.048784) | 0.346926 / 0.275898 (0.071028) | 0.397487 / 0.323480 (0.074007) | 0.004065 / 0.007986 (-0.003921) | 0.003218 / 0.004328 (-0.001111) | 0.064670 / 0.004250 (0.060420) | 0.052414 / 0.037052 (0.015362) | 0.355413 / 0.258489 (0.096924) | 0.398894 / 0.293841 (0.105053) | 0.030763 / 0.128546 (-0.097783) | 0.008590 / 0.075646 (-0.067056) | 0.286857 / 0.419271 (-0.132415) | 0.051126 / 0.043533 (0.007593) | 0.346125 / 0.255139 (0.090986) | 0.395673 / 0.283200 (0.112474) | 0.025766 / 0.141683 (-0.115917) | 1.466238 / 1.452155 (0.014084) | 1.543117 / 1.492716 (0.050400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213210 / 0.018006 (0.195204) | 0.451981 / 0.000490 (0.451491) | 0.003784 / 0.000200 (0.003585) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027756 / 0.037411 (-0.009655) | 0.082446 / 0.014526 (0.067920) | 0.095414 / 0.176557 (-0.081142) | 0.151812 / 0.737135 (-0.585323) | 0.096296 / 0.296338 (-0.200042) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383729 / 0.215209 (0.168520) | 3.835126 / 2.077655 (1.757471) | 1.891972 / 1.504120 (0.387852) | 1.719934 / 1.541195 (0.178739) | 1.899980 / 1.468490 
(0.431490) | 0.488741 / 4.584777 (-4.096036) | 3.634120 / 3.745712 (-0.111592) | 3.243314 / 5.269862 (-2.026547) | 2.028382 / 4.565676 (-2.537294) | 0.057355 / 0.424275 (-0.366920) | 0.007717 / 0.007607 (0.000110) | 0.459835 / 0.226044 (0.233790) | 4.591793 / 2.268929 (2.322864) | 2.346861 / 55.444624 (-53.097764) | 2.067357 / 6.876477 (-4.809120) | 2.254954 / 2.142072 (0.112882) | 0.587016 / 4.805227 (-4.218211) | 0.133918 / 6.500664 (-6.366746) | 0.060311 / 0.075469 (-0.015158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250016 / 1.841788 (-0.591772) | 19.674333 / 8.074308 (11.600025) | 14.522764 / 10.191392 (4.331372) | 0.145741 / 0.680424 (-0.534683) | 0.018593 / 0.534201 (-0.515608) | 0.392833 / 0.579283 (-0.186450) | 0.408194 / 0.434364 (-0.026170) | 0.455164 / 0.540337 (-0.085174) | 0.622722 / 1.386936 (-0.764214) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006583 / 0.011353 (-0.004770) | 0.004008 / 0.011008 (-0.007000) | 0.064688 / 0.038508 (0.026180) | 0.074969 / 0.023109 (0.051860) | 0.360504 / 0.275898 (0.084606) | 0.396926 / 0.323480 (0.073446) | 0.005190 / 0.007986 (-0.002796) | 0.003363 / 0.004328 (-0.000966) | 0.064372 / 0.004250 (0.060122) | 0.054428 / 0.037052 (0.017376) | 0.361204 / 0.258489 (0.102715) | 0.400917 / 0.293841 (0.107077) | 0.031117 / 0.128546 (-0.097429) | 0.008406 / 0.075646 (-0.067241) | 0.069655 / 0.419271 (-0.349617) | 0.048582 / 0.043533 (0.005049) | 0.365396 / 0.255139 (0.110257) | 0.381344 / 0.283200 (0.098145) | 0.023809 / 0.141683 (-0.117874) | 1.472926 / 1.452155 (0.020772) | 1.547298 / 1.492716 (0.054582) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276912 / 0.018006 (0.258906) | 0.449096 / 0.000490 (0.448607) | 0.018921 / 0.000200 (0.018721) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030237 / 0.037411 (-0.007174) | 0.088610 / 0.014526 (0.074084) | 0.101529 / 0.176557 (-0.075027) | 0.154070 / 0.737135 (-0.583065) | 0.103471 / 0.296338 (-0.192867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416047 / 0.215209 (0.200838) | 4.152374 / 2.077655 (2.074719) | 2.111181 / 1.504120 (0.607061) | 1.943582 / 1.541195 (0.402387) | 2.031729 / 1.468490 (0.563239) | 0.486740 / 4.584777 (-4.098037) | 3.631547 / 3.745712 (-0.114165) | 3.251202 / 5.269862 (-2.018660) | 2.041272 / 4.565676 (-2.524405) | 0.057287 / 0.424275 (-0.366988) | 0.007303 / 0.007607 (-0.000304) | 0.491027 / 0.226044 (0.264982) | 4.906757 / 2.268929 (2.637829) | 2.581694 / 55.444624 (-52.862931) | 2.250996 / 6.876477 (-4.625481) | 2.441771 / 2.142072 (0.299698) | 0.600714 / 4.805227 (-4.204514) | 0.133233 / 6.500664 (-6.367431) | 0.060856 / 0.075469 (-0.014613) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340062 / 1.841788 (-0.501725) | 19.973899 / 8.074308 (11.899591) | 14.347381 / 10.191392 (4.155989) | 0.166651 / 0.680424 (-0.513773) | 0.018691 / 0.534201 (-0.515510) | 0.393580 / 0.579283 (-0.185703) | 0.409425 / 0.434364 (-0.024939) | 0.474409 / 0.540337 (-0.065929) | 0.649423 / 1.386936 (-0.737514) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5da68102297c3639207a7901952d2765a4cdb8b \"CML watermark\")\n", "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6138). All of your documentation changes will be reflected on that endpoint." ]
2023-08-10T11:03:15
2023-08-10T11:10:42
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6138", "html_url": "https://github.com/huggingface/datasets/pull/6138", "diff_url": "https://github.com/huggingface/datasets/pull/6138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6138.patch", "merged_at": null }
This PR ignores the violation of the lint rule E721 in `Pickler.memoize`. The lint rule violation was introduced in this PR: - #3182 @lhoestq is there a reason you did not use `isinstance` instead? As a hotfix, we just ignore the violation of the lint rule. Fix #6136.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6138/timeline
null
null
true
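The record above describes ignoring ruff's E721 rule in `Pickler.memoize`. The snippet below is an illustrative sketch only, not the actual `datasets` source: `memoize_key` is a hypothetical stand-in that shows the kind of exact-type comparison E721 flags and the inline `# noqa: E721` suppression used as a hotfix.

```python
# Illustrative only; `memoize_key` is a hypothetical stand-in, not datasets code.
def memoize_key(obj):
    # Exact type comparison: intentional when subclasses must NOT match,
    # so the E721 violation is silenced inline instead of being rewritten.
    if type(obj) == dict:  # noqa: E721
        return id(obj)
    # isinstance() is what E721 recommends, but it also matches subclasses
    # such as collections.OrderedDict, which may change behavior.
    if isinstance(obj, dict):
        return id(obj)
    return None


print(memoize_key({"a": 1}))
```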
https://api.github.com/repos/huggingface/datasets/issues/6137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6137/comments
https://api.github.com/repos/huggingface/datasets/issues/6137/events
https://github.com/huggingface/datasets/issues/6137
1,844,952,312
I_kwDODunzps5t97z4
6,137
(`from_spark()`) Unable to connect HDFS in pyspark YARN setting
{ "login": "kyoungrok0517", "id": 1051900, "node_id": "MDQ6VXNlcjEwNTE5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoungrok0517", "html_url": "https://github.com/kyoungrok0517", "followers_url": "https://api.github.com/users/kyoungrok0517/followers", "following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}", "gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions", "organizations_url": "https://api.github.com/users/kyoungrok0517/orgs", "repos_url": "https://api.github.com/users/kyoungrok0517/repos", "events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoungrok0517/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T11:03:08
2023-08-10T11:03:08
null
NONE
null
null
null
### Describe the bug related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613 --- Hello. I'm trying to interact with HDFS storage from a driver and workers of pyspark YARN cluster. Precisely I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library that relies on pyarrow to communicate with HDFS. The `from_spark()` ([link](https://huggingface.co/docs/datasets/use_with_spark#load-from-spark)) is what I'm invoking in my script. Below is the error I'm encountering. Note that I've masked sensitive paths. My code is sent to worker containers (docker) from driver container then executed. I confirmed that in both driver and worker images I can connect to HDFS using pyarrow since the envs and required jars are properly set, but strangely that becomes impossible when the same image runs as remote worker process. These are some peculiarities in my environment that might caused this issue. * **Cluster requires kerberos authentication** * But I think the error message implies that's not the problem in this case * **The user that runs the worker process is different from that built the docker image** * To avoid permission-related issues I made all directories that are accessed from the script accessible to everyone * **Pyspark-part of my code has no problem interacting with HDFS.** * Even pyarrow doesn't experience problem when I run the code in interactive session of the same docker images (driver, worker) * The problem occurs only when it runs as cluster's worker runtime Hope I could get some help. Thanks. ```bash 2023-08-08 18:51:19,638 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-08-08 18:51:20,280 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 
23/08/08 18:51:22 WARN TaskSetManager: Lost task 0.0 in stage 142.0 (TID 9732) (ac3bax2062.bdp.bdata.ai executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at 
org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:24 WARN TaskSetManager: Lost task 0.1 in stage 142.0 (TID 9733) (ac3iax2079.bdp.bdata.ai executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File 
"pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:38 WARN TaskSetManager: Lost task 0.2 in stage 142.0 (TID 9734) (<MASKED> executor 4): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) ``` ### Steps to reproduce the bug Use `from_spark()` function in pyspark YARN setting. I set `cache_dir` to HDFS path. ### Expected behavior Work as described in document ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6137/timeline
null
null
false
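For context on the record above, a minimal usage sketch of the failing path: it assumes a running Spark session and an `hdfs://` cache path (the namenode path here is hypothetical) and is not a fix, just the call that routes through fsspec and pyarrow's `HadoopFileSystem` where the reported `OSError` is raised.

```python
# Hedged sketch: assumes pyspark, datasets, and a reachable HDFS namenode.
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([{"text": "hello"}, {"text": "world"}])

# from_spark() writes probe/cache files under cache_dir on every worker;
# with an hdfs:// path this goes through fsspec -> pyarrow HadoopFileSystem,
# which is where "OSError: HDFS connection failed" is raised in the report.
ds = Dataset.from_spark(df, cache_dir="hdfs://namenode/tmp/datasets_cache")
print(ds)
```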
https://api.github.com/repos/huggingface/datasets/issues/6136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6136/comments
https://api.github.com/repos/huggingface/datasets/issues/6136/events
https://github.com/huggingface/datasets/issues/6136
1,844,887,866
I_kwDODunzps5t9sE6
6,136
CI check_code_quality error: E721 Do not compare types, use `isinstance()`
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2023-08-10T10:19:50
2023-08-10T10:19:50
null
MEMBER
null
null
null
After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error: ``` src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()` ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6136/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6135/comments
https://api.github.com/repos/huggingface/datasets/issues/6135/events
https://github.com/huggingface/datasets/pull/6135
1,844,870,943
PR_kwDODunzps5Xn2AT
6,135
Remove unused allowed_extensions param
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6135). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009055 / 0.011353 (-0.002298) | 0.008835 / 0.011008 (-0.002173) | 0.117048 / 0.038508 (0.078540) | 0.096268 / 0.023109 (0.073159) | 0.474678 / 0.275898 (0.198780) | 0.550509 / 0.323480 (0.227029) | 0.005552 / 0.007986 (-0.002434) | 0.004315 / 0.004328 (-0.000013) | 0.094336 / 0.004250 (0.090086) | 0.061945 / 0.037052 (0.024892) | 0.461422 / 0.258489 (0.202933) | 0.521271 / 0.293841 (0.227430) | 0.049116 / 0.128546 (-0.079430) | 0.015007 / 0.075646 (-0.060639) | 0.414351 / 0.419271 (-0.004920) | 0.137520 / 0.043533 (0.093987) | 0.465627 / 0.255139 (0.210488) | 0.537244 / 0.283200 (0.254044) | 0.068577 / 0.141683 (-0.073106) | 1.921373 / 1.452155 (0.469219) | 2.506653 / 1.492716 (1.013937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273970 / 0.018006 (0.255963) | 0.750295 / 0.000490 (0.749805) | 0.004241 / 0.000200 (0.004041) | 0.000128 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033793 / 0.037411 (-0.003618) | 0.105562 / 0.014526 (0.091037) | 0.131771 / 0.176557 (-0.044786) | 0.196890 / 0.737135 (-0.540245) | 0.119842 / 0.296338 (-0.176496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.634881 / 0.215209 (0.419672) | 6.069221 / 2.077655 (3.991566) | 2.678765 / 1.504120 (1.174646) | 2.460309 / 1.541195 (0.919114) | 2.517579 / 1.468490 (1.049089) | 0.869558 / 4.584777 (-3.715219) | 5.407686 / 3.745712 (1.661974) | 4.920687 / 5.269862 (-0.349175) | 3.130066 / 4.565676 (-1.435611) | 0.100337 / 0.424275 (-0.323938) | 0.009615 / 0.007607 (0.002008) | 0.745275 / 0.226044 (0.519231) | 7.577890 / 2.268929 (5.308962) | 3.607887 / 55.444624 (-51.836738) | 2.922211 / 6.876477 (-3.954266) | 3.205592 / 2.142072 (1.063519) | 1.052298 / 4.805227 (-3.752929) | 0.218798 / 6.500664 (-6.281866) | 0.082137 / 0.075469 (0.006667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696551 / 1.841788 (-0.145237) | 24.946074 / 8.074308 (16.871766) | 23.114202 / 10.191392 (12.922810) | 0.220498 / 0.680424 (-0.459925) | 0.029388 / 0.534201 (-0.504813) | 0.494721 / 0.579283 (-0.084562) | 0.603085 / 0.434364 (0.168722) | 0.573093 / 0.540337 (0.032756) | 0.784937 / 1.386936 (-0.601999) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009642 / 0.011353 (-0.001711) | 0.007551 / 0.011008 (-0.003457) | 0.085224 / 0.038508 (0.046716) | 0.099493 / 0.023109 (0.076384) | 0.503824 / 0.275898 (0.227926) | 0.546583 / 0.323480 (0.223103) | 0.006385 / 0.007986 (-0.001601) | 0.004751 / 0.004328 (0.000423) | 0.084699 / 0.004250 (0.080449) | 0.067875 / 0.037052 (0.030823) | 0.485313 / 0.258489 (0.226824) | 0.535808 / 0.293841 (0.241967) | 0.049935 / 0.128546 (-0.078611) | 0.014427 / 0.075646 (-0.061219) | 0.095531 / 0.419271 (-0.323741) | 0.068487 / 0.043533 (0.024954) | 0.502204 / 0.255139 (0.247065) | 0.514393 / 0.283200 (0.231193) | 0.037350 / 0.141683 (-0.104333) | 1.849380 / 1.452155 (0.397226) | 1.920151 / 1.492716 (0.427434) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298363 / 0.018006 (0.280357) | 0.651555 / 0.000490 (0.651065) | 0.005910 / 0.000200 
(0.005710) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039170 / 0.037411 (0.001758) | 0.106436 / 0.014526 (0.091910) | 0.129880 / 0.176557 (-0.046677) | 0.185401 / 0.737135 (-0.551734) | 0.125732 / 0.296338 (-0.170607) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643248 / 0.215209 (0.428039) | 6.374807 / 2.077655 (4.297152) | 3.057296 / 1.504120 (1.553176) | 2.779534 / 1.541195 (1.238340) | 2.790165 / 1.468490 (1.321675) | 0.841580 / 4.584777 (-3.743197) | 5.371478 / 3.745712 (1.625766) | 4.973251 / 5.269862 (-0.296610) | 3.235817 / 4.565676 (-1.329860) | 0.097276 / 0.424275 (-0.326999) | 0.008840 / 0.007607 (0.001233) | 0.728678 / 0.226044 (0.502634) | 7.526382 / 2.268929 (5.257454) | 3.792550 / 55.444624 (-51.652074) | 3.439134 / 6.876477 (-3.437342) | 3.466626 / 2.142072 (1.324553) | 1.035894 / 4.805227 (-3.769333) | 0.211670 / 6.500664 (-6.288994) | 0.087596 / 0.075469 (0.012127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.782755 / 1.841788 (-0.059033) | 25.704407 / 8.074308 (17.630099) | 23.799672 / 10.191392 (13.608280) | 0.233952 / 0.680424 (-0.446472) | 0.030810 / 0.534201 (-0.503391) | 0.505857 / 0.579283 (-0.073426) | 0.629331 / 0.434364 (0.194967) | 0.608530 / 0.540337 (0.068192) | 0.813688 / 1.386936 (-0.573248) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed4d6bb5f1331576c41b04acd9872a5349a0915c \"CML watermark\")\n" ]
2023-08-10T10:09:54
2023-08-10T10:22:54
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6135", "html_url": "https://github.com/huggingface/datasets/pull/6135", "diff_url": "https://github.com/huggingface/datasets/pull/6135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6135.patch", "merged_at": null }
This PR removes unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6135/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6134/comments
https://api.github.com/repos/huggingface/datasets/issues/6134/events
https://github.com/huggingface/datasets/issues/6134
1,844,535,142
I_kwDODunzps5t8V9m
6,134
`datasets` cannot be installed alongside `apache-beam`
{ "login": "boyleconnor", "id": 6520892, "node_id": "MDQ6VXNlcjY1MjA4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boyleconnor", "html_url": "https://github.com/boyleconnor", "followers_url": "https://api.github.com/users/boyleconnor/followers", "following_url": "https://api.github.com/users/boyleconnor/following{/other_user}", "gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}", "starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions", "organizations_url": "https://api.github.com/users/boyleconnor/orgs", "repos_url": "https://api.github.com/users/boyleconnor/repos", "events_url": "https://api.github.com/users/boyleconnor/events{/privacy}", "received_events_url": "https://api.github.com/users/boyleconnor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T06:54:32
2023-08-10T06:55:46
null
NONE
null
null
null
### Describe the bug If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully, however, actually trying to something such as importing the `load_dataset` method from `datasets` results in a crashing error. I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, but the latest version of `multiprocess` (0.70.15) (on which `datasets` depends) requires `dill>=0.3.7,`, so this is causing the dependency resolver to use an older version of `multiprocess` which leads to the `datasets` crashing since it doesn't actually appear to be compatible with older versions. ### Steps to reproduce the bug See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug. In some environments, I have been able to reproduce the bug by running the following in Bash: ```bash $ pip install datasets apache-beam ``` then the following in a Python shell: ```python from datasets import load_dataset ``` Here is my stacktrace from running on Google Colab: <details> <summary>stacktrace</summary> ``` [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.14.4" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 64 65 from . import config ---> 66 from .arrow_reader import ArrowReader 67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 68 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 28 import pyarrow.parquet as pq 29 ---> 30 from .download.download_config import DownloadConfig 31 from .naming import _split_re, filenames_for_dataset_split 32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables [/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module> 7 8 from .download_config import DownloadConfig ----> 9 from .download_manager import DownloadManager, DownloadMode 10 from .streaming_download_manager import StreamingDownloadManager [/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module> 33 from ..utils.info_utils import get_size_checksum_dict 34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm ---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str 36 from .download_config import DownloadConfig 37 [/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module> 38 import dill 39 import multiprocess ---> 40 import multiprocess.pool 41 import numpy as np 42 from packaging import version [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module> 607 # 608 --> 609 class ThreadPool(Pool): 610 611 from .dummy import Process [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool() 609 class ThreadPool(Pool): 610 --> 611 from .dummy import Process 612 613 def __init__(self, processes=None, 
initializer=None, initargs=()): [/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module> 85 # 86 ---> 87 class Condition(threading._Condition): 88 # XXX 89 if sys.version_info < (3, 0): AttributeError: module 'threading' has no attribute '_Condition' ``` </details> I've also found that attempting to install these `datasets` and `apache-beam` in certain environments (e.g. via pip inside a conda env) simply causes the installer to hang indefinitely. ### Expected behavior I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`. ### Environment info Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6134/timeline
null
null
false
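A small diagnostic sketch for the pin conflict described in the record above: it only reads installed versions with `importlib.metadata`, so the incompatible `dill` requirements (apache-beam 2.49.0 pins `dill<0.3.2`, recent `multiprocess` needs `dill>=0.3.7`) can be inspected before importing `datasets`.

```python
# Diagnostic sketch only; reports what pip actually resolved in the environment.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("datasets", "apache-beam", "multiprocess", "dill"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")
```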
https://api.github.com/repos/huggingface/datasets/issues/6133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6133/comments
https://api.github.com/repos/huggingface/datasets/issues/6133/events
https://github.com/huggingface/datasets/issues/6133
1,844,511,519
I_kwDODunzps5t8QMf
6,133
Dataset is slower after calling `to_iterable_dataset`
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T06:36:23
2023-08-10T06:36:23
null
NONE
null
null
null
### Describe the bug Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert to `IterableDataset` ### Steps to reproduce the bug Any dataset after converting to `IterableDataset` ### Expected behavior Maybe it should be faster on big dataset? I only test on small dataset ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6133/timeline
null
null
false
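To make the slowdown reported above measurable, a rough timing sketch (the toy in-memory dataset and its size are assumptions, not the reporter's setup) comparing a plain `Dataset` loop with the `IterableDataset` returned by `to_iterable_dataset()`:

```python
# Rough benchmark sketch; absolute numbers depend heavily on the machine.
import time

from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})

t0 = time.perf_counter()
for _ in ds:
    pass
print(f"Dataset loop:         {time.perf_counter() - t0:.2f}s")

iterable_ds = ds.to_iterable_dataset()
t0 = time.perf_counter()
for _ in iterable_ds:
    pass
print(f"IterableDataset loop: {time.perf_counter() - t0:.2f}s")
```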
https://api.github.com/repos/huggingface/datasets/issues/6132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6132/comments
https://api.github.com/repos/huggingface/datasets/issues/6132/events
https://github.com/huggingface/datasets/issues/6132
1,843,491,020
I_kwDODunzps5t4XDM
6,132
to_iterable_dataset is missing in document
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-09T15:15:03
2023-08-09T15:15:03
null
NONE
null
null
null
### Describe the bug `to_iterable_dataset` is missing from the documentation. ### Steps to reproduce the bug `to_iterable_dataset` is missing from the documentation. ### Expected behavior Documentation enhancement. ### Environment info Unrelated.
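For context, a hedged usage sketch of the method the issue says is undocumented; the `num_shards` argument and the dataset name are assumptions for the example, so check the signature in your installed version:

```python
# Sketch: turn a map-style Dataset into an IterableDataset for streaming-style iteration.
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
iterable_ds = ds.to_iterable_dataset(num_shards=4)  # num_shards assumed; helps sharded/parallel loading

for example in iterable_ds.take(3):  # take() yields only the first few examples
    print(example)
```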
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6132/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6131/comments
https://api.github.com/repos/huggingface/datasets/issues/6131/events
https://github.com/huggingface/datasets/issues/6131
1,843,448,643
I_kwDODunzps5t4MtD
6,131
AttributeError: type object 'tqdm' has no attribute '_lock'
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-09T14:53:31
2023-08-09T14:54:36
null
CONTRIBUTOR
null
null
null
### Describe the bug Getting a tqdm issue when writing a Dask dataframe to the hub. Similar to #6066. Using the latest Datasets version doesn't seem to resolve it. ### Steps to reproduce the bug This is a minimal reproducer: ``` import dask.dataframe as dd import pandas as pd import random import huggingface_hub data = {"number": [random.randint(0,10) for _ in range(1000)]} df = pd.DataFrame.from_dict(data) dataframe = dd.from_pandas(df, npartitions=1) dataframe = dataframe.repartition(npartitions=2) repo_id = "nielsr/test-dask" repo_path = f"hf://datasets/{repo_id}" huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True) dd.to_parquet(dataframe, path=f"{repo_path}/data") ``` Note: I'm intentionally repartitioning the Dask dataframe to 2 partitions, as it does work with only one partition. ### Expected behavior I would expect to write to the hub without any problem. ### Environment info Datasets version 2.14.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6131/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6130/comments
https://api.github.com/repos/huggingface/datasets/issues/6130/events
https://github.com/huggingface/datasets/issues/6130
1,843,158,846
I_kwDODunzps5t3F8-
6,130
default config name doesn't work when config kwargs are specified.
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-09T12:43:15
2023-08-09T12:43:15
null
NONE
null
null
null
### Describe the bug https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522 If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` for their customized `BuilderConfig`, the logic is ignored, and the dataset cannot select the default config from multiple configs. ### Steps to reproduce the bug ```python import datasets datasets.load_dataset('/dataset/with/multiple/config') # Ok datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err ``` ### Expected behavior Default config behavior should be consistent. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
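To make the expected behavior concrete, a sketch of a builder that declares a default config; the class and field names are hypothetical, invented for illustration, and the skeleton omits `_info`/`_split_generators`/`_generate_examples`:

```python
# Hypothetical builder with two configs and a declared default.
# The issue reports that DEFAULT_CONFIG_NAME is ignored as soon as a config kwarg
# (e.g. some_field_in_config=...) is passed to load_dataset.
import datasets


class MyConfig(datasets.BuilderConfig):
    def __init__(self, some_field_in_config="default", **kwargs):
        super().__init__(**kwargs)
        self.some_field_in_config = some_field_in_config


class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = MyConfig
    BUILDER_CONFIGS = [
        MyConfig(name="first"),
        MyConfig(name="second"),
    ]
    DEFAULT_CONFIG_NAME = "first"  # expected to be selected when no config name is given
```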
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6130/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6129/comments
https://api.github.com/repos/huggingface/datasets/issues/6129/events
https://github.com/huggingface/datasets/pull/6129
1,841,563,517
PR_kwDODunzps5Xcmuw
6,129
Release 2.14.4
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006053 / 0.011353 (-0.005299) | 0.003532 / 0.011008 (-0.007476) | 0.081930 / 0.038508 (0.043422) | 0.059043 / 0.023109 (0.035934) | 0.322785 / 0.275898 (0.046887) | 0.378158 / 0.323480 (0.054678) | 0.004709 / 0.007986 (-0.003277) | 0.002907 / 0.004328 (-0.001421) | 0.061516 / 0.004250 (0.057266) | 0.047209 / 0.037052 (0.010157) | 0.346885 / 0.258489 (0.088396) | 0.381011 / 0.293841 (0.087170) | 0.027491 / 0.128546 (-0.101055) | 0.008014 / 0.075646 (-0.067632) | 0.260663 / 0.419271 (-0.158608) | 0.045427 / 0.043533 (0.001894) | 0.315277 / 0.255139 (0.060138) | 0.377902 / 0.283200 (0.094703) | 0.021371 / 0.141683 (-0.120311) | 1.416350 / 1.452155 (-0.035804) | 1.483345 / 1.492716 (-0.009372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203660 / 0.018006 (0.185654) | 0.569081 / 0.000490 (0.568591) | 0.002742 / 0.000200 (0.002542) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023456 / 0.037411 (-0.013955) | 0.073954 / 0.014526 (0.059428) | 0.082991 / 0.176557 (-0.093566) | 0.144781 / 0.737135 (-0.592354) | 0.083346 / 0.296338 (-0.212992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391542 / 0.215209 (0.176333) | 3.909505 / 2.077655 (1.831850) | 
1.862234 / 1.504120 (0.358114) | 1.676076 / 1.541195 (0.134881) | 1.727595 / 1.468490 (0.259105) | 0.501769 / 4.584777 (-4.083008) | 3.083697 / 3.745712 (-0.662016) | 2.819751 / 5.269862 (-2.450111) | 1.867265 / 4.565676 (-2.698411) | 0.057575 / 0.424275 (-0.366700) | 0.006478 / 0.007607 (-0.001129) | 0.466684 / 0.226044 (0.240640) | 4.657982 / 2.268929 (2.389054) | 2.347052 / 55.444624 (-53.097573) | 1.964688 / 6.876477 (-4.911789) | 2.077821 / 2.142072 (-0.064252) | 0.590591 / 4.805227 (-4.214636) | 0.124585 / 6.500664 (-6.376079) | 0.059468 / 0.075469 (-0.016001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223484 / 1.841788 (-0.618304) | 18.104638 / 8.074308 (10.030330) | 13.755126 / 10.191392 (3.563734) | 0.143158 / 0.680424 (-0.537266) | 0.017147 / 0.534201 (-0.517054) | 0.337427 / 0.579283 (-0.241856) | 0.352270 / 0.434364 (-0.082094) | 0.383718 / 0.540337 (-0.156619) | 0.534973 / 1.386936 (-0.851963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006039 / 0.011353 (-0.005314) | 0.003735 / 0.011008 (-0.007274) | 0.061954 / 0.038508 (0.023446) | 0.061786 / 0.023109 (0.038677) | 0.429420 / 0.275898 (0.153522) | 0.457629 / 0.323480 (0.134149) | 0.004748 / 0.007986 (-0.003237) | 0.002843 / 0.004328 (-0.001485) | 0.061811 / 0.004250 (0.057560) | 0.048740 / 0.037052 (0.011687) | 0.430066 / 0.258489 (0.171577) | 0.465971 / 0.293841 (0.172130) | 0.027577 / 0.128546 (-0.100969) | 0.007981 / 0.075646 (-0.067665) | 0.067580 / 0.419271 (-0.351692) | 0.042058 / 0.043533 (-0.001475) | 0.428412 / 0.255139 (0.173273) | 0.451054 / 0.283200 (0.167855) | 0.020850 / 0.141683 (-0.120833) | 1.453907 / 1.452155 (0.001752) | 1.509914 / 1.492716 (0.017197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237713 / 0.018006 (0.219707) | 0.418064 / 0.000490 (0.417575) | 0.006411 / 0.000200 (0.006211) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024950 / 0.037411 (-0.012462) | 0.076806 / 0.014526 (0.062281) | 0.085237 / 0.176557 (-0.091320) | 0.137940 / 0.737135 (-0.599196) | 0.086266 / 0.296338 (-0.210072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418666 / 0.215209 (0.203457) | 4.160547 / 2.077655 (2.082893) | 2.135671 / 1.504120 (0.631551) | 1.964985 / 1.541195 (0.423790) | 2.009447 / 1.468490 (0.540957) | 0.501377 / 4.584777 (-4.083400) | 3.064293 / 3.745712 (-0.681419) | 2.827153 / 5.269862 (-2.442709) | 1.854698 / 4.565676 (-2.710978) | 0.057662 / 0.424275 (-0.366613) | 0.006829 / 0.007607 (-0.000778) | 0.496730 / 0.226044 (0.270686) | 4.964663 / 2.268929 (2.695735) | 2.583133 / 55.444624 (-52.861491) | 2.329700 / 6.876477 (-4.546776) | 2.415521 / 2.142072 (0.273449) | 0.591973 / 4.805227 (-4.213255) | 0.126801 / 6.500664 (-6.373863) | 0.062811 / 0.075469 (-0.012659) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.348575 / 1.841788 (-0.493212) | 18.282861 / 8.074308 (10.208553) | 13.734056 / 10.191392 (3.542664) | 0.154987 / 0.680424 (-0.525437) | 0.016996 / 0.534201 (-0.517205) | 0.335264 / 0.579283 (-0.244019) | 0.356907 / 0.434364 (-0.077456) | 0.399185 / 0.540337 (-0.141152) | 0.540209 / 1.386936 (-0.846727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#887bef1217e0f4441d57bf0f4d1e806df12f2c50 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006768 / 0.011353 (-0.004585) | 0.004250 / 0.011008 (-0.006758) | 0.086780 / 0.038508 (0.048272) | 0.080872 / 0.023109 (0.057762) | 0.309281 / 0.275898 (0.033383) | 0.352293 / 0.323480 (0.028814) | 0.005604 / 0.007986 (-0.002382) | 0.003544 / 0.004328 (-0.000784) | 0.066910 / 0.004250 (0.062659) | 0.055568 / 0.037052 (0.018516) | 0.314931 / 0.258489 (0.056442) | 0.366026 / 0.293841 (0.072185) | 0.031247 / 0.128546 (-0.097300) | 0.008860 / 0.075646 (-0.066786) | 0.293210 / 0.419271 (-0.126061) | 0.052868 / 0.043533 (0.009335) | 0.316769 / 0.255139 (0.061630) | 0.352128 / 0.283200 (0.068929) | 0.025492 / 0.141683 (-0.116190) | 1.478379 / 1.452155 (0.026224) | 1.573652 / 1.492716 (0.080936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294975 / 0.018006 (0.276968) | 0.615093 / 0.000490 (0.614603) | 0.004279 / 0.000200 (0.004079) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031557 / 0.037411 (-0.005855) | 0.085026 / 0.014526 (0.070500) | 0.101221 / 0.176557 (-0.075336) | 0.157432 / 0.737135 (-0.579703) | 0.102350 / 0.296338 (-0.193988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384158 / 0.215209 (0.168949) | 3.826656 / 2.077655 (1.749001) | 1.873510 / 1.504120 (0.369390) | 1.721913 / 1.541195 (0.180718) | 1.848779 / 1.468490 (0.380289) | 0.485128 / 4.584777 (-4.099649) | 3.656660 / 3.745712 (-0.089052) | 3.441964 / 5.269862 (-1.827898) | 2.150611 / 4.565676 (-2.415066) | 0.056869 / 0.424275 (-0.367406) | 0.007382 / 0.007607 (-0.000225) | 0.458751 / 0.226044 (0.232707) | 4.585028 / 2.268929 (2.316099) | 2.439538 / 55.444624 (-53.005086) | 2.116959 / 6.876477 (-4.759518) | 2.459220 / 2.142072 (0.317147) | 0.580907 / 4.805227 (-4.224321) | 0.134502 / 6.500664 (-6.366162) | 0.062528 / 0.075469 (-0.012941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251006 / 1.841788 (-0.590782) | 20.755849 / 8.074308 (12.681541) | 14.456950 / 10.191392 (4.265558) | 0.167074 / 0.680424 (-0.513350) | 0.018482 / 0.534201 (-0.515719) | 0.395867 / 0.579283 (-0.183416) | 0.415620 / 0.434364 (-0.018744) | 0.462247 / 0.540337 
(-0.078090) | 0.645762 / 1.386936 (-0.741174) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007050 / 0.011353 (-0.004303) | 0.004421 / 0.011008 (-0.006587) | 0.065312 / 0.038508 (0.026804) | 0.089790 / 0.023109 (0.066681) | 0.366318 / 0.275898 (0.090420) | 0.403542 / 0.323480 (0.080062) | 0.005695 / 0.007986 (-0.002290) | 0.003642 / 0.004328 (-0.000687) | 0.064540 / 0.004250 (0.060289) | 0.060933 / 0.037052 (0.023881) | 0.369004 / 0.258489 (0.110515) | 0.408056 / 0.293841 (0.114215) | 0.032124 / 0.128546 (-0.096422) | 0.008960 / 0.075646 (-0.066686) | 0.071267 / 0.419271 (-0.348005) | 0.049745 / 0.043533 (0.006212) | 0.367203 / 0.255139 (0.112064) | 0.383009 / 0.283200 (0.099809) | 0.025330 / 0.141683 (-0.116353) | 1.518290 / 1.452155 (0.066135) | 1.581738 / 1.492716 (0.089022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.338281 / 0.018006 (0.320275) | 0.538195 / 0.000490 (0.537706) | 0.008498 / 0.000200 (0.008298) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033279 / 0.037411 (-0.004133) | 0.093233 / 0.014526 (0.078707) | 0.106019 / 0.176557 (-0.070538) | 0.161262 / 0.737135 (-0.575874) | 0.109935 / 0.296338 (-0.186404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411563 / 0.215209 (0.196354) | 4.102149 / 2.077655 (2.024495) | 2.108513 / 1.504120 (0.604393) | 1.945344 / 1.541195 (0.404150) | 2.066964 
/ 1.468490 (0.598474) | 0.482771 / 4.584777 (-4.102006) | 3.659160 / 3.745712 (-0.086552) | 3.420833 / 5.269862 (-1.849029) | 2.147276 / 4.565676 (-2.418400) | 0.056957 / 0.424275 (-0.367318) | 0.007898 / 0.007607 (0.000290) | 0.482401 / 0.226044 (0.256357) | 4.821044 / 2.268929 (2.552115) | 2.567993 / 55.444624 (-52.876631) | 2.336165 / 6.876477 (-4.540312) | 2.545066 / 2.142072 (0.402994) | 0.580888 / 4.805227 (-4.224339) | 0.134092 / 6.500664 (-6.366572) | 0.062681 / 0.075469 (-0.012788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.379124 / 1.841788 (-0.462664) | 21.627949 / 8.074308 (13.553641) | 15.064818 / 10.191392 (4.873426) | 0.169707 / 0.680424 (-0.510716) | 0.018671 / 0.534201 (-0.515530) | 0.400496 / 0.579283 (-0.178787) | 0.415542 / 0.434364 (-0.018822) | 0.484351 / 0.540337 (-0.055986) | 0.646046 / 1.386936 (-0.740890) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007113 / 0.011353 (-0.004240) | 0.004436 / 0.011008 (-0.006572) | 0.087422 / 0.038508 (0.048914) | 0.085996 / 0.023109 (0.062887) | 0.311772 / 0.275898 (0.035873) | 0.353281 / 0.323480 (0.029801) | 0.004562 / 0.007986 (-0.003423) | 0.003840 / 0.004328 (-0.000488) | 0.066500 / 0.004250 (0.062250) | 0.061293 / 0.037052 (0.024241) | 0.328840 / 0.258489 (0.070351) | 0.365587 / 0.293841 (0.071746) | 0.031802 / 0.128546 (-0.096744) | 0.008881 / 0.075646 (-0.066765) | 0.289671 / 0.419271 (-0.129601) | 0.053348 / 0.043533 (0.009816) | 0.307822 / 0.255139 (0.052683) | 0.342559 / 0.283200 (0.059360) | 0.025760 / 0.141683 (-0.115923) | 1.509944 / 1.452155 (0.057789) | 1.556634 / 1.492716 (0.063918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282036 / 0.018006 (0.264029) | 0.608350 / 0.000490 (0.607860) | 0.004843 / 
0.000200 (0.004643) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029810 / 0.037411 (-0.007601) | 0.086215 / 0.014526 (0.071689) | 0.102200 / 0.176557 (-0.074356) | 0.158051 / 0.737135 (-0.579084) | 0.103083 / 0.296338 (-0.193255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392119 / 0.215209 (0.176910) | 3.895796 / 2.077655 (1.818141) | 1.921118 / 1.504120 (0.416998) | 1.754271 / 1.541195 (0.213076) | 1.880991 / 1.468490 (0.412501) | 0.481158 / 4.584777 (-4.103618) | 3.609210 / 3.745712 (-0.136502) | 3.412018 / 5.269862 (-1.857843) | 2.131710 / 4.565676 (-2.433967) | 0.057122 / 0.424275 (-0.367153) | 0.007444 / 0.007607 (-0.000163) | 0.468880 / 0.226044 (0.242835) | 4.682441 / 2.268929 (2.413512) | 2.505613 / 55.444624 (-52.939012) | 2.149655 / 6.876477 (-4.726822) | 2.465904 / 2.142072 (0.323832) | 0.578877 / 4.805227 (-4.226350) | 0.133504 / 6.500664 (-6.367160) | 0.061422 / 0.075469 (-0.014047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269395 / 1.841788 (-0.572393) | 21.107558 / 8.074308 (13.033250) | 15.318502 / 10.191392 (5.127110) | 0.165273 / 0.680424 (-0.515151) | 0.018783 / 0.534201 (-0.515418) | 0.396259 / 0.579283 (-0.183024) | 0.412907 / 0.434364 (-0.021457) | 0.465723 / 0.540337 (-0.074615) | 0.638414 / 1.386936 (-0.748522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007083 / 0.011353 (-0.004270) | 0.004216 / 0.011008 (-0.006793) | 0.065362 / 0.038508 (0.026854) | 0.095454 / 0.023109 (0.072345) | 0.364220 / 0.275898 (0.088322) | 0.417650 / 0.323480 (0.094170) | 0.006114 / 0.007986 (-0.001872) | 0.003577 / 0.004328 (-0.000751) | 0.064830 / 0.004250 (0.060579) | 0.062535 / 0.037052 (0.025483) | 0.381844 / 0.258489 (0.123355) | 0.418996 / 0.293841 (0.125155) | 0.031386 / 0.128546 (-0.097160) | 0.008913 / 0.075646 (-0.066733) | 0.070860 / 0.419271 (-0.348411) | 0.049132 / 0.043533 (0.005599) | 0.360406 / 0.255139 (0.105267) | 0.392407 / 0.283200 (0.109207) | 0.024611 / 0.141683 (-0.117072) | 1.509051 / 1.452155 (0.056896) | 1.570288 / 1.492716 (0.077572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368611 / 0.018006 (0.350605) | 0.537587 / 0.000490 (0.537098) | 0.028056 / 0.000200 (0.027856) | 0.000317 / 0.000054 (0.000262) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031570 / 0.037411 (-0.005841) | 0.088985 / 0.014526 (0.074460) | 0.105268 / 0.176557 (-0.071288) | 0.156724 / 0.737135 (-0.580412) | 0.105266 / 0.296338 (-0.191073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413861 / 0.215209 (0.198652) | 4.127001 / 2.077655 (2.049347) | 2.112114 / 1.504120 (0.607994) | 1.945200 / 1.541195 (0.404005) | 2.083031 / 1.468490 (0.614540) | 0.488086 / 4.584777 (-4.096691) | 3.565584 / 3.745712 (-0.180128) | 3.380782 / 5.269862 (-1.889079) | 2.103481 / 4.565676 (-2.462195) | 0.058203 / 0.424275 (-0.366072) | 0.007996 / 0.007607 (0.000389) | 0.487986 / 0.226044 (0.261941) | 4.871023 / 2.268929 (2.602095) | 2.584632 / 55.444624 (-52.859992) | 2.240103 / 6.876477 (-4.636374) | 2.555165 / 2.142072 (0.413092) | 0.591950 / 4.805227 (-4.213278) | 0.134919 / 6.500664 (-6.365745) | 0.062868 / 0.075469 (-0.012601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369731 / 1.841788 (-0.472057) | 21.497888 / 8.074308 (13.423580) | 14.555054 / 10.191392 (4.363662) | 0.168768 / 0.680424 (-0.511656) | 0.018837 / 0.534201 (-0.515364) | 0.394512 / 0.579283 (-0.184771) | 0.405459 / 0.434364 (-0.028905) | 0.475479 / 0.540337 (-0.064858) | 0.631994 / 1.386936 (-0.754942) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002280) | 0.004894 / 0.011008 (-0.006114) | 0.108790 / 0.038508 (0.070282) | 0.081783 / 0.023109 (0.058674) | 0.381963 / 0.275898 (0.106064) | 0.450700 / 0.323480 (0.127220) | 0.006961 / 0.007986 (-0.001025) | 0.004035 / 0.004328 (-0.000293) | 0.081420 / 0.004250 (0.077169) | 0.058029 / 0.037052 (0.020976) | 0.437453 / 0.258489 (0.178964) | 0.472607 / 0.293841 (0.178766) | 0.048663 / 0.128546 (-0.079884) | 0.013512 / 0.075646 (-0.062134) | 0.406009 / 0.419271 (-0.013262) | 0.067616 / 0.043533 (0.024084) | 0.383641 / 0.255139 (0.128502) | 0.456734 / 0.283200 (0.173534) | 0.033391 / 0.141683 (-0.108292) | 1.753529 / 1.452155 (0.301375) | 1.859831 / 1.492716 (0.367115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215128 / 0.018006 (0.197122) | 0.538261 / 0.000490 (0.537771) | 0.005430 / 0.000200 (0.005230) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032664 / 0.037411 (-0.004748) | 0.093465 / 0.014526 (0.078939) | 0.106637 / 0.176557 (-0.069919) | 0.173642 / 0.737135 (-0.563494) | 0.113944 / 0.296338 (-0.182394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629212 / 0.215209 
(0.414003) | 6.116729 / 2.077655 (4.039075) | 2.818000 / 1.504120 (1.313880) | 2.515317 / 1.541195 (0.974122) | 2.466588 / 1.468490 (0.998098) | 0.850815 / 4.584777 (-3.733962) | 5.051292 / 3.745712 (1.305579) | 4.472138 / 5.269862 (-0.797724) | 2.968317 / 4.565676 (-1.597360) | 0.100173 / 0.424275 (-0.324102) | 0.008407 / 0.007607 (0.000800) | 0.743972 / 0.226044 (0.517928) | 7.397619 / 2.268929 (5.128690) | 3.596681 / 55.444624 (-51.847943) | 2.854674 / 6.876477 (-4.021803) | 3.114274 / 2.142072 (0.972201) | 1.064879 / 4.805227 (-3.740348) | 0.215981 / 6.500664 (-6.284683) | 0.078159 / 0.075469 (0.002690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543291 / 1.841788 (-0.298497) | 23.244641 / 8.074308 (15.170333) | 20.784610 / 10.191392 (10.593218) | 0.222002 / 0.680424 (-0.458422) | 0.028584 / 0.534201 (-0.505617) | 0.478563 / 0.579283 (-0.100720) | 0.556101 / 0.434364 (0.121737) | 0.547446 / 0.540337 (0.007109) | 0.764318 / 1.386936 (-0.622618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.004925 / 0.011008 (-0.006083) | 0.078995 / 0.038508 (0.040487) | 0.092878 / 0.023109 (0.069769) | 0.485615 / 0.275898 (0.209717) | 0.532157 / 0.323480 (0.208677) | 0.008228 / 0.007986 (0.000243) | 0.004777 / 0.004328 (0.000449) | 0.076892 / 0.004250 (0.072642) | 0.066905 / 0.037052 (0.029853) | 0.465497 / 0.258489 (0.207008) | 0.520153 / 0.293841 (0.226312) | 0.047357 / 0.128546 (-0.081189) | 0.016870 / 0.075646 (-0.058776) | 0.090481 / 0.419271 (-0.328791) | 0.060774 / 0.043533 (0.017241) | 0.474368 / 0.255139 (0.219229) | 0.503981 / 0.283200 (0.220781) | 0.036025 / 0.141683 (-0.105658) | 1.769939 / 1.452155 (0.317784) | 1.851518 / 1.492716 (0.358802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265947 / 0.018006 (0.247941) | 0.532317 / 0.000490 (0.531828) | 0.004997 / 0.000200 (0.004797) | 0.000130 / 0.000054 
(0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034112 / 0.037411 (-0.003299) | 0.102290 / 0.014526 (0.087764) | 0.109989 / 0.176557 (-0.066567) | 0.182813 / 0.737135 (-0.554323) | 0.111774 / 0.296338 (-0.184565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584893 / 0.215209 (0.369684) | 6.138505 / 2.077655 (4.060850) | 2.925761 / 1.504120 (1.421641) | 2.607320 / 1.541195 (1.066125) | 2.655827 / 1.468490 (1.187337) | 0.871140 / 4.584777 (-3.713637) | 5.051171 / 3.745712 (1.305459) | 4.708008 / 5.269862 (-0.561854) | 3.027485 / 4.565676 (-1.538191) | 0.100970 / 0.424275 (-0.323305) | 0.009640 / 0.007607 (0.002033) | 0.747818 / 0.226044 (0.521774) | 7.539930 / 2.268929 (5.271001) | 3.611693 / 55.444624 (-51.832931) | 2.924087 / 6.876477 (-3.952390) | 3.141993 / 2.142072 (0.999920) | 1.062921 / 4.805227 (-3.742306) | 0.213185 / 6.500664 (-6.287479) | 0.077146 / 0.075469 (0.001677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669182 / 1.841788 (-0.172606) | 23.810242 / 8.074308 (15.735934) | 21.220649 / 10.191392 (11.029257) | 0.212639 / 0.680424 (-0.467785) | 0.026705 / 0.534201 (-0.507496) | 0.469231 / 0.579283 (-0.110053) | 0.551672 / 0.434364 (0.117308) | 0.575043 / 0.540337 (0.034706) | 0.767511 / 1.386936 (-0.619425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n" ]
2023-08-08T15:43:56
2023-08-08T16:08:22
2023-08-08T15:49:06
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6129", "html_url": "https://github.com/huggingface/datasets/pull/6129", "diff_url": "https://github.com/huggingface/datasets/pull/6129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6129.patch", "merged_at": "2023-08-08T15:49:06" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6129/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6128/comments
https://api.github.com/repos/huggingface/datasets/issues/6128/events
https://github.com/huggingface/datasets/issues/6128
1,841,545,493
I_kwDODunzps5tw8EV
6,128
IndexError: Invalid key: 88 is out of bounds for size 0
{ "login": "TomasAndersonFang", "id": 38727343, "node_id": "MDQ6VXNlcjM4NzI3MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomasAndersonFang", "html_url": "https://github.com/TomasAndersonFang", "followers_url": "https://api.github.com/users/TomasAndersonFang/followers", "following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}", "gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions", "organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs", "repos_url": "https://api.github.com/users/TomasAndersonFang/repos", "events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}", "received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile", "> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 324, in _compile\r\n out_code = transform_code_object(code, transform)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py\", line 445, in transform_code_object\r\n transformations(instructions, code_options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 311, in transform\r\n tracer.run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1726, in run\r\n super().run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 576, in run\r\n and self.step()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 540, in step\r\n getattr(self, inst.opname)(inst)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1030, in LOAD_ATTR\r\n result = BuiltinVariable(getattr).call_function(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 566, in call_function\r\n result = handler(tx, *args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 931, in call_getattr\r\n return obj.var_getattr(tx, name).add_options(options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py\", line 124, in var_getattr\r\n subobj = inspect.getattr_static(base, name)\r\n File \"/apps/Arch/software/Python/3.10.8-GCCcore-12.2.0/lib/python3.10/inspect.py\", line 1777, in getattr_static\r\n raise AttributeError(attr)\r\nAttributeError: config\r\n\r\nfrom user code:\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/peft/peft_model.py\", line 909, in forward\r\n if self.base_model.config.model_type == \"mpt\":\r\n\r\nSet torch._dynamo.config.verbose=True for more information\r\n\r\n\r\nYou can suppress this exception and fall back to eager by setting:\r\n torch._dynamo.config.suppress_errors = True\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 228, in <module>\r\n main()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 221, in main\r\n trainer.train()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\n return inner_training_loop(\r\n File 
\"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2654, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2679, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 82, in forward\r\n return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 209, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 581, in forward\r\n return model_forward(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 569, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 337, in catch_errors\r\n return callback(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 404, in _convert_frame\r\n result = inner_convert(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 104, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 262, in _convert_frame_assert\r\n return _compile(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/utils.py\", line 163, in time_wrapper\r\n r = func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 394, in _compile\r\n raise InternalTorchDynamoError() from e\r\ntorch._dynamo.exc.InternalTorchDynamoError\r\n```", "Hi @TomasAndersonFang,\r\n\r\nI guess in this case it may be an issue with `transformers` (or `PyTorch`). I would recommend you open an issue on their repo." ]
2023-08-08T15:32:08
2023-08-10T09:31:12
null
NONE
null
null
null
### Describe the bug

This bug occurs when I use `torch.compile(model)` in my code, which seems to raise an error in the `datasets` lib.

### Steps to reproduce the bug

I use the following code to fine-tune Falcon on my private dataset.

```python
import transformers
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    AutoConfig,
    DataCollatorForSeq2Seq,
    Trainer,
    Seq2SeqTrainer,
    HfArgumentParser,
    Seq2SeqTrainingArguments,
    BitsAndBytesConfig,
)
from peft import (
    LoraConfig,
    get_peft_model,
    get_peft_model_state_dict,
    prepare_model_for_int8_training,
    set_peft_model_state_dict,
)
import torch
import os
import evaluate
import functools
from datasets import load_dataset
import bitsandbytes as bnb
import logging
import json
import copy
from typing import Dict, Optional, Sequence
from dataclasses import dataclass, field

# Lora settings
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
LORA_TARGET_MODULES = ["query_key_value"]


@dataclass
class ModelArguments:
    model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B")


@dataclass
class DataArguments:
    data_path: str = field(default=None, metadata={"help": "Path to the training data."})
    train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
    eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
    cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."})
    num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."})


@dataclass
class TrainingArguments(transformers.TrainingArguments):
    # cache_dir: Optional[str] = field(default=None)
    optim: str = field(default="adamw_torch")
    model_max_length: int = field(
        default=512,
        metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
    )
    is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."})


def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True):
    result = tokenizer(
        text,
        truncation=True,
        max_length=max_seq_len,
        padding=False,
        return_tensors=None,
    )
    if (
        result["input_ids"][-1] != tokenizer.eos_token_id
        and len(result["input_ids"]) < max_seq_len
        and add_eos_token
    ):
        result["input_ids"].append(tokenizer.eos_token_id)
        result["attention_mask"].append(1)
    if add_eos_token and len(result["input_ids"]) >= max_seq_len:
        result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id
        result["attention_mask"][max_seq_len - 1] = 1
    result["labels"] = result["input_ids"].copy()
    return result


def main():
    parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    config = AutoConfig.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=data_args.cache_path,
        trust_remote_code=True,
    )

    if training_args.is_lora:
        model = AutoModelForCausalLM.from_pretrained(
            model_args.model_name_or_path,
            cache_dir=data_args.cache_path,
            torch_dtype=torch.float16,
            trust_remote_code=True,
            load_in_8bit=True,
            quantization_config=BitsAndBytesConfig(
                load_in_8bit=True, llm_int8_threshold=6.0
            ),
        )
        model = prepare_model_for_int8_training(model)

        config = LoraConfig(
            r=LORA_R,
            lora_alpha=LORA_ALPHA,
            target_modules=LORA_TARGET_MODULES,
            lora_dropout=LORA_DROPOUT,
            bias="none",
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, config)
    else:
        model = AutoModelForCausalLM.from_pretrained(
            model_args.model_name_or_path,
            torch_dtype=torch.float16,
            cache_dir=data_args.cache_path,
            trust_remote_code=True,
        )
    model.config.use_cache = False

    def print_trainable_parameters(model):
        """
        Prints the number of trainable parameters in the model.
        """
        trainable_params = 0
        all_param = 0
        for _, param in model.named_parameters():
            all_param += param.numel()
            if param.requires_grad:
                trainable_params += param.numel()
        print(
            f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
        )

    print_trainable_parameters(model)

    tokenizer = AutoTokenizer.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=data_args.cache_path,
        model_max_length=training_args.model_max_length,
        padding_side="left",
        use_fast=True,
        trust_remote_code=True,
    )
    tokenizer.pad_token = tokenizer.eos_token

    # Load dataset
    def generate_and_tokenize_prompt(sample):
        input_text = sample["input"]
        target_text = sample["output"] + tokenizer.eos_token
        full_text = input_text + target_text
        tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512)
        tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512)
        input_len = len(tokenized_input_text["input_ids"]) - 1  # -1 for eos token
        tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:]
        return tokenized_full_text

    data_files = {}
    if data_args.train_file is not None:
        data_files["train"] = data_args.train_file
    if data_args.eval_file is not None:
        data_files["eval"] = data_args.eval_file
    dataset = load_dataset(data_args.data_path, data_files=data_files)
    train_dataset = dataset["train"]
    eval_dataset = dataset["eval"]
    train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
    eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
    data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)

    # Evaluation metrics
    def compute_metrics(eval_preds, tokenizer):
        metric = evaluate.load('exact_match')
        preds, labels = eval_preds
        # In case the model returns more than the prediction logits
        if isinstance(preds, tuple):
            preds = preds[0]
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        # Replace -100s in the labels as we can't decode them
        labels[labels == -100] = tokenizer.pad_token_id
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        # Some simple post-processing
        decoded_preds = [pred.strip() for pred in decoded_preds]
        decoded_labels = [label.strip() for label in decoded_labels]
        result = metric.compute(predictions=decoded_preds, references=decoded_labels)
        return {'exact_match': result['exact_match']}

    compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer)

    model = torch.compile(model)

    # Training
    trainer = Trainer(
        model=model,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        args=training_args,
        data_collator=data_collator,
        compute_metrics=compute_metrics_fn,
    )
    trainer.train()
    trainer.save_state()
    trainer.save_model(output_dir=training_args.output_dir)
    tokenizer.save_pretrained(save_directory=training_args.output_dir)


if __name__ == "__main__":
    main()
```

When I didn't use `torch.compile(model)`, my code worked well. But when I added this line to my code, it produced the following error:

```
Traceback (most recent call last):
  File "falcon_sft.py", line 230, in <module>
    main()
  File "falcon_sft.py", line 223, in main
    trainer.train()
  File "python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
    for step, inputs in enumerate(epoch_iterator):
  File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__
    current_batch = next(dataloader_iter)
  File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = self.dataset.__getitems__(possibly_batched_index)
  File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
    batch = self.__getitem__(keys)
  File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
    return self._getitem(key)
  File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
    pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
  File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
    _check_valid_index_key(key, size)
  File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
    _check_valid_index_key(int(max(key)), size=size)
  File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
    raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 88 is out of bounds for size 0
```

So I'm confused about why this error was generated, and how to fix it. Is this error produced by `datasets` or `torch.compile`?

### Expected behavior

I want to use `torch.compile` in my code.

### Environment info

- `datasets` version: 2.14.3
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
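For reference, the `IndexError` in the traceback is raised by `datasets` whenever a `Dataset` that reports size 0 is indexed. The sketch below is hypothetical (it is not taken from the reporter's setup); it reproduces the same message by dropping every column from a `Dataset`, which is one way a dataset can end up with length 0 even though it originally had rows:

```python
# Hypothetical sketch: indexing a datasets.Dataset that reports size 0
# raises the same "Invalid key: ... is out of bounds for size 0" error.
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2, 3]] * 100, "labels": [[0, 1, 2]] * 100})
print(len(ds))  # 100

# Removing every column leaves a dataset whose reported length is 0.
empty = ds.remove_columns(ds.column_names)
print(len(empty))  # 0

empty[88]  # IndexError: Invalid key: 88 is out of bounds for size 0
```

One possible explanation, stated as an assumption rather than a confirmed diagnosis: `Trainer` removes dataset columns that do not match the model's `forward` signature, and wrapping the model with `torch.compile` changes the signature it inspects, so every column gets dropped and the training dataset ends up with size 0.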
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6128/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6127/comments
https://api.github.com/repos/huggingface/datasets/issues/6127/events
https://github.com/huggingface/datasets/pull/6127
1,839,746,721
PR_kwDODunzps5XWdP5
6,127
Fix authentication issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006103 / 0.011353 (-0.005250) | 0.003588 / 0.011008 (-0.007420) | 0.080335 / 0.038508 (0.041827) | 0.059634 / 0.023109 (0.036525) | 0.356093 / 0.275898 (0.080195) | 0.407376 / 0.323480 (0.083896) | 0.005343 / 0.007986 (-0.002643) | 0.002928 / 0.004328 (-0.001400) | 0.062580 / 0.004250 (0.058330) | 0.047544 / 0.037052 (0.010491) | 0.364305 / 0.258489 (0.105816) | 0.421463 / 0.293841 (0.127623) | 0.027249 / 0.128546 (-0.101298) | 0.008010 / 0.075646 (-0.067636) | 0.262543 / 0.419271 (-0.156728) | 0.044978 / 0.043533 (0.001445) | 0.339344 / 0.255139 (0.084205) | 0.395288 / 0.283200 (0.112088) | 0.021425 / 0.141683 (-0.120258) | 1.439767 / 1.452155 (-0.012387) | 1.498081 / 1.492716 (0.005365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196976 / 0.018006 (0.178970) | 0.435383 / 0.000490 (0.434893) | 0.004559 / 0.000200 (0.004359) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023653 / 0.037411 (-0.013759) | 0.072944 / 0.014526 (0.058418) | 0.083651 / 0.176557 (-0.092906) | 0.144590 / 0.737135 (-0.592545) | 0.084844 / 0.296338 (-0.211494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398752 / 0.215209 (0.183543) | 3.959539 / 2.077655 (1.881884) | 
1.935277 / 1.504120 (0.431157) | 1.751994 / 1.541195 (0.210799) | 1.828386 / 1.468490 (0.359896) | 0.500492 / 4.584777 (-4.084284) | 3.086630 / 3.745712 (-0.659082) | 2.851664 / 5.269862 (-2.418198) | 1.869792 / 4.565676 (-2.695885) | 0.058509 / 0.424275 (-0.365766) | 0.006500 / 0.007607 (-0.001107) | 0.467468 / 0.226044 (0.241424) | 4.686168 / 2.268929 (2.417240) | 2.427632 / 55.444624 (-53.016993) | 2.193194 / 6.876477 (-4.683283) | 2.408574 / 2.142072 (0.266501) | 0.592173 / 4.805227 (-4.213054) | 0.125381 / 6.500664 (-6.375283) | 0.060679 / 0.075469 (-0.014790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236066 / 1.841788 (-0.605722) | 18.591689 / 8.074308 (10.517381) | 14.138774 / 10.191392 (3.947382) | 0.147455 / 0.680424 (-0.532968) | 0.016921 / 0.534201 (-0.517280) | 0.328129 / 0.579283 (-0.251154) | 0.348872 / 0.434364 (-0.085491) | 0.380311 / 0.540337 (-0.160026) | 0.532901 / 1.386936 (-0.854035) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005438) | 0.003614 / 0.011008 (-0.007394) | 0.062857 / 0.038508 (0.024349) | 0.060633 / 0.023109 (0.037524) | 0.419684 / 0.275898 (0.143786) | 0.449025 / 0.323480 (0.125546) | 0.004595 / 0.007986 (-0.003391) | 0.002861 / 0.004328 (-0.001467) | 0.063253 / 0.004250 (0.059003) | 0.048770 / 0.037052 (0.011718) | 0.419838 / 0.258489 (0.161349) | 0.465183 / 0.293841 (0.171342) | 0.027350 / 0.128546 (-0.101196) | 0.008065 / 0.075646 (-0.067582) | 0.068321 / 0.419271 (-0.350950) | 0.041083 / 0.043533 (-0.002449) | 0.400831 / 0.255139 (0.145692) | 0.449286 / 0.283200 (0.166086) | 0.020472 / 0.141683 (-0.121210) | 1.437215 / 1.452155 (-0.014940) | 1.503679 / 1.492716 (0.010963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230764 / 0.018006 (0.212758) | 0.420774 / 0.000490 (0.420285) | 0.004012 / 0.000200 (0.003812) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026009 / 0.037411 (-0.011402) | 0.077943 / 0.014526 (0.063417) | 0.087281 / 0.176557 (-0.089276) | 0.139422 / 0.737135 (-0.597713) | 0.089090 / 0.296338 (-0.207248) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417298 / 0.215209 (0.202088) | 4.152303 / 2.077655 (2.074648) | 2.179996 / 1.504120 (0.675877) | 2.020619 / 1.541195 (0.479424) | 2.085241 / 1.468490 (0.616751) | 0.501111 / 4.584777 (-4.083666) | 3.079849 / 3.745712 (-0.665863) | 2.820607 / 5.269862 (-2.449255) | 1.863988 / 4.565676 (-2.701688) | 0.057662 / 0.424275 (-0.366613) | 0.006778 / 0.007607 (-0.000830) | 0.498661 / 0.226044 (0.272616) | 4.986503 / 2.268929 (2.717574) | 2.620676 / 55.444624 (-52.823949) | 2.297546 / 6.876477 (-4.578931) | 2.458148 / 2.142072 (0.316075) | 0.599490 / 4.805227 (-4.205738) | 0.125102 / 6.500664 (-6.375562) | 0.061411 / 0.075469 (-0.014059) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323816 / 1.841788 (-0.517971) | 18.462614 / 8.074308 (10.388306) | 13.845826 / 10.191392 (3.654434) | 0.146115 / 0.680424 (-0.534309) | 0.016862 / 0.534201 (-0.517339) | 0.335449 / 0.579283 (-0.243834) | 0.343792 / 0.434364 (-0.090572) | 0.394068 / 0.540337 (-0.146269) | 0.536378 / 1.386936 (-0.850558) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de3f00368c9236e9410821f5fddb95d6069883c1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006825 / 0.011353 (-0.004527) | 0.004005 / 0.011008 (-0.007003) | 0.085504 / 0.038508 (0.046996) | 0.077252 / 0.023109 (0.054143) | 0.351891 / 0.275898 (0.075993) | 0.383404 / 0.323480 (0.059924) | 0.004153 / 0.007986 (-0.003833) | 0.003344 / 0.004328 (-0.000985) | 0.064936 / 0.004250 (0.060685) | 0.057653 / 0.037052 (0.020601) | 0.368155 / 0.258489 (0.109666) | 0.406122 / 0.293841 (0.112282) | 0.032049 / 0.128546 (-0.096497) | 0.008698 / 0.075646 (-0.066949) | 0.292394 / 0.419271 (-0.126878) | 0.053634 / 0.043533 (0.010101) | 0.358273 / 0.255139 (0.103134) | 0.378441 / 0.283200 (0.095242) | 0.026928 / 0.141683 (-0.114755) | 1.458718 / 1.452155 (0.006563) | 1.536231 / 1.492716 (0.043515) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213956 / 0.018006 (0.195950) | 0.458620 / 0.000490 (0.458130) | 0.002718 / 0.000200 (0.002519) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083922 / 0.014526 (0.069396) | 0.152056 / 0.176557 (-0.024501) | 0.151584 / 0.737135 (-0.585552) | 0.095698 / 0.296338 (-0.200641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407762 / 0.215209 (0.192553) | 4.074324 / 2.077655 (1.996669) | 2.089929 / 1.504120 (0.585809) | 1.920024 / 1.541195 (0.378829) | 2.013410 / 1.468490 (0.544920) | 0.486056 / 4.584777 (-4.098721) | 3.656869 / 3.745712 (-0.088843) | 3.304008 / 5.269862 (-1.965854) | 2.074363 / 4.565676 (-2.491313) | 0.057293 / 0.424275 (-0.366982) | 0.007240 / 0.007607 (-0.000367) | 0.482696 / 0.226044 (0.256652) | 4.833251 / 2.268929 (2.564322) | 2.570391 / 55.444624 (-52.874233) | 2.220619 / 6.876477 (-4.655857) | 2.426316 / 2.142072 (0.284243) | 0.584811 / 4.805227 (-4.220416) | 0.134907 / 6.500664 (-6.365757) | 0.061115 / 0.075469 (-0.014354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251969 / 1.841788 (-0.589818) | 19.601611 / 8.074308 (11.527303) | 14.190217 / 10.191392 (3.998825) | 0.166296 / 0.680424 (-0.514128) | 0.018334 / 0.534201 (-0.515867) | 0.395172 / 0.579283 (-0.184111) | 0.410440 / 0.434364 (-0.023924) | 0.462263 / 0.540337 
(-0.078074) | 0.645504 / 1.386936 (-0.741432) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004084 / 0.011008 (-0.006924) | 0.065208 / 0.038508 (0.026700) | 0.077809 / 0.023109 (0.054699) | 0.386472 / 0.275898 (0.110574) | 0.418686 / 0.323480 (0.095206) | 0.005346 / 0.007986 (-0.002640) | 0.003416 / 0.004328 (-0.000912) | 0.066209 / 0.004250 (0.061958) | 0.057517 / 0.037052 (0.020465) | 0.407684 / 0.258489 (0.149195) | 0.425438 / 0.293841 (0.131597) | 0.032166 / 0.128546 (-0.096380) | 0.008662 / 0.075646 (-0.066985) | 0.071712 / 0.419271 (-0.347560) | 0.049764 / 0.043533 (0.006231) | 0.394882 / 0.255139 (0.139743) | 0.403589 / 0.283200 (0.120389) | 0.023688 / 0.141683 (-0.117995) | 1.468488 / 1.452155 (0.016334) | 1.533118 / 1.492716 (0.040401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252949 / 0.018006 (0.234943) | 0.447355 / 0.000490 (0.446865) | 0.011721 / 0.000200 (0.011521) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031444 / 0.037411 (-0.005968) | 0.089390 / 0.014526 (0.074864) | 0.100103 / 0.176557 (-0.076454) | 0.153301 / 0.737135 (-0.583835) | 0.101336 / 0.296338 (-0.195003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408574 / 0.215209 (0.193365) | 4.073135 / 2.077655 (1.995480) | 2.086550 / 1.504120 (0.582430) | 1.930651 / 1.541195 (0.389457) | 2.013548 
/ 1.468490 (0.545058) | 0.477235 / 4.584777 (-4.107542) | 3.547545 / 3.745712 (-0.198167) | 3.321957 / 5.269862 (-1.947905) | 2.057705 / 4.565676 (-2.507971) | 0.056730 / 0.424275 (-0.367545) | 0.007882 / 0.007607 (0.000275) | 0.487297 / 0.226044 (0.261253) | 4.874184 / 2.268929 (2.605255) | 2.631129 / 55.444624 (-52.813496) | 2.235755 / 6.876477 (-4.640722) | 2.463329 / 2.142072 (0.321257) | 0.578308 / 4.805227 (-4.226919) | 0.132726 / 6.500664 (-6.367938) | 0.064883 / 0.075469 (-0.010586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347564 / 1.841788 (-0.494223) | 20.192973 / 8.074308 (12.118665) | 14.563553 / 10.191392 (4.372161) | 0.168244 / 0.680424 (-0.512180) | 0.018638 / 0.534201 (-0.515563) | 0.394789 / 0.579283 (-0.184494) | 0.419677 / 0.434364 (-0.014687) | 0.480274 / 0.540337 (-0.060063) | 0.641204 / 1.386936 (-0.745732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c7a0d56b60bf700d6a491fa30eaf66500969315 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005939 / 0.011353 (-0.005413) | 0.003457 / 0.011008 (-0.007551) | 0.079985 / 0.038508 (0.041477) | 0.056492 / 0.023109 (0.033383) | 0.312356 / 0.275898 (0.036458) | 0.354038 / 0.323480 (0.030558) | 0.004551 / 0.007986 (-0.003435) | 0.002828 / 0.004328 (-0.001501) | 0.062369 / 0.004250 (0.058119) | 0.044712 / 0.037052 (0.007660) | 0.318244 / 0.258489 (0.059755) | 0.361977 / 0.293841 (0.068136) | 0.026460 / 0.128546 (-0.102086) | 0.007928 / 0.075646 (-0.067719) | 0.261378 / 0.419271 (-0.157894) | 0.044209 / 0.043533 (0.000676) | 0.313931 / 0.255139 (0.058792) | 0.339553 / 0.283200 (0.056354) | 0.019776 / 0.141683 (-0.121907) | 1.443126 / 1.452155 (-0.009029) | 1.508149 / 1.492716 (0.015432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183801 / 0.018006 (0.165795) | 0.427967 / 0.000490 (0.427477) | 0.002028 / 
0.000200 (0.001828) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023697 / 0.037411 (-0.013715) | 0.072128 / 0.014526 (0.057602) | 0.083701 / 0.176557 (-0.092855) | 0.142821 / 0.737135 (-0.594315) | 0.082276 / 0.296338 (-0.214063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434427 / 0.215209 (0.219218) | 4.325962 / 2.077655 (2.248308) | 2.277115 / 1.504120 (0.772995) | 2.093736 / 1.541195 (0.552541) | 2.127984 / 1.468490 (0.659494) | 0.502336 / 4.584777 (-4.082441) | 3.023243 / 3.745712 (-0.722469) | 2.805154 / 5.269862 (-2.464708) | 1.821273 / 4.565676 (-2.744403) | 0.057480 / 0.424275 (-0.366795) | 0.006365 / 0.007607 (-0.001242) | 0.508258 / 0.226044 (0.282213) | 5.087950 / 2.268929 (2.819022) | 2.705029 / 55.444624 (-52.739596) | 2.378392 / 6.876477 (-4.498085) | 2.515380 / 2.142072 (0.373307) | 0.589283 / 4.805227 (-4.215944) | 0.125719 / 6.500664 (-6.374945) | 0.061074 / 0.075469 (-0.014395) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221895 / 1.841788 (-0.619893) | 18.025917 / 8.074308 (9.951609) | 13.556901 / 10.191392 (3.365509) | 0.142614 / 0.680424 (-0.537809) | 0.016731 / 0.534201 (-0.517469) | 0.328374 / 0.579283 (-0.250910) | 0.342553 / 0.434364 (-0.091811) | 0.374502 / 0.540337 (-0.165836) | 0.534173 / 1.386936 (-0.852763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005817 / 0.011353 (-0.005536) | 0.003500 / 0.011008 (-0.007509) | 0.062240 / 0.038508 (0.023732) | 0.058128 / 0.023109 (0.035019) | 0.424014 / 0.275898 (0.148116) | 0.468453 / 0.323480 (0.144973) | 0.004641 / 0.007986 (-0.003345) | 0.002821 / 0.004328 (-0.001508) | 0.062180 / 0.004250 (0.057930) | 0.047578 / 0.037052 (0.010526) | 0.427367 / 0.258489 (0.168878) | 0.467889 / 0.293841 (0.174048) | 0.027144 / 0.128546 (-0.101403) | 0.007969 / 0.075646 (-0.067678) | 0.067764 / 0.419271 (-0.351508) | 0.040719 / 0.043533 (-0.002814) | 0.423663 / 0.255139 (0.168524) | 0.458556 / 0.283200 (0.175356) | 0.019196 / 0.141683 (-0.122487) | 1.471546 / 1.452155 (0.019392) | 1.547541 / 1.492716 (0.054825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228777 / 0.018006 (0.210770) | 0.406663 / 0.000490 (0.406173) | 0.003688 / 0.000200 (0.003488) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025494 / 0.037411 (-0.011917) | 0.076339 / 0.014526 (0.061814) | 0.084233 / 0.176557 (-0.092324) | 0.136995 / 0.737135 (-0.600140) | 0.085443 / 0.296338 (-0.210895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420441 / 0.215209 (0.205232) | 4.187018 / 2.077655 (2.109363) | 2.142139 / 1.504120 (0.638019) | 1.974530 / 1.541195 (0.433335) | 2.027321 / 1.468490 (0.558831) | 0.498116 / 4.584777 (-4.086661) | 2.988514 / 3.745712 (-0.757198) | 2.782046 / 5.269862 (-2.487816) | 1.821725 / 4.565676 (-2.743951) | 0.057711 / 0.424275 (-0.366564) | 0.006664 / 0.007607 (-0.000944) | 0.491015 / 0.226044 (0.264971) | 4.921037 / 2.268929 (2.652108) | 2.574964 / 55.444624 (-52.869661) | 2.251703 / 6.876477 (-4.624774) | 2.361154 / 2.142072 (0.219082) | 0.593362 / 4.805227 (-4.211865) | 0.126107 / 6.500664 (-6.374557) | 0.061840 / 0.075469 (-0.013630) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327459 / 1.841788 (-0.514328) | 18.062960 / 8.074308 (9.988652) | 13.669253 / 10.191392 (3.477861) | 0.130719 / 0.680424 (-0.549705) | 0.016564 / 0.534201 (-0.517637) | 0.335821 / 0.579283 (-0.243462) | 0.341691 / 0.434364 (-0.092673) | 0.392651 / 0.540337 (-0.147686) | 0.529650 / 1.386936 (-0.857286) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c65806b0542996e56825ab46a3ce8f9c07ab0df3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009625 / 0.011353 (-0.001728) | 0.005354 / 0.011008 (-0.005654) | 0.114350 / 0.038508 (0.075842) | 0.086637 / 0.023109 (0.063528) | 0.465381 / 0.275898 (0.189483) | 0.490411 / 0.323480 (0.166931) | 0.006575 / 0.007986 (-0.001411) | 0.004287 / 0.004328 (-0.000041) | 0.093134 / 0.004250 (0.088884) | 0.060209 / 0.037052 (0.023156) | 0.459570 / 0.258489 (0.201080) | 0.523320 / 0.293841 (0.229479) | 0.047943 / 0.128546 (-0.080603) | 0.014764 / 0.075646 (-0.060882) | 0.383887 / 0.419271 (-0.035384) | 0.069864 / 0.043533 (0.026331) | 0.469122 / 0.255139 (0.213983) | 0.509953 / 0.283200 (0.226753) | 0.037800 / 0.141683 (-0.103883) | 1.877589 / 1.452155 (0.425434) | 2.014913 / 1.492716 (0.522197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.309146 / 0.018006 (0.291140) | 0.644390 / 0.000490 (0.643900) | 0.005017 / 0.000200 (0.004817) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032964 / 0.037411 (-0.004447) | 0.103236 / 0.014526 (0.088711) | 0.119950 / 0.176557 (-0.056607) | 0.207674 / 0.737135 (-0.529461) | 0.117278 / 0.296338 (-0.179060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.605464 / 0.215209 
(0.390255) | 6.027805 / 2.077655 (3.950150) | 2.719725 / 1.504120 (1.215605) | 2.262752 / 1.541195 (0.721558) | 2.330310 / 1.468490 (0.861820) | 0.862537 / 4.584777 (-3.722240) | 5.347080 / 3.745712 (1.601368) | 4.792170 / 5.269862 (-0.477691) | 3.103694 / 4.565676 (-1.461983) | 0.103646 / 0.424275 (-0.320629) | 0.009411 / 0.007607 (0.001804) | 0.743052 / 0.226044 (0.517008) | 7.289684 / 2.268929 (5.020755) | 3.436530 / 55.444624 (-52.008094) | 2.722440 / 6.876477 (-4.154036) | 2.952380 / 2.142072 (0.810308) | 1.047688 / 4.805227 (-3.757539) | 0.212724 / 6.500664 (-6.287940) | 0.081473 / 0.075469 (0.006004) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.714437 / 1.841788 (-0.127351) | 24.384330 / 8.074308 (16.310022) | 22.444162 / 10.191392 (12.252770) | 0.226264 / 0.680424 (-0.454160) | 0.030530 / 0.534201 (-0.503671) | 0.473999 / 0.579283 (-0.105284) | 0.575005 / 0.434364 (0.140641) | 0.542789 / 0.540337 (0.002451) | 0.776079 / 1.386936 (-0.610857) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009366 / 0.011353 (-0.001987) | 0.005239 / 0.011008 (-0.005769) | 0.085116 / 0.038508 (0.046608) | 0.089600 / 0.023109 (0.066491) | 0.485778 / 0.275898 (0.209880) | 0.540054 / 0.323480 (0.216574) | 0.006290 / 0.007986 (-0.001695) | 0.004054 / 0.004328 (-0.000274) | 0.083535 / 0.004250 (0.079284) | 0.067200 / 0.037052 (0.030148) | 0.519520 / 0.258489 (0.261031) | 0.544049 / 0.293841 (0.250208) | 0.054300 / 0.128546 (-0.074246) | 0.013650 / 0.075646 (-0.061996) | 0.102515 / 0.419271 (-0.316757) | 0.063054 / 0.043533 (0.019522) | 0.491724 / 0.255139 (0.236585) | 0.547498 / 0.283200 (0.264298) | 0.039266 / 0.141683 (-0.102416) | 1.801226 / 1.452155 (0.349071) | 1.861778 / 1.492716 (0.369061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313009 / 0.018006 (0.295003) | 0.587695 / 0.000490 (0.587205) | 0.004972 / 0.000200 (0.004772) | 0.000110 / 0.000054 
(0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029230 / 0.037411 (-0.008181) | 0.091154 / 0.014526 (0.076628) | 0.110505 / 0.176557 (-0.066052) | 0.164204 / 0.737135 (-0.572932) | 0.107812 / 0.296338 (-0.188526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.610535 / 0.215209 (0.395326) | 6.162517 / 2.077655 (4.084862) | 2.866718 / 1.504120 (1.362598) | 2.542412 / 1.541195 (1.001218) | 2.584136 / 1.468490 (1.115645) | 0.874319 / 4.584777 (-3.710458) | 5.257184 / 3.745712 (1.511472) | 4.705840 / 5.269862 (-0.564022) | 2.971708 / 4.565676 (-1.593969) | 0.099026 / 0.424275 (-0.325249) | 0.009142 / 0.007607 (0.001535) | 0.728660 / 0.226044 (0.502615) | 7.560922 / 2.268929 (5.291994) | 3.439521 / 55.444624 (-52.005103) | 2.854730 / 6.876477 (-4.021746) | 3.088951 / 2.142072 (0.946879) | 0.973621 / 4.805227 (-3.831606) | 0.209792 / 6.500664 (-6.290872) | 0.081107 / 0.075469 (0.005638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716809 / 1.841788 (-0.124978) | 24.386927 / 8.074308 (16.312619) | 20.715524 / 10.191392 (10.524131) | 0.260831 / 0.680424 (-0.419592) | 0.030701 / 0.534201 (-0.503500) | 0.490018 / 0.579283 (-0.089265) | 0.590424 / 0.434364 (0.156060) | 0.589942 / 0.540337 (0.049604) | 0.798094 / 1.386936 (-0.588842) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c0a77dc943de68a17f23f141517028c734c78623 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006592 / 0.011353 (-0.004761) | 0.003880 / 0.011008 (-0.007128) | 0.083761 / 0.038508 (0.045253) | 0.075966 / 0.023109 (0.052857) | 0.315291 / 0.275898 (0.039393) | 0.355920 / 0.323480 (0.032440) | 0.004972 / 0.007986 (-0.003014) | 0.003053 / 0.004328 (-0.001275) | 0.063553 / 0.004250 (0.059302) | 0.050794 / 0.037052 (0.013742) | 0.317681 / 0.258489 (0.059192) | 0.361991 / 0.293841 (0.068150) | 0.028119 / 0.128546 (-0.100427) | 0.008203 / 0.075646 (-0.067443) | 0.271756 / 0.419271 (-0.147516) | 0.046701 / 0.043533 (0.003168) | 0.316520 / 0.255139 (0.061381) | 0.350499 / 0.283200 (0.067300) | 0.022399 / 0.141683 (-0.119284) | 1.416017 / 1.452155 (-0.036138) | 1.503087 / 1.492716 (0.010371) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208250 / 0.018006 (0.190244) | 0.470345 / 0.000490 (0.469856) | 0.003687 / 0.000200 (0.003487) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026163 / 0.037411 (-0.011248) | 0.083315 / 0.014526 (0.068789) | 0.088541 / 0.176557 (-0.088015) | 0.150078 / 0.737135 (-0.587057) | 0.088862 / 0.296338 (-0.207476) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404911 / 0.215209 (0.189702) | 4.059257 / 2.077655 (1.981602) | 1.890987 / 1.504120 (0.386867) | 1.726608 / 1.541195 (0.185413) | 1.767479 / 1.468490 (0.298989) | 0.518826 / 4.584777 (-4.065951) | 3.212145 / 3.745712 (-0.533567) | 3.029933 / 5.269862 (-2.239929) | 2.000203 / 4.565676 (-2.565474) | 0.059631 / 0.424275 (-0.364644) | 0.006707 / 0.007607 (-0.000900) | 0.485741 / 0.226044 (0.259697) | 4.871938 / 2.268929 (2.603010) | 2.418856 / 55.444624 (-53.025769) | 2.084847 / 6.876477 (-4.791630) | 2.207992 / 2.142072 (0.065920) | 0.614354 / 4.805227 (-4.190873) | 0.128932 / 6.500664 (-6.371732) | 0.062342 / 0.075469 (-0.013127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.325792 / 1.841788 (-0.515995) | 19.718995 / 8.074308 (11.644687) | 15.278535 / 10.191392 (5.087143) | 0.146719 / 0.680424 (-0.533705) | 0.017718 / 0.534201 (-0.516483) | 0.335709 / 0.579283 (-0.243574) | 0.378060 / 0.434364 (-0.056304) | 
0.391135 / 0.540337 (-0.149202) | 0.548045 / 1.386936 (-0.838891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006504 / 0.011353 (-0.004849) | 0.003742 / 0.011008 (-0.007266) | 0.064405 / 0.038508 (0.025897) | 0.077618 / 0.023109 (0.054509) | 0.365325 / 0.275898 (0.089427) | 0.408109 / 0.323480 (0.084629) | 0.004909 / 0.007986 (-0.003076) | 0.002972 / 0.004328 (-0.001356) | 0.063933 / 0.004250 (0.059682) | 0.052916 / 0.037052 (0.015863) | 0.370891 / 0.258489 (0.112402) | 0.412134 / 0.293841 (0.118293) | 0.028171 / 0.128546 (-0.100375) | 0.008150 / 0.075646 (-0.067497) | 0.069248 / 0.419271 (-0.350024) | 0.042353 / 0.043533 (-0.001180) | 0.368117 / 0.255139 (0.112978) | 0.397548 / 0.283200 (0.114348) | 0.022967 / 0.141683 (-0.118716) | 1.472740 / 1.452155 (0.020586) | 1.524028 / 1.492716 (0.031311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256854 / 0.018006 (0.238848) | 0.471499 / 0.000490 (0.471009) | 0.009609 / 0.000200 (0.009409) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027978 / 0.037411 (-0.009433) | 0.086741 / 0.014526 (0.072215) | 0.091189 / 0.176557 (-0.085368) | 0.146117 / 0.737135 (-0.591018) | 0.092358 / 0.296338 (-0.203980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426356 / 0.215209 (0.211147) | 4.263782 / 2.077655 (2.186127) | 2.178198 / 1.504120 (0.674078) | 2.015405 / 1.541195 
(0.474211) | 2.055966 / 1.468490 (0.587476) | 0.507531 / 4.584777 (-4.077246) | 3.175967 / 3.745712 (-0.569745) | 3.055697 / 5.269862 (-2.214165) | 1.987663 / 4.565676 (-2.578014) | 0.058452 / 0.424275 (-0.365823) | 0.006944 / 0.007607 (-0.000663) | 0.502534 / 0.226044 (0.276489) | 5.024693 / 2.268929 (2.755765) | 2.754971 / 55.444624 (-52.689653) | 2.470845 / 6.876477 (-4.405632) | 2.698675 / 2.142072 (0.556602) | 0.602357 / 4.805227 (-4.202871) | 0.129490 / 6.500664 (-6.371174) | 0.065127 / 0.075469 (-0.010342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.398487 / 1.841788 (-0.443301) | 19.692279 / 8.074308 (11.617971) | 15.124064 / 10.191392 (4.932672) | 0.148938 / 0.680424 (-0.531486) | 0.017418 / 0.534201 (-0.516783) | 0.340480 / 0.579283 (-0.238803) | 0.377223 / 0.434364 (-0.057141) | 0.405303 / 0.540337 (-0.135034) | 0.548923 / 1.386936 (-0.838013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#58e62af004b6b8b84dcfd897a4bc71637cfa6c3f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006433 / 0.011353 (-0.004920) | 0.004002 / 0.011008 (-0.007006) | 0.084130 / 0.038508 (0.045622) | 0.070628 / 0.023109 (0.047519) | 0.312372 / 0.275898 (0.036474) | 0.343993 / 0.323480 (0.020513) | 0.003936 / 0.007986 (-0.004050) | 0.003336 / 0.004328 (-0.000993) | 0.064715 / 0.004250 (0.060465) | 0.052511 / 0.037052 (0.015458) | 0.314092 / 0.258489 (0.055603) | 0.363152 / 0.293841 (0.069311) | 0.030898 / 0.128546 (-0.097648) | 0.008396 / 0.075646 (-0.067250) | 0.288083 / 0.419271 (-0.131188) | 0.051654 / 0.043533 (0.008122) | 0.315252 / 0.255139 (0.060113) | 0.346756 / 0.283200 (0.063556) | 0.025167 / 0.141683 (-0.116515) | 1.487265 / 1.452155 (0.035110) | 1.557528 / 1.492716 (0.064812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206517 / 0.018006 (0.188510) | 0.458359 / 0.000490 
(0.457869) | 0.003719 / 0.000200 (0.003519) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029631 / 0.037411 (-0.007780) | 0.083856 / 0.014526 (0.069330) | 0.340431 / 0.176557 (0.163875) | 0.153864 / 0.737135 (-0.583271) | 0.095951 / 0.296338 (-0.200388) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379182 / 0.215209 (0.163973) | 3.783396 / 2.077655 (1.705741) | 1.835932 / 1.504120 (0.331813) | 1.667563 / 1.541195 (0.126369) | 1.739309 / 1.468490 (0.270818) | 0.478957 / 4.584777 (-4.105820) | 3.521974 / 3.745712 (-0.223738) | 3.237635 / 5.269862 (-2.032227) | 2.000300 / 4.565676 (-2.565377) | 0.056389 / 0.424275 (-0.367887) | 0.007242 / 0.007607 (-0.000365) | 0.452642 / 0.226044 (0.226598) | 4.524339 / 2.268929 (2.255411) | 2.346210 / 55.444624 (-53.098414) | 1.957196 / 6.876477 (-4.919281) | 2.180051 / 2.142072 (0.037979) | 0.570205 / 4.805227 (-4.235022) | 0.131346 / 6.500664 (-6.369318) | 0.059327 / 0.075469 (-0.016142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244709 / 1.841788 (-0.597079) | 19.566277 / 8.074308 (11.491969) | 14.172598 / 10.191392 (3.981206) | 0.166493 / 0.680424 (-0.513931) | 0.018281 / 0.534201 (-0.515920) | 0.391608 / 0.579283 (-0.187675) | 0.402642 / 0.434364 (-0.031722) | 0.464974 / 0.540337 (-0.075364) | 0.637565 / 1.386936 (-0.749371) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006929 / 0.011353 (-0.004424) | 0.004114 / 0.011008 (-0.006894) | 0.064589 / 0.038508 (0.026081) | 0.083334 / 0.023109 (0.060225) | 0.391280 / 0.275898 (0.115382) | 0.426157 / 0.323480 (0.102678) | 0.005336 / 0.007986 (-0.002650) | 0.003395 / 0.004328 (-0.000934) | 0.064560 / 0.004250 (0.060310) | 0.057094 / 0.037052 (0.020042) | 0.398959 / 0.258489 (0.140470) | 0.432470 / 0.293841 (0.138629) | 0.031412 / 0.128546 (-0.097134) | 0.008670 / 0.075646 (-0.066976) | 0.071249 / 0.419271 (-0.348022) | 0.048934 / 0.043533 (0.005401) | 0.384207 / 0.255139 (0.129068) | 0.407992 / 0.283200 (0.124792) | 0.024492 / 0.141683 (-0.117191) | 1.467788 / 1.452155 (0.015634) | 1.541011 / 1.492716 (0.048295) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279607 / 0.018006 (0.261600) | 0.448899 / 0.000490 (0.448410) | 0.020990 / 0.000200 (0.020790) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030313 / 0.037411 (-0.007099) | 0.089209 / 0.014526 (0.074684) | 0.101024 / 0.176557 (-0.075532) | 0.153468 / 0.737135 (-0.583667) | 0.103219 / 0.296338 (-0.193120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429176 / 0.215209 (0.213967) | 4.302234 / 2.077655 (2.224580) | 2.291103 / 1.504120 (0.786983) | 2.126257 / 1.541195 (0.585062) | 2.207090 / 1.468490 (0.738600) | 0.484643 / 4.584777 (-4.100134) | 3.557429 / 3.745712 (-0.188283) | 3.253804 / 5.269862 (-2.016058) | 2.026087 / 4.565676 (-2.539589) | 0.057793 / 0.424275 (-0.366482) | 0.007761 / 0.007607 (0.000154) | 0.504819 / 0.226044 (0.278775) | 5.046868 / 2.268929 (2.777940) | 2.773149 / 55.444624 (-52.671475) | 2.398036 / 6.876477 (-4.478440) | 2.608094 / 2.142072 (0.466021) | 0.630499 / 4.805227 (-4.174729) | 0.135496 / 6.500664 (-6.365168) | 0.061329 / 0.075469 (-0.014140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327124 / 1.841788 (-0.514664) | 19.889796 / 8.074308 (11.815488) | 14.196100 / 10.191392 (4.004708) | 0.161963 / 0.680424 (-0.518461) | 0.018529 / 0.534201 (-0.515672) | 0.392325 / 0.579283 (-0.186958) | 0.404836 / 0.434364 (-0.029528) | 0.475898 / 0.540337 (-0.064439) | 0.633563 / 1.386936 (-0.753373) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4684fc1032321abf0d494b0c130ea7c82ebda80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006390 / 0.011353 (-0.004963) | 0.003683 / 0.011008 (-0.007325) | 0.081274 / 0.038508 (0.042766) | 0.062193 / 0.023109 (0.039083) | 0.355360 / 0.275898 (0.079462) | 0.396471 / 0.323480 (0.072992) | 0.003569 / 0.007986 (-0.004416) | 0.003928 / 0.004328 (-0.000400) | 0.062292 / 0.004250 (0.058041) | 0.049700 / 0.037052 (0.012648) | 0.354604 / 0.258489 (0.096115) | 0.419436 / 0.293841 (0.125595) | 0.027151 / 0.128546 (-0.101395) | 0.007954 / 0.075646 (-0.067692) | 0.262231 / 0.419271 (-0.157041) | 0.045483 / 0.043533 (0.001950) | 0.354285 / 0.255139 (0.099146) | 0.385178 / 0.283200 (0.101978) | 0.021183 / 0.141683 (-0.120500) | 1.420785 / 1.452155 (-0.031370) | 1.531545 / 1.492716 (0.038829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202298 / 0.018006 (0.184292) | 0.442172 / 0.000490 (0.441683) | 0.003565 / 0.000200 (0.003366) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024229 / 0.037411 (-0.013183) | 0.074352 / 0.014526 (0.059826) | 0.087530 / 0.176557 (-0.089026) | 0.146478 / 0.737135 (-0.590658) | 0.085145 / 0.296338 (-0.211194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388395 / 0.215209 
(0.173186) | 3.877623 / 2.077655 (1.799968) | 1.882444 / 1.504120 (0.378324) | 1.707871 / 1.541195 (0.166676) | 1.772132 / 1.468490 (0.303642) | 0.491937 / 4.584777 (-4.092840) | 3.057947 / 3.745712 (-0.687765) | 2.822390 / 5.269862 (-2.447471) | 1.879719 / 4.565676 (-2.685957) | 0.056830 / 0.424275 (-0.367445) | 0.006415 / 0.007607 (-0.001192) | 0.458945 / 0.226044 (0.232900) | 4.594502 / 2.268929 (2.325574) | 2.339677 / 55.444624 (-53.104948) | 1.983750 / 6.876477 (-4.892727) | 2.173792 / 2.142072 (0.031719) | 0.580390 / 4.805227 (-4.224838) | 0.124568 / 6.500664 (-6.376096) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265108 / 1.841788 (-0.576680) | 18.415254 / 8.074308 (10.340946) | 13.963829 / 10.191392 (3.772437) | 0.148926 / 0.680424 (-0.531498) | 0.016919 / 0.534201 (-0.517282) | 0.331082 / 0.579283 (-0.248201) | 0.345777 / 0.434364 (-0.088587) | 0.381123 / 0.540337 (-0.159214) | 0.543297 / 1.386936 (-0.843639) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003717 / 0.011008 (-0.007291) | 0.063653 / 0.038508 (0.025144) | 0.063723 / 0.023109 (0.040613) | 0.360233 / 0.275898 (0.084335) | 0.398353 / 0.323480 (0.074873) | 0.004696 / 0.007986 (-0.003290) | 0.002876 / 0.004328 (-0.001452) | 0.063057 / 0.004250 (0.058806) | 0.050258 / 0.037052 (0.013206) | 0.362946 / 0.258489 (0.104457) | 0.403260 / 0.293841 (0.109419) | 0.027738 / 0.128546 (-0.100809) | 0.008025 / 0.075646 (-0.067621) | 0.068781 / 0.419271 (-0.350491) | 0.042114 / 0.043533 (-0.001419) | 0.363546 / 0.255139 (0.108407) | 0.385640 / 0.283200 (0.102440) | 0.021757 / 0.141683 (-0.119926) | 1.482364 / 1.452155 (0.030209) | 1.571859 / 1.492716 (0.079143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235628 / 0.018006 (0.217622) | 0.439909 / 0.000490 (0.439419) | 0.003070 / 0.000200 (0.002870) | 0.000075 / 0.000054 
(0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027045 / 0.037411 (-0.010366) | 0.080413 / 0.014526 (0.065887) | 0.088953 / 0.176557 (-0.087603) | 0.141907 / 0.737135 (-0.595228) | 0.090604 / 0.296338 (-0.205735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423250 / 0.215209 (0.208041) | 4.216510 / 2.077655 (2.138855) | 2.162946 / 1.504120 (0.658826) | 2.014561 / 1.541195 (0.473366) | 2.086347 / 1.468490 (0.617857) | 0.496591 / 4.584777 (-4.088186) | 3.089594 / 3.745712 (-0.656118) | 2.853640 / 5.269862 (-2.416221) | 1.878149 / 4.565676 (-2.687527) | 0.056914 / 0.424275 (-0.367361) | 0.006762 / 0.007607 (-0.000845) | 0.493470 / 0.226044 (0.267426) | 4.929966 / 2.268929 (2.661037) | 2.640885 / 55.444624 (-52.803739) | 2.335950 / 6.876477 (-4.540527) | 2.565866 / 2.142072 (0.423793) | 0.585433 / 4.805227 (-4.219794) | 0.124969 / 6.500664 (-6.375695) | 0.062361 / 0.075469 (-0.013108) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369144 / 1.841788 (-0.472644) | 19.037582 / 8.074308 (10.963274) | 14.069141 / 10.191392 (3.877749) | 0.146469 / 0.680424 (-0.533954) | 0.016911 / 0.534201 (-0.517290) | 0.336802 / 0.579283 (-0.242482) | 0.336411 / 0.434364 (-0.097953) | 0.392360 / 0.540337 (-0.147977) | 0.536078 / 1.386936 (-0.850858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12cfc1196e62847e2e8239fbd727a02cbc86ddec \"CML watermark\")\n" ]
2023-08-07T15:41:25
2023-08-08T15:24:59
2023-08-08T15:16:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6127", "html_url": "https://github.com/huggingface/datasets/pull/6127", "diff_url": "https://github.com/huggingface/datasets/pull/6127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6127.patch", "merged_at": "2023-08-08T15:16:22" }
This PR fixes 3 authentication issues: - Fix authentication when passing `token`. - Fix authentication in `Audio.decode_example` and `Image.decode_example`. - Fix authentication to resolve `data_files` in repositories without a loading script. This PR also fixes our CI so that we properly test when passing `token` and we do not use the token stored in `HfFolder`. Fix #6126. ## Details ### Fix authentication when passing `token` See c0a77dc943de68a17f23f141517028c734c78623 The root issue arose when the `token` was set on an already instantiated `DownloadConfig` and thus not propagated to `self._storage_options`: ```python download_config.token = token ``` As this usage pattern is very common, the fix consists of overriding `DownloadConfig.__setattr__`. This fixes authentication issues in the following functions: - `load_dataset` and `load_dataset_builder` - `Dataset.push_to_hub` and `DatasetDict.push_to_hub` - `inspect.get_dataset_config_info`, `inspect.get_dataset_infos` and `inspect.get_dataset_split_names` ### Fix authentication in `Audio.decode_example` and `Image.decode_example` See: 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f The `token` was not set because the code wrongly tried to parse the `repo_id` from an HTTP URL (`"http://..."`) instead of an HfFileSystem URL (`"hf://"`). ### Fix authentication to resolve `data_files` in repositories without a loading script See: e4684fc1032321abf0d494b0c130ea7c82ebda80 This is fixed by passing `download_config` to the function `create_builder_configs_from_metadata_configs`.
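A minimal, self-contained sketch of the `__setattr__` idea described above. This is a toy stand-in for the real `DownloadConfig` (simplified field names and a flat `storage_options` dict), not the actual patch:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class DownloadConfig:  # toy stand-in, not datasets.DownloadConfig
    token: Optional[str] = None
    storage_options: Dict[str, Any] = field(default_factory=dict)

    def __post_init__(self) -> None:
        # Propagate a token passed at construction time.
        if self.token is not None:
            self.storage_options["token"] = self.token

    def __setattr__(self, name: str, value: Any) -> None:
        super().__setattr__(name, value)
        # Keep storage_options in sync when `token` is assigned after instantiation,
        # e.g. `download_config.token = token`. The hasattr guard is needed because the
        # dataclass-generated __init__ assigns `token` before `storage_options` exists.
        if name == "token" and value is not None and hasattr(self, "storage_options"):
            self.storage_options["token"] = value


cfg = DownloadConfig()
cfg.token = "hf_xxx"  # late assignment, the pattern that previously dropped the token
assert cfg.storage_options["token"] == "hf_xxx"
```

The `hasattr` guard keeps construction working while still catching the late-assignment case that caused the bug.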
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6127/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6126/comments
https://api.github.com/repos/huggingface/datasets/issues/6126/events
https://github.com/huggingface/datasets/issues/6126
1,839,675,320
I_kwDODunzps5tpze4
6,126
Private datasets do not load when passing token
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Our CI did not catch this issue because with current implementation, stored token in `HfFolder` (which always exists) is used by default.", "I can confirm this and have the same problem (and just went almost crazy because I couldn't figure out the source of this problem because on another computer everything worked well even with `DownloadMode.FORCE_REDOWNLOAD`).", "We are planning to do a patch release today, after the merge of the fix:\r\n- #6127\r\n\r\nIn the meantime, the problem can be circumvented by passing `download_config` instead:\r\n```python\r\nfrom datasets import DownloadConfig, load_dataset\r\n\r\nload_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n``` ", "> We are planning to do a patch release today, after the merge of the fix:\r\n> \r\n> * [Fix authentication issues #6127](https://github.com/huggingface/datasets/pull/6127)\r\n> \r\n> \r\n> In the meantime, the problem can be circumvented by passing `download_config` instead:\r\n> \r\n> ```python\r\n> from datasets import DownloadConfig, load_dataset\r\n> \r\n> load_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n> ```\r\n\r\nThis did not work for me (there was some other error with the split being an unexpected size 0). Downgrading to 2.13 fixed it...." ]
2023-08-07T15:06:47
2023-08-08T15:16:23
2023-08-08T15:16:23
MEMBER
null
null
null
### Describe the bug Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`. This is a non-planned backward incompatible breaking change. Note that private datasets do load if instead `download_config` is passed: ```python from datasets import DownloadConfig, load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>")) ds ``` gives ``` Dataset({ features: ['text'], num_rows: 4 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") ``` gives ``` --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) [<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1793 download_config = download_config.copy() if download_config else DownloadConfig() 1794 download_config.storage_options.update(storage_options) -> 1795 dataset_module = dataset_module_factory( 1796 path, 1797 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1485 if isinstance(e1, EmptyDatasetError): -> 1486 raise e1 from None 1487 if isinstance(e1, FileNotFoundError): 1488 raise FileNotFoundError( [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1474 download_config=download_config, 1475 download_mode=download_mode, -> 1476 ).get_module() 1477 except ( 1478 Exception [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self) 1030 sanitize_patterns(self.data_files) 1031 if self.data_files is not None -> 1032 else get_data_patterns(base_path, download_config=self.download_config) 1033 ) 1034 data_files = DataFilesDict.from_patterns( [/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config) 457 return _get_data_files_patterns(resolver) 458 except FileNotFoundError: --> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 460 461 EmptyDatasetError: The directory at 
hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6126/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6125/comments
https://api.github.com/repos/huggingface/datasets/issues/6125/events
https://github.com/huggingface/datasets/issues/6125
1,837,980,986
I_kwDODunzps5tjV06
6,125
Reinforcement Learning and Robotics are not task categories in HF datasets metadata
{ "login": "StoneT2000", "id": 35373228, "node_id": "MDQ6VXNlcjM1MzczMjI4", "avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StoneT2000", "html_url": "https://github.com/StoneT2000", "followers_url": "https://api.github.com/users/StoneT2000/followers", "following_url": "https://api.github.com/users/StoneT2000/following{/other_user}", "gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions", "organizations_url": "https://api.github.com/users/StoneT2000/orgs", "repos_url": "https://api.github.com/users/StoneT2000/repos", "events_url": "https://api.github.com/users/StoneT2000/events{/privacy}", "received_events_url": "https://api.github.com/users/StoneT2000/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-05T23:59:42
2023-08-05T23:59:42
null
NONE
null
null
null
### Describe the bug In https://huggingface.co/models there are task categories for RL and robotics but none in https://huggingface.co/datasets Our lab is currently moving our datasets over to hugging face and would like to be able to add those 2 tags Moreover we see some older datasets that do have that tag, but we can't seem to add it ourselves. ### Steps to reproduce the bug 1. Create a new dataset on Hugging face 2. Try to type reinforcemement-learning or robotics into the tasks categories, it does not allow you to commit ### Expected behavior Expected to be able to add RL and robotics as task categories as some previous datasets have these tags ### Environment info N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6125/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6124/comments
https://api.github.com/repos/huggingface/datasets/issues/6124/events
https://github.com/huggingface/datasets/issues/6124
1,837,868,112
I_kwDODunzps5ti6RQ
6,124
Datasets crashing runs due to KeyError
{ "login": "conceptofmind", "id": 25208228, "node_id": "MDQ6VXNlcjI1MjA4MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conceptofmind", "html_url": "https://github.com/conceptofmind", "followers_url": "https://api.github.com/users/conceptofmind/followers", "following_url": "https://api.github.com/users/conceptofmind/following{/other_user}", "gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}", "starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions", "organizations_url": "https://api.github.com/users/conceptofmind/orgs", "repos_url": "https://api.github.com/users/conceptofmind/repos", "events_url": "https://api.github.com/users/conceptofmind/events{/privacy}", "received_events_url": "https://api.github.com/users/conceptofmind/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-05T17:48:56
2023-08-05T17:48:56
null
NONE
null
null
null
### Describe the bug Hi all, I have been running into a pretty persistent issue recently when trying to load datasets. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` I receive a KeyError which crashes the runs. ``` Traceback (most recent call last): main() train_dataset = load_dataset( ^^^^^^^^^^^^^ builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ dataset_module = dataset_module_factory( ^^^^^^^^^^^^^^^^^^^^^^^ raise e1 from None ).get_module() ^^^^^^^^^^^^ else get_data_patterns(base_path, download_config=self.download_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ return _get_data_files_patterns(resolver) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ data_files = pattern_resolver(pattern) ^^^^^^^^^^^^^^^^^^^^^^^^^ fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)] ^^^^^^^^^^^^^^ allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs): listing = self.ls(path, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ "last_modified": parse_datetime(tree_item["lastCommit"]["date"]), ~~~~~~~~~^^^^^^^^^^^^^^ KeyError: 'lastCommit' ``` Any help would be greatly appreciated. Thank you, Enrico ### Steps to reproduce the bug Load the dataset from the Huggingface hub. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` ### Expected behavior Loads the dataset. ### Environment info datasets-2.14.3 CUDA 11.8 Python 3.11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6124/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6123/comments
https://api.github.com/repos/huggingface/datasets/issues/6123/events
https://github.com/huggingface/datasets/issues/6123
1,837,789,294
I_kwDODunzps5tinBu
6,123
Inaccurate Bounding Boxes in "wildreceipt" Dataset
{ "login": "HamzaGbada", "id": 50714796, "node_id": "MDQ6VXNlcjUwNzE0Nzk2", "avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HamzaGbada", "html_url": "https://github.com/HamzaGbada", "followers_url": "https://api.github.com/users/HamzaGbada/followers", "following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}", "gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}", "starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions", "organizations_url": "https://api.github.com/users/HamzaGbada/orgs", "repos_url": "https://api.github.com/users/HamzaGbada/repos", "events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}", "received_events_url": "https://api.github.com/users/HamzaGbada/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-05T14:34:13
2023-08-06T13:27:25
null
NONE
null
null
null
### Describe the bug I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset. To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face: **Example 1:** ![image](https://github.com/huggingface/datasets/assets/50714796/7a6604d2-899d-4102-a008-1a28c90698f1) ![image](https://github.com/huggingface/datasets/assets/50714796/eba458c7-d3af-4868-a520-8b683aa96f66) ![image](https://github.com/huggingface/datasets/assets/50714796/9f394891-5f5b-46f7-8e52-071b724aedab) **Example 2:** ![image](https://github.com/huggingface/datasets/assets/50714796/a2b2a8d3-124e-4990-b64a-5133cf4be2fe) ![image](https://github.com/huggingface/datasets/assets/50714796/6ee25642-35aa-40ad-ac1e-899d33be90df) ![image](https://github.com/huggingface/datasets/assets/50714796/5e42ff91-9fc4-4520-8803-0e225656f96c) It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar). This inconsistency in bounding box accuracy warrants investigation and rectification for maintaining the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated. ### Steps to reproduce the bug ```python import matplotlib.pyplot as plt from datasets import load_dataset # Define functions to convert bounding box formats def convert_format1(box): x, y, w, h = box x2, y2 = x + w, y + h return [x, y, x2, y2] def convert_format2(box): x1, y1, x2, y2 = box return [x1, y1, x2, y2] def plot_cropped_image(image, box, title): cropped_image = image.crop(box) plt.imshow(cropped_image) plt.title(title) plt.axis('off') plt.savefig(title+'.png') plt.show() doc_index = 1 word_index = 3 dataset = load_dataset("Theivaprakasham/wildreceipt")['train'] bbox_hugging_face = dataset[doc_index]['bboxes'][word_index] text_unit_face = dataset[doc_index]['words'][word_index] common_box_hugface_1 = convert_format1(bbox_hugging_face) common_box_hugface_2 = convert_format2(bbox_hugging_face) plot_cropped_image(image_hugging, common_box_hugface_1, f'Hugging Face Bouding boxes (x,y,w,h format) \n its associated text unit: {text_unit_face}') plot_cropped_image(image_hugging, common_box_hugface_2, f'Hugging Face Bouding boxes (x1,y1,x2, y2 format) \n its associated text unit: {text_unit_face}') ``` ### Expected behavior The bounding boxes generated by the "wildreceipt" dataset in HuggingFace implementation loading commands should accurately match the actual labels and bounding boxes of the dataset. ### Environment info - Python version: 3.8 - Hugging Face datasets version: 2.14.2 - Dataset file taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar
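Note that the reproduction snippet above uses `image_hugging` without defining it. A hedged guess at the missing line, assuming the split exposes the receipt photo as a PIL image under an `image` column (common for image datasets on the Hub, but not verified here):

```python
# Assumption: the dataset stores the receipt photo as a PIL image in an "image" column.
image_hugging = dataset[doc_index]["image"]
```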
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6123/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6122/comments
https://api.github.com/repos/huggingface/datasets/issues/6122/events
https://github.com/huggingface/datasets/issues/6122
1,837,335,721
I_kwDODunzps5tg4Sp
6,122
Upload README via `push_to_hub`
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-08-04T21:00:27
2023-08-04T21:01:19
null
NONE
null
null
null
### Feature request `push_to_hub` now allows users to upload datasets programmatically. However, based on the latest docs, we still need to open the dataset page to add the README file manually. That said, I did discover a snippet that initializes a README for every `push_to_hub`: ``` dataset_card = ( DatasetCard( "---\n" + str(dataset_card_data) + "\n---\n" + f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)' ) if dataset_card is None else dataset_card ) HfApi(endpoint=config.HF_ENDPOINT).upload_file( path_or_fileobj=str(dataset_card).encode(), path_in_repo="README.md", repo_id=repo_id, token=token, repo_type="dataset", revision=branch, ) ``` So, if we could enable `push_to_hub` to upload a README file of our own instead of the auto-generated one, it would save a ton of time and would definitely alleviate the current "lack-of-dataset-card" situation. ### Motivation As elaborated above. ### Your contribution I might be able to make a PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6122/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6121/comments
https://api.github.com/repos/huggingface/datasets/issues/6121/events
https://github.com/huggingface/datasets/pull/6121
1,836,761,712
PR_kwDODunzps5XMsWd
6,121
Small typo in the code example for creating an imagefolder dataset
{ "login": "WangXin93", "id": 19688994, "node_id": "MDQ6VXNlcjE5Njg4OTk0", "avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangXin93", "html_url": "https://github.com/WangXin93", "followers_url": "https://api.github.com/users/WangXin93/followers", "following_url": "https://api.github.com/users/WangXin93/following{/other_user}", "gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions", "organizations_url": "https://api.github.com/users/WangXin93/orgs", "repos_url": "https://api.github.com/users/WangXin93/repos", "events_url": "https://api.github.com/users/WangXin93/events{/privacy}", "received_events_url": "https://api.github.com/users/WangXin93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin" ]
2023-08-04T13:36:59
2023-08-04T13:45:32
2023-08-04T13:41:43
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6121", "html_url": "https://github.com/huggingface/datasets/pull/6121", "diff_url": "https://github.com/huggingface/datasets/pull/6121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6121.patch", "merged_at": null }
Fix typo in the code example for loading an imagefolder dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6121/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6120/comments
https://api.github.com/repos/huggingface/datasets/issues/6120/events
https://github.com/huggingface/datasets/issues/6120
1,836,026,938
I_kwDODunzps5tb4w6
6,120
Lookahead streaming support?
{ "login": "PicoCreator", "id": 17175484, "node_id": "MDQ6VXNlcjE3MTc1NDg0", "avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PicoCreator", "html_url": "https://github.com/PicoCreator", "followers_url": "https://api.github.com/users/PicoCreator/followers", "following_url": "https://api.github.com/users/PicoCreator/following{/other_user}", "gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}", "starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions", "organizations_url": "https://api.github.com/users/PicoCreator/orgs", "repos_url": "https://api.github.com/users/PicoCreator/repos", "events_url": "https://api.github.com/users/PicoCreator/events{/privacy}", "received_events_url": "https://api.github.com/users/PicoCreator/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-08-04T04:01:52
2023-08-04T04:02:04
null
NONE
null
null
null
### Feature request From what I understand, a streaming dataset currently pulls and processes the data only when it is requested. This can introduce significant latency in the training process, since each segment has to be waited on before it can be consumed. While the delays may be dataset specific (or even mapping/tokenizer specific), would it be possible to introduce a `streaming_lookahead` parameter for predictable workloads (including shuffled datasets with a fixed seed)? Since we can predict in advance what the next few samples will be, they could be fetched while the current set is being trained on. With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the latency spent waiting for the dataset to be ready between batches. ### Motivation Faster streaming performance while training over extra-large, TB-sized datasets. ### Your contribution I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported.
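As a rough illustration of the kind of lookahead being requested (not an existing `datasets` feature), a generic prefetching wrapper can keep a bounded buffer of upcoming examples filled by a background thread while the current batch is consumed:

```python
import queue
import threading
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def prefetch(iterable: Iterable[T], lookahead: int = 64) -> Iterator[T]:
    """Yield items from `iterable`, keeping up to `lookahead` items fetched ahead of time."""
    buffer: "queue.Queue" = queue.Queue(maxsize=lookahead)
    sentinel = object()

    def producer() -> None:
        for item in iterable:
            buffer.put(item)  # blocks when the buffer is full
        buffer.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buffer.get()
        if item is sentinel:
            return
        yield item


# Hypothetical usage with a streaming dataset:
# stream = load_dataset("some/dataset", streaming=True, split="train")
# for example in prefetch(stream, lookahead=256):
#     ...
```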
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6120/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6119/comments
https://api.github.com/repos/huggingface/datasets/issues/6119/events
https://github.com/huggingface/datasets/pull/6119
1,835,996,350
PR_kwDODunzps5XKI19
6,119
[Docs] Add description of `select_columns` to guide
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6119). All of your documentation changes will be reflected on that endpoint." ]
2023-08-04T03:13:30
2023-08-04T23:15:51
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6119", "html_url": "https://github.com/huggingface/datasets/pull/6119", "diff_url": "https://github.com/huggingface/datasets/pull/6119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6119.patch", "merged_at": null }
Closes #6116
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6119/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6118/comments
https://api.github.com/repos/huggingface/datasets/issues/6118/events
https://github.com/huggingface/datasets/issues/6118
1,835,940,417
I_kwDODunzps5tbjpB
6,118
IterableDataset.from_generator() fails with pickle error when provided a generator or iterator
{ "login": "finkga", "id": 1281051, "node_id": "MDQ6VXNlcjEyODEwNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finkga", "html_url": "https://github.com/finkga", "followers_url": "https://api.github.com/users/finkga/followers", "following_url": "https://api.github.com/users/finkga/following{/other_user}", "gists_url": "https://api.github.com/users/finkga/gists{/gist_id}", "starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finkga/subscriptions", "organizations_url": "https://api.github.com/users/finkga/orgs", "repos_url": "https://api.github.com/users/finkga/repos", "events_url": "https://api.github.com/users/finkga/events{/privacy}", "received_events_url": "https://api.github.com/users/finkga/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-04T01:45:04
2023-08-04T01:45:04
null
NONE
null
null
null
### Describe the bug **Description** Providing a generator in an instantiation of IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the generator argument is supplied with a generator. **Code example** ``` def line_generator(files: List[Path]): if isinstance(files, str): files = [Path(files)] for file in files: if isinstance(file, str): file = Path(file) yield from open(file,'r').readlines() ... model_training_files = ['file1.txt', 'file2.txt', 'file3.txt'] train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files)) ``` **Traceback** Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__ self.gen.throw(type, value, traceback) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields yield File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps dump(obj, file) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump Pickler(file, recurse=True).dump(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump StockPickler.dump(self, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump self.save(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems save(v) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save rv = reduce(self.proto) TypeError: cannot pickle 'generator' object ### Steps to reproduce the bug 1. Create a set of text files to iterate over. 2. Create a generator that returns the lines in each file until all files are exhausted. 3. Instantiate the dataset over the generator by instantiating an IterableDataset.from_generator(). 4. Wait for the explosion. ### Expected behavior I would expect that since the function claims to accept a generator that there would be no crash. 
Instead, I would expect the dataset to return all the lines in the files as queued up in the `line_generator()` function. ### Environment info datasets.__version__ == '2.13.1' Python 3.9.6 Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
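For context on the error above: `IterableDataset.from_generator` expects a generator *function* (a callable, optionally with `gen_kwargs`), which it pickles for caching and sharding; a generator *object* cannot be pickled. A sketch of the pattern that is expected to work, reusing the hypothetical file names from the report (examples are yielded as dicts, which `from_generator` expects):

```python
from pathlib import Path
from typing import List, Union

from datasets import IterableDataset


def line_generator(files: List[Union[str, Path]]):
    for file in files:
        with open(Path(file), "r") as f:
            for line in f:
                # from_generator expects dict examples, not bare strings
                yield {"text": line}


model_training_files = ["file1.txt", "file2.txt", "file3.txt"]  # hypothetical files
train_dataset = IterableDataset.from_generator(
    generator=line_generator,                    # pass the callable itself, not line_generator(...)
    gen_kwargs={"files": model_training_files},  # forwarded to the callable when iterating
)
```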
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6118/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6117/comments
https://api.github.com/repos/huggingface/datasets/issues/6117/events
https://github.com/huggingface/datasets/pull/6117
1,835,213,848
PR_kwDODunzps5XHktw
6,117
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012516 / 0.011353 (0.001163) | 0.004725 / 0.011008 (-0.006283) | 0.112245 / 0.038508 (0.073736) | 0.079146 / 0.023109 (0.056037) | 0.386415 / 0.275898 (0.110517) | 0.420441 / 0.323480 (0.096961) | 0.005682 / 0.007986 (-0.002304) | 0.004169 / 0.004328 (-0.000160) | 0.077847 / 0.004250 (0.073597) | 0.055763 / 0.037052 (0.018711) | 0.385529 / 0.258489 (0.127040) | 0.422711 / 0.293841 (0.128870) | 0.047212 / 0.128546 (-0.081334) | 0.013711 / 0.075646 (-0.061935) | 0.342856 / 0.419271 (-0.076416) | 0.066788 / 0.043533 (0.023255) | 0.380728 / 0.255139 (0.125589) | 0.416241 / 0.283200 (0.133041) | 0.034676 / 0.141683 (-0.107007) | 1.679661 / 1.452155 (0.227506) | 1.838014 / 1.492716 (0.345297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219556 / 0.018006 (0.201550) | 0.524728 / 0.000490 (0.524238) | 0.005045 / 0.000200 (0.004845) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.085937 / 0.014526 (0.071412) | 0.099245 / 0.176557 (-0.077311) | 0.158995 / 0.737135 (-0.578141) | 0.101504 / 0.296338 (-0.194835) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.582200 / 0.215209 (0.366991) | 5.794340 / 2.077655 (3.716685) | 2.473635 / 1.504120 (0.969515) | 2.168135 / 1.541195 (0.626941) | 2.215886 / 1.468490 (0.747396) | 0.855599 / 4.584777 (-3.729178) | 5.003067 / 3.745712 (1.257354) | 4.503566 / 5.269862 (-0.766295) | 2.912248 / 4.565676 (-1.653428) | 0.103267 / 0.424275 (-0.321008) | 0.012114 / 0.007607 (0.004507) | 0.712240 / 0.226044 (0.486196) | 7.131946 / 2.268929 (4.863017) | 3.280052 / 55.444624 (-52.164573) | 2.583472 / 6.876477 (-4.293004) | 2.820758 / 2.142072 (0.678686) | 1.132097 / 4.805227 (-3.673131) | 0.232191 / 6.500664 (-6.268473) | 0.082966 / 0.075469 (0.007497) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581125 / 1.841788 (-0.260662) | 22.723878 / 8.074308 (14.649570) | 19.969347 / 10.191392 (9.777955) | 0.234365 / 0.680424 (-0.446059) | 0.030245 / 0.534201 (-0.503956) | 0.470843 / 0.579283 (-0.108440) | 0.558069 / 0.434364 (0.123705) | 0.534878 / 0.540337 (-0.005460) | 0.801025 / 1.386936 (-0.585911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008524 / 0.011353 (-0.002829) | 0.005083 / 0.011008 (-0.005925) | 0.078054 / 0.038508 (0.039546) | 0.082025 / 0.023109 (0.058915) | 0.458027 / 0.275898 (0.182129) | 0.498232 / 0.323480 (0.174752) | 0.005938 / 0.007986 (-0.002048) | 0.003776 / 0.004328 (-0.000553) | 0.080413 / 0.004250 (0.076163) | 0.060485 / 0.037052 (0.023433) | 0.462816 / 0.258489 (0.204327) | 0.513970 / 0.293841 (0.220129) | 0.047574 / 0.128546 (-0.080973) | 0.013424 / 0.075646 (-0.062222) | 0.087707 / 0.419271 (-0.331565) | 0.065007 / 0.043533 (0.021474) | 0.465844 / 0.255139 (0.210705) | 0.498474 / 0.283200 (0.215274) | 0.033518 / 0.141683 (-0.108164) | 1.737507 / 1.452155 (0.285352) | 1.848291 / 1.492716 (0.355574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316710 / 0.018006 (0.298703) | 0.504415 / 0.000490 (0.503925) | 0.042128 / 0.000200 
(0.041928) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032097 / 0.037411 (-0.005314) | 0.099371 / 0.014526 (0.084845) | 0.109311 / 0.176557 (-0.067246) | 0.177373 / 0.737135 (-0.559762) | 0.110753 / 0.296338 (-0.185585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688060 / 0.215209 (0.472851) | 6.255219 / 2.077655 (4.177564) | 2.696845 / 1.504120 (1.192725) | 2.395424 / 1.541195 (0.854230) | 2.414870 / 1.468490 (0.946380) | 0.865704 / 4.584777 (-3.719073) | 5.086828 / 3.745712 (1.341116) | 4.648107 / 5.269862 (-0.621754) | 3.091119 / 4.565676 (-1.474558) | 0.101787 / 0.424275 (-0.322489) | 0.008829 / 0.007607 (0.001222) | 0.772398 / 0.226044 (0.546354) | 7.700366 / 2.268929 (5.431438) | 3.608632 / 55.444624 (-51.835992) | 2.923309 / 6.876477 (-3.953168) | 2.952141 / 2.142072 (0.810069) | 1.093006 / 4.805227 (-3.712221) | 0.224363 / 6.500664 (-6.276301) | 0.074927 / 0.075469 (-0.000542) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638414 / 1.841788 (-0.203374) | 23.486781 / 8.074308 (15.412473) | 21.129104 / 10.191392 (10.937712) | 0.259955 / 0.680424 (-0.420469) | 0.027305 / 0.534201 (-0.506895) | 0.464448 / 0.579283 (-0.114835) | 0.553737 / 0.434364 (0.119373) | 0.571318 / 0.540337 (0.030981) | 0.772917 / 1.386936 (-0.614019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ec5ee9e78b464364796651d995823c7ecb0f951 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009093 / 0.011353 (-0.002260) | 0.005283 / 0.011008 (-0.005725) | 0.112299 / 0.038508 (0.073791) | 0.081341 / 0.023109 (0.058232) | 0.363799 / 0.275898 (0.087901) | 0.409261 / 0.323480 (0.085781) | 0.006400 / 0.007986 (-0.001586) | 0.003965 / 0.004328 (-0.000363) | 0.074389 / 0.004250 (0.070139) | 0.060654 / 0.037052 (0.023602) | 0.391046 / 0.258489 (0.132557) | 0.430514 / 0.293841 (0.136673) | 0.054900 / 0.128546 (-0.073646) | 0.017972 / 0.075646 (-0.057675) | 0.410875 / 0.419271 (-0.008396) | 0.067405 / 0.043533 (0.023873) | 0.371468 / 0.255139 (0.116329) | 0.435061 / 0.283200 (0.151861) | 0.038063 / 0.141683 (-0.103620) | 1.733509 / 1.452155 (0.281354) | 1.833899 / 1.492716 (0.341182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243230 / 0.018006 (0.225224) | 0.605636 / 0.000490 (0.605146) | 0.004890 / 0.000200 (0.004690) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027624 / 0.037411 (-0.009787) | 0.084799 / 0.014526 (0.070273) | 0.104405 / 0.176557 (-0.072152) | 0.165383 / 0.737135 (-0.571752) | 0.102083 / 0.296338 (-0.194255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578334 / 0.215209 (0.363125) | 5.369520 / 2.077655 (3.291866) | 2.294174 / 1.504120 (0.790055) | 2.054195 / 1.541195 (0.513000) | 2.007304 / 1.468490 (0.538814) | 0.839283 / 4.584777 (-3.745494) | 5.262288 / 3.745712 (1.516576) | 4.363346 / 5.269862 (-0.906516) | 2.854903 / 4.565676 (-1.710773) | 0.096975 / 0.424275 (-0.327300) | 0.008237 / 0.007607 (0.000630) | 0.646746 / 0.226044 (0.420702) | 6.250621 / 2.268929 (3.981693) | 2.900377 / 55.444624 (-52.544247) | 2.283238 / 6.876477 (-4.593239) | 2.443785 / 2.142072 (0.301713) | 0.991719 / 4.805227 (-3.813508) | 0.189755 / 6.500664 (-6.310909) | 0.067906 / 0.075469 (-0.007563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.515563 / 1.841788 (-0.326225) | 21.956499 / 8.074308 (13.882191) | 19.161750 / 10.191392 (8.970358) | 0.238199 / 0.680424 (-0.442225) | 0.026771 / 0.534201 (-0.507430) | 0.450195 / 0.579283 (-0.129088) | 0.585168 / 
0.434364 (0.150804) | 0.522945 / 0.540337 (-0.017393) | 0.776244 / 1.386936 (-0.610693) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.005021 / 0.011008 (-0.005988) | 0.087308 / 0.038508 (0.048800) | 0.077760 / 0.023109 (0.054650) | 0.425313 / 0.275898 (0.149415) | 0.451470 / 0.323480 (0.127990) | 0.006848 / 0.007986 (-0.001137) | 0.004812 / 0.004328 (0.000484) | 0.071198 / 0.004250 (0.066947) | 0.058325 / 0.037052 (0.021273) | 0.427411 / 0.258489 (0.168922) | 0.466069 / 0.293841 (0.172228) | 0.048686 / 0.128546 (-0.079861) | 0.011841 / 0.075646 (-0.063806) | 0.086225 / 0.419271 (-0.333047) | 0.060500 / 0.043533 (0.016967) | 0.435580 / 0.255139 (0.180441) | 0.456919 / 0.283200 (0.173719) | 0.035094 / 0.141683 (-0.106588) | 1.582805 / 1.452155 (0.130650) | 1.717838 / 1.492716 (0.225122) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283967 / 0.018006 (0.265960) | 0.517496 / 0.000490 (0.517006) | 0.014747 / 0.000200 (0.014547) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083835 / 0.014526 (0.069309) | 0.099157 / 0.176557 (-0.077400) | 0.173210 / 0.737135 (-0.563925) | 0.094212 / 0.296338 (-0.202127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535720 / 0.215209 (0.320511) | 5.273730 / 2.077655 (3.196075) | 2.422560 / 1.504120 (0.918440) | 
2.131416 / 1.541195 (0.590222) | 2.192000 / 1.468490 (0.723510) | 0.708469 / 4.584777 (-3.876308) | 4.758092 / 3.745712 (1.012380) | 3.940729 / 5.269862 (-1.329133) | 2.553093 / 4.565676 (-2.012583) | 0.084895 / 0.424275 (-0.339380) | 0.008730 / 0.007607 (0.001123) | 0.646975 / 0.226044 (0.420930) | 6.294811 / 2.268929 (4.025883) | 3.293964 / 55.444624 (-52.150660) | 2.568985 / 6.876477 (-4.307492) | 2.743786 / 2.142072 (0.601713) | 0.899733 / 4.805227 (-3.905494) | 0.193484 / 6.500664 (-6.307181) | 0.070012 / 0.075469 (-0.005457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502255 / 1.841788 (-0.339532) | 20.690234 / 8.074308 (12.615926) | 18.375791 / 10.191392 (8.184399) | 0.200135 / 0.680424 (-0.480289) | 0.029434 / 0.534201 (-0.504767) | 0.477267 / 0.579283 (-0.102016) | 0.566869 / 0.434364 (0.132505) | 0.543756 / 0.540337 (0.003418) | 0.700476 / 1.386936 (-0.686460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef17d9fd6c648bb41d43ba301c3de4d7b6f833d8 \"CML watermark\")\n" ]
2023-08-03T14:46:04
2023-08-03T14:56:59
2023-08-03T14:46:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6117", "html_url": "https://github.com/huggingface/datasets/pull/6117", "diff_url": "https://github.com/huggingface/datasets/pull/6117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6117.patch", "merged_at": "2023-08-03T14:46:18" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6117/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6116/comments
https://api.github.com/repos/huggingface/datasets/issues/6116/events
https://github.com/huggingface/datasets/issues/6116
1,835,098,484
I_kwDODunzps5tYWF0
6,116
[Docs] The "Process" how-to guide lacks description of `select_columns` function
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Great idea, feel free to open a PR! :)" ]
2023-08-03T13:45:10
2023-08-03T17:40:58
null
NONE
null
null
null
### Feature request

The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.

### Motivation

`select_columns` is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468 #5474). However, the guide has not been updated since the function was implemented in PR #5480. Mentioning it there would help future users discover the feature.

### Your contribution

I could submit a PR adding a brief description of the function to the guide.
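As an illustration of what such a guide addition could show, here is a minimal sketch; the dataset name (`rotten_tomatoes`) and the column names are placeholders for whatever example the guide ends up using.

```python
# Illustrative sketch only; "rotten_tomatoes" and the column names are
# placeholders for whatever example the guide chooses.
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
print(ds.column_names)  # e.g. ['text', 'label']

# select_columns keeps only the listed columns and returns a new Dataset;
# the original dataset is left unchanged.
text_only = ds.select_columns(["text"])
print(text_only.column_names)  # ['text']
```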
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6116/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6115/comments
https://api.github.com/repos/huggingface/datasets/issues/6115/events
https://github.com/huggingface/datasets/pull/6115
1,834,765,485
PR_kwDODunzps5XGChP
6,115
Release: 2.14.3
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007578 / 0.011353 (-0.003775) | 0.004271 / 0.011008 (-0.006738) | 0.086607 / 0.038508 (0.048098) | 0.063209 / 0.023109 (0.040099) | 0.351724 / 0.275898 (0.075826) | 0.399261 / 0.323480 (0.075781) | 0.004767 / 0.007986 (-0.003219) | 0.003487 / 0.004328 (-0.000842) | 0.071483 / 0.004250 (0.067233) | 0.051281 / 0.037052 (0.014229) | 0.387726 / 0.258489 (0.129237) | 0.408446 / 0.293841 (0.114605) | 0.041189 / 0.128546 (-0.087357) | 0.012446 / 0.075646 (-0.063200) | 0.331147 / 0.419271 (-0.088124) | 0.056721 / 0.043533 (0.013188) | 0.361306 / 0.255139 (0.106167) | 0.409651 / 0.283200 (0.126451) | 0.035485 / 0.141683 (-0.106198) | 1.461391 / 1.452155 (0.009236) | 1.554820 / 1.492716 (0.062104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237119 / 0.018006 (0.219113) | 0.518731 / 0.000490 (0.518241) | 0.004192 / 0.000200 (0.003992) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024912 / 0.037411 (-0.012499) | 0.089420 / 0.014526 (0.074894) | 0.091209 / 0.176557 (-0.085347) | 0.152580 / 0.737135 (-0.584555) | 0.089660 / 0.296338 (-0.206678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515223 / 0.215209 (0.300014) | 5.328359 / 2.077655 (3.250705) | 1.974326 / 1.504120 (0.470206) | 1.665216 / 1.541195 (0.124021) | 1.736040 / 1.468490 
(0.267550) | 0.734746 / 4.584777 (-3.850031) | 4.186613 / 3.745712 (0.440901) | 3.535760 / 5.269862 (-1.734102) | 2.333247 / 4.565676 (-2.232429) | 0.071845 / 0.424275 (-0.352430) | 0.006147 / 0.007607 (-0.001460) | 0.546649 / 0.226044 (0.320605) | 5.452281 / 2.268929 (3.183353) | 2.512984 / 55.444624 (-52.931640) | 2.104210 / 6.876477 (-4.772267) | 2.409251 / 2.142072 (0.267178) | 0.822797 / 4.805227 (-3.982430) | 0.166648 / 6.500664 (-6.334016) | 0.056350 / 0.075469 (-0.019119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397798 / 1.841788 (-0.443989) | 20.549399 / 8.074308 (12.475091) | 19.118168 / 10.191392 (8.926776) | 0.216361 / 0.680424 (-0.464063) | 0.027064 / 0.534201 (-0.507136) | 0.410762 / 0.579283 (-0.168521) | 0.559225 / 0.434364 (0.124861) | 0.468028 / 0.540337 (-0.072309) | 0.691520 / 1.386936 (-0.695416) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003879 / 0.011008 (-0.007130) | 0.058723 / 0.038508 (0.020215) | 0.057202 / 0.023109 (0.034092) | 0.344397 / 0.275898 (0.068499) | 0.360388 / 0.323480 (0.036908) | 0.005502 / 0.007986 (-0.002483) | 0.004101 / 0.004328 (-0.000227) | 0.058168 / 0.004250 (0.053917) | 0.059112 / 0.037052 (0.022060) | 0.362206 / 0.258489 (0.103717) | 0.386444 / 0.293841 (0.092603) | 0.036613 / 0.128546 (-0.091934) | 0.010482 / 0.075646 (-0.065165) | 0.065850 / 0.419271 (-0.353421) | 0.046528 / 0.043533 (0.002995) | 0.349568 / 0.255139 (0.094429) | 0.360181 / 0.283200 (0.076981) | 0.029030 / 0.141683 (-0.112653) | 1.314569 / 1.452155 (-0.137586) | 1.422393 / 1.492716 (-0.070324) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281554 / 0.018006 (0.263548) | 0.608018 / 0.000490 (0.607528) | 0.004568 / 0.000200 (0.004368) | 0.000182 / 0.000054 (0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023515 / 0.037411 (-0.013896) | 0.072994 / 0.014526 (0.058468) | 0.080688 / 0.176557 (-0.095868) | 0.125904 / 0.737135 (-0.611232) | 0.085457 / 0.296338 (-0.210882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471530 / 0.215209 (0.256321) | 4.796197 / 2.077655 (2.718542) | 2.189181 / 1.504120 (0.685061) | 1.886649 / 1.541195 (0.345454) | 1.871067 / 1.468490 (0.402577) | 0.661043 / 4.584777 (-3.923734) | 4.344027 / 3.745712 (0.598315) | 3.656967 / 5.269862 (-1.612895) | 2.286033 / 4.565676 (-2.279644) | 0.079146 / 0.424275 (-0.345129) | 0.006840 / 0.007607 (-0.000767) | 0.588750 / 0.226044 (0.362706) | 6.301286 / 2.268929 (4.032357) | 3.074702 / 55.444624 (-52.369923) | 2.398739 / 6.876477 (-4.477738) | 2.555057 / 2.142072 (0.412985) | 0.874189 / 4.805227 (-3.931038) | 0.191423 / 6.500664 (-6.309241) | 0.061227 / 0.075469 (-0.014242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472763 / 1.841788 (-0.369024) | 19.441304 / 8.074308 (11.366996) | 15.974276 / 10.191392 (5.782884) | 0.172503 / 0.680424 (-0.507921) | 0.027016 / 0.534201 (-0.507185) | 0.356085 / 0.579283 (-0.223198) | 0.473251 / 0.434364 (0.038887) | 0.427949 / 0.540337 (-0.112388) | 0.588924 / 1.386936 (-0.798013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0973da6e60ac7c1d24229ba6aa6881747b21858a \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003558 / 0.011008 (-0.007450) | 0.080576 / 0.038508 (0.042068) | 0.066542 / 0.023109 (0.043432) | 0.323997 / 0.275898 (0.048099) | 0.369828 / 0.323480 (0.046348) | 0.004896 / 0.007986 (-0.003090) | 0.002909 / 0.004328 (-0.001419) | 0.062553 / 0.004250 (0.058302) | 0.049795 / 0.037052 (0.012742) | 0.321369 / 0.258489 (0.062880) | 0.422860 / 0.293841 (0.129019) | 0.027394 / 0.128546 (-0.101152) | 0.007954 / 0.075646 (-0.067693) | 0.264122 / 0.419271 (-0.155149) | 0.044881 / 0.043533 (0.001349) | 0.316702 / 0.255139 (0.061563) | 0.374718 / 0.283200 (0.091518) | 0.021728 / 0.141683 (-0.119955) | 1.394456 / 1.452155 (-0.057699) | 1.474936 / 1.492716 (-0.017780) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191902 / 0.018006 (0.173896) | 0.430468 / 0.000490 (0.429979) | 0.003790 / 0.000200 (0.003590) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024974 / 0.037411 (-0.012438) | 0.073053 / 0.014526 (0.058527) | 0.083801 / 0.176557 (-0.092756) | 0.143457 / 0.737135 (-0.593678) | 0.085099 / 0.296338 (-0.211240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428411 / 0.215209 (0.213202) | 4.278077 / 2.077655 (2.200422) | 2.230039 / 1.504120 (0.725919) | 2.057191 / 1.541195 (0.515996) | 2.120109 / 1.468490 (0.651619) | 0.495242 / 4.584777 (-4.089535) | 3.031299 / 3.745712 (-0.714413) | 2.802685 / 5.269862 (-2.467176) | 1.839828 / 4.565676 (-2.725849) | 0.056875 / 0.424275 (-0.367401) | 0.006446 / 0.007607 (-0.001161) | 0.498958 / 0.226044 (0.272913) | 4.980440 / 2.268929 (2.711511) | 2.659659 / 55.444624 (-52.784965) | 2.315174 / 6.876477 (-4.561303) | 2.475920 / 2.142072 (0.333848) | 0.586946 / 4.805227 (-4.218282) | 0.124291 / 6.500664 (-6.376373) | 0.060701 / 0.075469 (-0.014768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245062 / 1.841788 (-0.596725) | 18.201444 / 8.074308 (10.127136) | 13.723271 / 10.191392 (3.531879) | 0.130203 / 0.680424 (-0.550221) | 0.016773 / 0.534201 (-0.517428) | 0.332909 / 0.579283 (-0.246374) | 0.347469 / 0.434364 (-0.086895) | 0.381364 / 0.540337 (-0.158973) | 0.541723 / 
1.386936 (-0.845213) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005934 / 0.011353 (-0.005419) | 0.003573 / 0.011008 (-0.007435) | 0.062195 / 0.038508 (0.023687) | 0.059026 / 0.023109 (0.035917) | 0.413993 / 0.275898 (0.138095) | 0.459552 / 0.323480 (0.136072) | 0.004610 / 0.007986 (-0.003376) | 0.002907 / 0.004328 (-0.001421) | 0.062983 / 0.004250 (0.058733) | 0.047797 / 0.037052 (0.010745) | 0.415461 / 0.258489 (0.156972) | 0.417424 / 0.293841 (0.123583) | 0.027098 / 0.128546 (-0.101449) | 0.008106 / 0.075646 (-0.067540) | 0.067600 / 0.419271 (-0.351672) | 0.041432 / 0.043533 (-0.002101) | 0.407861 / 0.255139 (0.152722) | 0.430774 / 0.283200 (0.147575) | 0.020738 / 0.141683 (-0.120945) | 1.435127 / 1.452155 (-0.017028) | 1.486961 / 1.492716 (-0.005755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231174 / 0.018006 (0.213168) | 0.421208 / 0.000490 (0.420718) | 0.005411 / 0.000200 (0.005211) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025362 / 0.037411 (-0.012049) | 0.078534 / 0.014526 (0.064008) | 0.085304 / 0.176557 (-0.091252) | 0.139048 / 0.737135 (-0.598087) | 0.087015 / 0.296338 (-0.209323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448506 / 0.215209 (0.233297) | 4.486694 / 2.077655 (2.409039) | 2.488022 / 1.504120 (0.983902) | 2.325321 / 1.541195 (0.784126) | 2.381311 / 1.468490 (0.912821) 
| 0.502102 / 4.584777 (-4.082675) | 3.018326 / 3.745712 (-0.727386) | 2.824922 / 5.269862 (-2.444940) | 1.857414 / 4.565676 (-2.708263) | 0.057514 / 0.424275 (-0.366761) | 0.006829 / 0.007607 (-0.000779) | 0.521939 / 0.226044 (0.295895) | 5.224393 / 2.268929 (2.955465) | 2.933132 / 55.444624 (-52.511492) | 2.661187 / 6.876477 (-4.215290) | 2.781950 / 2.142072 (0.639878) | 0.592927 / 4.805227 (-4.212300) | 0.126685 / 6.500664 (-6.373979) | 0.064188 / 0.075469 (-0.011281) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351107 / 1.841788 (-0.490681) | 18.344453 / 8.074308 (10.270145) | 13.838788 / 10.191392 (3.647396) | 0.157881 / 0.680424 (-0.522543) | 0.016636 / 0.534201 (-0.517565) | 0.331597 / 0.579283 (-0.247686) | 0.345573 / 0.434364 (-0.088791) | 0.397361 / 0.540337 (-0.142976) | 0.534289 / 1.386936 (-0.852647) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#582e722a76534904c0f3038d32ebb8db88ce9128 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006399 / 0.011353 (-0.004954) | 0.003872 / 0.011008 (-0.007136) | 0.083722 / 0.038508 (0.045214) | 0.068845 / 0.023109 (0.045736) | 0.329112 / 0.275898 (0.053214) | 0.343295 / 0.323480 (0.019815) | 0.005137 / 0.007986 (-0.002849) | 0.003303 / 0.004328 (-0.001026) | 0.064495 / 0.004250 (0.060245) | 0.051448 / 0.037052 (0.014395) | 0.322554 / 0.258489 (0.064065) | 0.361934 / 0.293841 (0.068093) | 0.030821 / 0.128546 (-0.097726) | 0.008482 / 0.075646 (-0.067164) | 0.288136 / 0.419271 (-0.131135) | 0.051935 / 0.043533 (0.008402) | 0.308283 / 0.255139 (0.053144) | 0.343421 / 0.283200 (0.060221) | 0.023639 / 0.141683 (-0.118044) | 1.485442 / 1.452155 (0.033288) | 1.533282 / 1.492716 (0.040565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218163 / 0.018006 (0.200157) | 0.464473 / 0.000490 (0.463983) | 0.003097 / 0.000200 (0.002897) | 
0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028650 / 0.037411 (-0.008761) | 0.083295 / 0.014526 (0.068769) | 0.096468 / 0.176557 (-0.080088) | 0.152086 / 0.737135 (-0.585050) | 0.102586 / 0.296338 (-0.193752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393038 / 0.215209 (0.177829) | 3.925514 / 2.077655 (1.847859) | 1.938419 / 1.504120 (0.434300) | 1.760265 / 1.541195 (0.219071) | 1.810024 / 1.468490 (0.341534) | 0.486232 / 4.584777 (-4.098545) | 3.618747 / 3.745712 (-0.126965) | 3.206950 / 5.269862 (-2.062912) | 1.999240 / 4.565676 (-2.566436) | 0.056986 / 0.424275 (-0.367289) | 0.007193 / 0.007607 (-0.000415) | 0.469313 / 0.226044 (0.243269) | 4.688670 / 2.268929 (2.419741) | 2.400332 / 55.444624 (-53.044292) | 2.074197 / 6.876477 (-4.802279) | 2.290823 / 2.142072 (0.148751) | 0.582339 / 4.805227 (-4.222888) | 0.134127 / 6.500664 (-6.366537) | 0.061061 / 0.075469 (-0.014408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272782 / 1.841788 (-0.569006) | 19.463375 / 8.074308 (11.389067) | 14.306819 / 10.191392 (4.115427) | 0.164608 / 0.680424 (-0.515816) | 0.018626 / 0.534201 (-0.515575) | 0.395225 / 0.579283 (-0.184058) | 0.408984 / 0.434364 (-0.025380) | 0.463364 / 0.540337 (-0.076974) | 0.630425 / 1.386936 (-0.756511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006465 / 0.011353 (-0.004888) | 0.003975 / 0.011008 (-0.007033) | 0.063643 / 0.038508 (0.025134) | 0.075214 / 0.023109 (0.052105) | 0.361734 / 0.275898 (0.085836) | 0.396664 / 0.323480 (0.073184) | 0.005251 / 0.007986 (-0.002735) | 0.003249 / 0.004328 (-0.001080) | 0.063841 / 0.004250 (0.059591) | 0.054504 / 0.037052 (0.017451) | 0.374791 / 0.258489 (0.116302) | 0.399205 / 0.293841 (0.105364) | 0.031355 / 0.128546 (-0.097192) | 0.008483 / 0.075646 (-0.067163) | 0.070234 / 0.419271 (-0.349037) | 0.048336 / 0.043533 (0.004803) | 0.373484 / 0.255139 (0.118345) | 0.382174 / 0.283200 (0.098974) | 0.022560 / 0.141683 (-0.119123) | 1.449799 / 1.452155 (-0.002355) | 1.525255 / 1.492716 (0.032539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228350 / 0.018006 (0.210343) | 0.444344 / 0.000490 (0.443855) | 0.003699 / 0.000200 (0.003499) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030681 / 0.037411 (-0.006731) | 0.087340 / 0.014526 (0.072814) | 0.098636 / 0.176557 (-0.077920) | 0.151665 / 0.737135 (-0.585471) | 0.100840 / 0.296338 (-0.195498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417857 / 0.215209 (0.202648) | 4.168407 / 2.077655 (2.090752) | 2.201758 / 1.504120 (0.697638) | 1.997834 / 1.541195 (0.456639) | 2.127693 / 1.468490 (0.659202) | 0.486429 / 4.584777 (-4.098348) | 3.676335 / 3.745712 (-0.069378) | 3.226268 / 5.269862 (-2.043594) | 2.027255 / 4.565676 (-2.538422) | 0.056759 / 0.424275 (-0.367516) | 0.007628 / 0.007607 (0.000021) | 0.500482 / 0.226044 (0.274438) | 4.996236 / 2.268929 (2.727307) | 2.628884 / 55.444624 (-52.815740) | 2.347611 / 6.876477 (-4.528866) | 2.551328 / 2.142072 (0.409255) | 0.582449 / 4.805227 (-4.222778) | 0.132844 / 6.500664 (-6.367821) | 0.061791 / 0.075469 (-0.013678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373718 / 1.841788 (-0.468070) | 19.921217 / 8.074308 (11.846909) | 14.209642 / 10.191392 (4.018250) | 0.185334 / 0.680424 (-0.495090) | 0.018228 / 0.534201 (-0.515973) | 0.395549 / 0.579283 (-0.183734) | 0.404446 / 0.434364 (-0.029918) | 0.472456 / 0.540337 (-0.067882) | 0.622739 / 1.386936 (-0.764197) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006007 / 0.011353 (-0.005346) | 0.003588 / 0.011008 (-0.007420) | 0.080334 / 0.038508 (0.041826) | 0.058932 / 0.023109 (0.035823) | 0.404613 / 0.275898 (0.128715) | 0.438377 / 0.323480 (0.114897) | 0.003468 / 0.007986 (-0.004518) | 0.003702 / 0.004328 (-0.000627) | 0.062936 / 0.004250 (0.058686) | 0.047987 / 0.037052 (0.010934) | 0.411409 / 0.258489 (0.152920) | 0.450244 / 0.293841 (0.156403) | 0.027007 / 0.128546 (-0.101539) | 0.007932 / 0.075646 (-0.067714) | 0.261390 / 0.419271 (-0.157882) | 0.044992 / 0.043533 (0.001459) | 0.409730 / 0.255139 (0.154591) | 0.433331 / 0.283200 (0.150131) | 0.020446 / 0.141683 (-0.121237) | 1.425418 / 1.452155 (-0.026736) | 1.479242 / 1.492716 (-0.013475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187375 / 0.018006 (0.169368) | 0.428532 / 0.000490 (0.428043) | 0.003406 / 0.000200 (0.003206) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024390 / 0.037411 (-0.013022) | 0.072571 / 0.014526 (0.058045) | 0.083513 / 0.176557 (-0.093044) | 0.144395 / 0.737135 (-0.592741) | 0.084813 / 0.296338 (-0.211526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409176 / 0.215209 
(0.193967) | 4.078082 / 2.077655 (2.000428) | 1.913596 / 1.504120 (0.409476) | 1.718470 / 1.541195 (0.177275) | 1.753106 / 1.468490 (0.284616) | 0.494167 / 4.584777 (-4.090610) | 3.029531 / 3.745712 (-0.716181) | 2.807331 / 5.269862 (-2.462531) | 1.839471 / 4.565676 (-2.726206) | 0.057169 / 0.424275 (-0.367106) | 0.006433 / 0.007607 (-0.001175) | 0.482666 / 0.226044 (0.256621) | 4.817601 / 2.268929 (2.548673) | 2.449967 / 55.444624 (-52.994658) | 2.113891 / 6.876477 (-4.762586) | 2.399293 / 2.142072 (0.257221) | 0.578903 / 4.805227 (-4.226324) | 0.124306 / 6.500664 (-6.376358) | 0.061572 / 0.075469 (-0.013897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254692 / 1.841788 (-0.587096) | 18.414049 / 8.074308 (10.339741) | 13.992059 / 10.191392 (3.800667) | 0.146671 / 0.680424 (-0.533753) | 0.016925 / 0.534201 (-0.517275) | 0.333124 / 0.579283 (-0.246159) | 0.348007 / 0.434364 (-0.086357) | 0.378519 / 0.540337 (-0.161819) | 0.532540 / 1.386936 (-0.854396) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003614 / 0.011008 (-0.007394) | 0.061707 / 0.038508 (0.023199) | 0.062874 / 0.023109 (0.039765) | 0.364760 / 0.275898 (0.088862) | 0.398136 / 0.323480 (0.074656) | 0.005598 / 0.007986 (-0.002388) | 0.002836 / 0.004328 (-0.001493) | 0.061880 / 0.004250 (0.057630) | 0.048165 / 0.037052 (0.011113) | 0.372656 / 0.258489 (0.114167) | 0.403967 / 0.293841 (0.110126) | 0.027046 / 0.128546 (-0.101501) | 0.008091 / 0.075646 (-0.067555) | 0.066783 / 0.419271 (-0.352489) | 0.041186 / 0.043533 (-0.002347) | 0.376009 / 0.255139 (0.120870) | 0.391769 / 0.283200 (0.108569) | 0.021020 / 0.141683 (-0.120663) | 1.514593 / 1.452155 (0.062438) | 1.548506 / 1.492716 (0.055790) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237610 / 0.018006 (0.219604) | 0.434274 / 0.000490 (0.433784) | 0.009720 / 0.000200 (0.009520) | 0.000098 / 0.000054 
(0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025605 / 0.037411 (-0.011807) | 0.078971 / 0.014526 (0.064445) | 0.088154 / 0.176557 (-0.088403) | 0.139112 / 0.737135 (-0.598023) | 0.088890 / 0.296338 (-0.207449) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420027 / 0.215209 (0.204818) | 4.189493 / 2.077655 (2.111838) | 2.143907 / 1.504120 (0.639787) | 1.967032 / 1.541195 (0.425837) | 2.011845 / 1.468490 (0.543355) | 0.496692 / 4.584777 (-4.088085) | 3.025456 / 3.745712 (-0.720256) | 2.828436 / 5.269862 (-2.441426) | 1.860673 / 4.565676 (-2.705003) | 0.057199 / 0.424275 (-0.367076) | 0.006770 / 0.007607 (-0.000838) | 0.491281 / 0.226044 (0.265236) | 4.918065 / 2.268929 (2.649136) | 2.593172 / 55.444624 (-52.851452) | 2.250750 / 6.876477 (-4.625727) | 2.406235 / 2.142072 (0.264162) | 0.588648 / 4.805227 (-4.216579) | 0.125635 / 6.500664 (-6.375029) | 0.061697 / 0.075469 (-0.013773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374065 / 1.841788 (-0.467722) | 18.439315 / 8.074308 (10.365007) | 14.031660 / 10.191392 (3.840268) | 0.153665 / 0.680424 (-0.526759) | 0.016980 / 0.534201 (-0.517221) | 0.331799 / 0.579283 (-0.247484) | 0.343201 / 0.434364 (-0.091163) | 0.392445 / 0.540337 (-0.147892) | 0.530387 / 1.386936 (-0.856549) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008189 / 0.011353 (-0.003164) | 0.004598 / 0.011008 (-0.006410) | 0.102199 / 0.038508 (0.063691) | 0.077961 / 0.023109 (0.054852) | 0.364936 / 0.275898 (0.089038) | 0.402606 / 0.323480 (0.079126) | 0.005522 / 0.007986 (-0.002464) | 0.004007 / 0.004328 (-0.000322) | 0.071560 / 0.004250 (0.067310) | 0.055818 / 0.037052 (0.018765) | 0.378394 / 0.258489 (0.119905) | 0.428990 / 0.293841 (0.135149) | 0.043142 / 0.128546 (-0.085404) | 0.013254 / 0.075646 (-0.062392) | 0.331102 / 0.419271 (-0.088170) | 0.061407 / 0.043533 (0.017875) | 0.387397 / 0.255139 (0.132258) | 0.416062 / 0.283200 (0.132862) | 0.036330 / 0.141683 (-0.105353) | 1.735352 / 1.452155 (0.283198) | 1.773329 / 1.492716 (0.280613) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188587 / 0.018006 (0.170581) | 0.519506 / 0.000490 (0.519016) | 0.004702 / 0.000200 (0.004502) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027152 / 0.037411 (-0.010260) | 0.094296 / 0.014526 (0.079770) | 0.098155 / 0.176557 (-0.078402) | 0.162541 / 0.737135 (-0.574595) | 0.112092 / 0.296338 (-0.184246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.537555 / 0.215209 (0.322346) | 5.486821 / 2.077655 (3.409166) | 2.377127 / 1.504120 (0.873008) | 2.073205 / 1.541195 (0.532011) | 2.075130 / 1.468490 (0.606640) | 0.783779 / 4.584777 (-3.800998) | 5.029524 / 3.745712 (1.283812) | 4.382724 / 5.269862 (-0.887138) | 2.836180 / 4.565676 (-1.729496) | 0.108840 / 0.424275 (-0.315435) | 0.008123 / 0.007607 (0.000516) | 0.673460 / 0.226044 (0.447416) | 6.674030 / 2.268929 (4.405102) | 3.208922 / 55.444624 (-52.235702) | 2.464908 / 6.876477 (-4.411568) | 2.661929 / 2.142072 (0.519856) | 0.962529 / 4.805227 (-3.842698) | 0.197974 / 6.500664 (-6.302690) | 0.066656 / 0.075469 (-0.008813) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430373 / 1.841788 (-0.411415) | 21.180540 / 8.074308 (13.106232) | 19.027491 / 10.191392 (8.836099) | 0.217520 / 0.680424 (-0.462904) | 0.028038 / 0.534201 (-0.506163) | 0.435266 / 0.579283 (-0.144017) | 0.529510 / 0.434364 (0.095147) | 
0.511011 / 0.540337 (-0.029327) | 0.728940 / 1.386936 (-0.657996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007883 / 0.011353 (-0.003470) | 0.004448 / 0.011008 (-0.006560) | 0.071350 / 0.038508 (0.032842) | 0.075269 / 0.023109 (0.052160) | 0.396705 / 0.275898 (0.120807) | 0.457809 / 0.323480 (0.134329) | 0.005193 / 0.007986 (-0.002792) | 0.003695 / 0.004328 (-0.000633) | 0.078087 / 0.004250 (0.073836) | 0.054276 / 0.037052 (0.017224) | 0.412184 / 0.258489 (0.153695) | 0.452400 / 0.293841 (0.158559) | 0.049762 / 0.128546 (-0.078784) | 0.013206 / 0.075646 (-0.062440) | 0.085985 / 0.419271 (-0.333287) | 0.058837 / 0.043533 (0.015304) | 0.432481 / 0.255139 (0.177342) | 0.433260 / 0.283200 (0.150060) | 0.031190 / 0.141683 (-0.110493) | 1.582707 / 1.452155 (0.130552) | 1.664457 / 1.492716 (0.171741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223639 / 0.018006 (0.205633) | 0.524388 / 0.000490 (0.523899) | 0.005489 / 0.000200 (0.005289) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030182 / 0.037411 (-0.007230) | 0.089309 / 0.014526 (0.074783) | 0.103306 / 0.176557 (-0.073250) | 0.162624 / 0.737135 (-0.574511) | 0.108957 / 0.296338 (-0.187381) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577423 / 0.215209 (0.362214) | 5.900154 / 2.077655 (3.822500) | 2.687369 / 1.504120 (1.183249) | 2.513061 / 1.541195 
(0.971866) | 2.506453 / 1.468490 (1.037963) | 0.830838 / 4.584777 (-3.753939) | 5.032195 / 3.745712 (1.286483) | 4.396827 / 5.269862 (-0.873035) | 2.884230 / 4.565676 (-1.681447) | 0.102239 / 0.424275 (-0.322036) | 0.008178 / 0.007607 (0.000571) | 0.710027 / 0.226044 (0.483983) | 7.149626 / 2.268929 (4.880698) | 3.403605 / 55.444624 (-52.041019) | 2.661970 / 6.876477 (-4.214506) | 2.760227 / 2.142072 (0.618154) | 1.043981 / 4.805227 (-3.761246) | 0.195028 / 6.500664 (-6.305636) | 0.065211 / 0.075469 (-0.010258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581265 / 1.841788 (-0.260522) | 21.640230 / 8.074308 (13.565922) | 19.031860 / 10.191392 (8.840468) | 0.196903 / 0.680424 (-0.483520) | 0.027061 / 0.534201 (-0.507140) | 0.444995 / 0.579283 (-0.134288) | 0.528195 / 0.434364 (0.093831) | 0.521540 / 0.540337 (-0.018797) | 0.730204 / 1.386936 (-0.656732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n" ]
2023-08-03T10:18:32
2023-08-03T15:08:02
2023-08-03T10:24:57
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6115", "html_url": "https://github.com/huggingface/datasets/pull/6115", "diff_url": "https://github.com/huggingface/datasets/pull/6115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6115.patch", "merged_at": "2023-08-03T10:24:57" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6115/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6114/comments
https://api.github.com/repos/huggingface/datasets/issues/6114/events
https://github.com/huggingface/datasets/issues/6114
1,834,015,584
I_kwDODunzps5tUNtg
6,114
Cache not being used when loading commonvoice 8.0.0
{ "login": "clabornd", "id": 31082141, "node_id": "MDQ6VXNlcjMxMDgyMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clabornd", "html_url": "https://github.com/clabornd", "followers_url": "https://api.github.com/users/clabornd/followers", "following_url": "https://api.github.com/users/clabornd/following{/other_user}", "gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}", "starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clabornd/subscriptions", "organizations_url": "https://api.github.com/users/clabornd/orgs", "repos_url": "https://api.github.com/users/clabornd/repos", "events_url": "https://api.github.com/users/clabornd/events{/privacy}", "received_events_url": "https://api.github.com/users/clabornd/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T23:18:11
2023-08-04T17:33:11
null
NONE
null
null
null
### Describe the bug I have commonvoice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files, etc., and was used as the cached version the last time I touched the EC2 instance I'm working on. Now, with the same command that downloaded it initially: ``` dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>") ``` it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32` ### Steps to reproduce the bug Steps to reproduce the behavior: 1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` 2. dataset is updated by maintainers 3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` ### Expected behavior I expect that it uses the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. I'm not sure what's happening in step 2, but if, say, it's an issue with the dataset referenced by "mozilla-foundation/common_voice_8_0" being modified by the maintainers, how would I force datasets to point to the original version I downloaded? EDIT: It was indeed the case that the maintainers had updated the dataset (v 8.0.0). However, I still can't load the dataset from disk instead of redownloading it, for example with: ``` load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en") > ... > File [~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938](.../ python3.10/site-packages/datasets/table.py:1938), in cast_array_to_feature(array, feature, allow_number_to_str) 1937 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ... 1794 e = e.__context__ -> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info datasets==2.7.0 python==3.10.8 OS: AWS Linux
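For reference, one possible way to keep using the snapshot that was originally downloaded is to pin the dataset repository to a fixed revision when loading. This is only a sketch: the commit hash below is a placeholder that would have to be looked up on the dataset's "Files and versions" page on the Hub, and it assumes the older revision reproduces the same cache directory hash.

```python
from datasets import load_dataset

# Placeholder: commit hash of the dataset repo at the time of the original download.
ORIGINAL_REVISION = "<commit-sha-of-the-earlier-8.0.0-revision>"

dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision=ORIGINAL_REVISION,   # pin the loading script to that commit
    use_auth_token="<mytoken>",
)
```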
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6114/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6113/comments
https://api.github.com/repos/huggingface/datasets/issues/6113/events
https://github.com/huggingface/datasets/issues/6113
1,833,854,030
I_kwDODunzps5tTmRO
6,113
load_dataset() fails with streamlit caching inside docker
{ "login": "fierval", "id": 987574, "node_id": "MDQ6VXNlcjk4NzU3NA==", "avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fierval", "html_url": "https://github.com/fierval", "followers_url": "https://api.github.com/users/fierval/followers", "following_url": "https://api.github.com/users/fierval/following{/other_user}", "gists_url": "https://api.github.com/users/fierval/gists{/gist_id}", "starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fierval/subscriptions", "organizations_url": "https://api.github.com/users/fierval/orgs", "repos_url": "https://api.github.com/users/fierval/repos", "events_url": "https://api.github.com/users/fierval/events{/privacy}", "received_events_url": "https://api.github.com/users/fierval/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T20:20:26
2023-08-02T20:20:26
null
NONE
null
null
null
### Describe the bug When calling `load_dataset` in a streamlit application running within a docker container, get a failure with the error message: EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files Traceback: File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__) File "/home/user/app/app.py", line 62, in <module> dashboard() File "/home/user/app/app.py", line 47, in dashboard feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper return cached_func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__ return self._get_or_create_cached_value(args, kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value return self._handle_cache_miss(cache, value_key, func_args, func_kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss computed_value = self._info.func(*func_args, **func_kwargs) File "/home/user/app/hf_interface.py", line 16, in load_data hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module else get_data_patterns(base_path, download_config=self.download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None ### Steps to reproduce the bug ```python @st.cache_resource def load_data(repo_id: str, hf_token=None): """Load data from HuggingFace Hub """ hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"]) return hf_dataset ``` ### Expected behavior Expect to load. Note: works fine with datasets==2.13.1 ### Environment info datasets==2.14.2, Ubuntu bionic-based Docker container.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6113/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6112/comments
https://api.github.com/repos/huggingface/datasets/issues/6112/events
https://github.com/huggingface/datasets/issues/6112
1,833,693,299
I_kwDODunzps5tS_Bz
6,112
yaml error using push_to_hub with generated README.md
{ "login": "kevintee", "id": 1643887, "node_id": "MDQ6VXNlcjE2NDM4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevintee", "html_url": "https://github.com/kevintee", "followers_url": "https://api.github.com/users/kevintee/followers", "following_url": "https://api.github.com/users/kevintee/following{/other_user}", "gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevintee/subscriptions", "organizations_url": "https://api.github.com/users/kevintee/orgs", "repos_url": "https://api.github.com/users/kevintee/repos", "events_url": "https://api.github.com/users/kevintee/events{/privacy}", "received_events_url": "https://api.github.com/users/kevintee/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T18:21:21
2023-08-02T18:21:21
null
NONE
null
null
null
### Describe the bug When I construct a dataset with the following features: ``` features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) ``` and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error: ``` Traceback (most recent call last): File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module> build_dataset() File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset push_to_hub(dataset, "multitask_document_classification_dataset") File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub dataset.push_to_hub(f"looppayments/{dataset_name}", private=True) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub HfApi(endpoint=config.HF_ENDPOINT).upload_file( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file commit_info = self.create_commit( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit hf_raise_for_status(commit_resp, endpoint_name="commit") File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e) Bad request for commit endpoint: Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9) 7 | - 3 8 | - 224 9 | - 224 10 | dtype: float64 --------------^ 11 | - name: input_ids 12 | sequence: int64 ``` My guess is that the auto-generated yaml is unable to be parsed for some reason. 
### Steps to reproduce the bug The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet: ``` from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value from PIL import Image from transformers import AutoProcessor features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) def preprocess_dataset(rows): # Get images images = [ Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"] ] encoding = processor( images, rows["tokens"], boxes=rows["bbox"], truncation=True, padding="max_length", ) encoding["tokens"] = rows["tokens"] return encoding dataset = dataset.map( preprocess_dataset, batched=True, batch_size=5, features=features, ) ``` ### Expected behavior Using datasets==2.11.0, I'm able to successfully push_to_hub, no issues, but with datasets==2.14.2, I run into the above error. ### Environment info - `datasets` version: 2.14.2 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
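The rejected tag suggests that the auto-generated metadata serializes the `Array3D`/`Array2D` shape as a Python tuple, which safe YAML parsers refuse. A minimal illustration of that failure mode (this is not the actual README, just a hand-written snippet assuming the shape is emitted with a `!!python/tuple` tag):

```python
import yaml

# Hand-written snippet mimicking the suspected README metadata; safe loaders
# cannot construct Python-specific tags such as !!python/tuple.
snippet = """
shape: !!python/tuple
- 3
- 224
- 224
"""

try:
    yaml.safe_load(snippet)
except yaml.YAMLError as err:
    print("Safe YAML parsing fails on the python/tuple tag:", err)
```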
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6112/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6111/comments
https://api.github.com/repos/huggingface/datasets/issues/6111/events
https://github.com/huggingface/datasets/issues/6111
1,832,781,654
I_kwDODunzps5tPgdW
6,111
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
{ "login": "2catycm", "id": 41530341, "node_id": "MDQ6VXNlcjQxNTMwMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2catycm", "html_url": "https://github.com/2catycm", "followers_url": "https://api.github.com/users/2catycm/followers", "following_url": "https://api.github.com/users/2catycm/following{/other_user}", "gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}", "starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/2catycm/subscriptions", "organizations_url": "https://api.github.com/users/2catycm/orgs", "repos_url": "https://api.github.com/users/2catycm/repos", "events_url": "https://api.github.com/users/2catycm/events{/privacy}", "received_events_url": "https://api.github.com/users/2catycm/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T09:17:29
2023-08-02T09:17:29
null
NONE
null
null
null
### Describe the bug For researchers in some countries or regions, it is often the case that downloading with `load_dataset` is blocked due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, and [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object. However, even when one finally has the local files on disk, loading them into dataset objects is still buggy. ### Steps to reproduce the bug Steps to reproduce the bug: 1. Find the CIFAR dataset on Hugging Face: https://huggingface.co/datasets/cifar100/tree/main 2. Click the ":" button to show the "Clone repository" option, and then follow the prompts in the box: ```bash cd my_directory_absolute git lfs install git clone https://huggingface.co/datasets/cifar100 ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK. ``` 3. Write a Python file to try to load the dataset: ```python from datasets import load_dataset, load_from_disk dataset = load_from_disk("my_directory_absolute/cifar100") ``` Notice that according to issue #3700, it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead. 4. Then you will see the error reported: ```log --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[5], line 9 1 from datasets import load_dataset, load_from_disk ----> 9 dataset = load_from_disk("my_directory_absolute/cifar100") File [~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232), in load_from_disk(dataset_path, fs, keep_in_memory, storage_options) 2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) 2231 else: -> 2232 raise FileNotFoundError( 2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." 2234 ) FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory. ``` ### Expected behavior The dataset should load successfully. ### Environment info ```bash datasets-cli env ``` -> results: ```txt Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.14.2 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 ```
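A possible sketch of a workaround, under the assumption that the cloned repository contains the dataset's loading script and/or data files: `load_from_disk` only understands directories written by `save_to_disk`, so a plain `git clone` of a Hub dataset repo would instead be passed to `load_dataset`. Note that a script-based dataset such as `cifar100` may still try to fetch its source data from external URLs, so this does not remove the need for network access on the first run.

```python
from datasets import load_dataset, load_from_disk

# A git clone of a Hub dataset repo is not a `save_to_disk` directory,
# so point `load_dataset` at it to run the local loading script instead.
dataset = load_dataset("my_directory_absolute/cifar100")

# Once materialized, it can be exported and later reloaded fully offline.
dataset.save_to_disk("my_directory_absolute/cifar100_arrow")
dataset = load_from_disk("my_directory_absolute/cifar100_arrow")
```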
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6111/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6110/comments
https://api.github.com/repos/huggingface/datasets/issues/6110/events
https://github.com/huggingface/datasets/issues/6110
1,831,110,633
I_kwDODunzps5tJIfp
6,110
[BUG] Dataset initialized from in-memory data does not create cache.
{ "login": "MattYoon", "id": 57797966, "node_id": "MDQ6VXNlcjU3Nzk3OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MattYoon", "html_url": "https://github.com/MattYoon", "followers_url": "https://api.github.com/users/MattYoon/followers", "following_url": "https://api.github.com/users/MattYoon/following{/other_user}", "gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions", "organizations_url": "https://api.github.com/users/MattYoon/orgs", "repos_url": "https://api.github.com/users/MattYoon/repos", "events_url": "https://api.github.com/users/MattYoon/events{/privacy}", "received_events_url": "https://api.github.com/users/MattYoon/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-01T11:58:58
2023-08-01T12:04:57
null
NONE
null
null
null
### Describe the bug `Dataset` initialized from in-memory data (a dictionary in my case, haven't tested with other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`. ### Steps to reproduce the bug ```python # the code below was run a second time so the map result can be loaded from the cache if it exists from datasets import load_dataset, Dataset dataset = load_dataset("tatsu-lab/alpaca")['train'] dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map print(len(dataset.cache_files)) # 1 # copy the exact same data but initialize from a dictionary memory_dataset = Dataset.from_dict({ 'instruction': dataset['instruction'], 'input': dataset['input'], 'output': dataset['output'], 'text': dataset['text']}) memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map print(len(memory_dataset.cache_files)) # Map: 100%|██████████| 52002/52002 # 0 ``` ### Expected behavior The `map` function should create a cache regardless of how the `Dataset` was created. ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
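One hedged workaround, assuming the goal is simply to reuse the `map` result across runs: `map` accepts an explicit `cache_file_name`, which writes the processed table to disk even when the source dataset lives in memory. A minimal sketch (the file path is a placeholder):

```python
from datasets import Dataset

memory_dataset = Dataset.from_dict({"input": ["a", "b"], "output": ["c", "d"]})

# Writing the map result to an explicit cache file makes it file-backed and
# reusable, even though the dataset was built from an in-memory dictionary.
memory_dataset = memory_dataset.map(
    lambda x: {"input": x["input"] + "hi"},
    cache_file_name="/tmp/in_memory_map_cache.arrow",  # placeholder path
)
print(memory_dataset.cache_files)
```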
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6110/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6109/comments
https://api.github.com/repos/huggingface/datasets/issues/6109/events
https://github.com/huggingface/datasets/issues/6109
1,830,753,793
I_kwDODunzps5tHxYB
6,109
Problems in downloading Amazon reviews from HF
{ "login": "610v4nn1", "id": 52964960, "node_id": "MDQ6VXNlcjUyOTY0OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/610v4nn1", "html_url": "https://github.com/610v4nn1", "followers_url": "https://api.github.com/users/610v4nn1/followers", "following_url": "https://api.github.com/users/610v4nn1/following{/other_user}", "gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}", "starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions", "organizations_url": "https://api.github.com/users/610v4nn1/orgs", "repos_url": "https://api.github.com/users/610v4nn1/repos", "events_url": "https://api.github.com/users/610v4nn1/events{/privacy}", "received_events_url": "https://api.github.com/users/610v4nn1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\nSee: https://huggingface.co/datasets/amazon_reviews_multi/discussions/4#64c3898db63057f1fd3ce1a0 " ]
2023-08-01T08:38:29
2023-08-02T07:12:07
2023-08-02T07:12:07
NONE
null
null
null
### Describe the bug I have a script downloading `amazon_reviews_multi`. When the download starts, I get ``` Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.43MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 928kB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.81MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s] Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s] ``` the file is clearly too small to contain the requested dataset, in fact it contains en error message: ``` <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error> ``` obviously the script fails: ``` > raise DatasetGenerationError("An error occurred while generating the dataset") from e E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE") ### Expected behavior I would expect the dataset to be downloaded and processed ### Environment info * The problem is present with both datasets 2.12.0 and 2.14.2 * python version 3.10.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6109/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6108/comments
https://api.github.com/repos/huggingface/datasets/issues/6108/events
https://github.com/huggingface/datasets/issues/6108
1,830,347,187
I_kwDODunzps5tGOGz
6,108
Loading local datasets got strangely stuck
{ "login": "LoveCatc", "id": 48412571, "node_id": "MDQ6VXNlcjQ4NDEyNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LoveCatc", "html_url": "https://github.com/LoveCatc", "followers_url": "https://api.github.com/users/LoveCatc/followers", "following_url": "https://api.github.com/users/LoveCatc/following{/other_user}", "gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}", "starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions", "organizations_url": "https://api.github.com/users/LoveCatc/orgs", "repos_url": "https://api.github.com/users/LoveCatc/repos", "events_url": "https://api.github.com/users/LoveCatc/events{/privacy}", "received_events_url": "https://api.github.com/users/LoveCatc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of proceeding too slow.", "I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G." ]
2023-08-01T02:28:06
2023-08-03T12:03:30
null
NONE
null
null
null
### Describe the bug I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a json structure only containing one key `text` (yeah it is a dataset for NLP model). The code snippet is as: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train'] ``` However, I found that the loading process can get stuck -- the progress bar `Generating train split` no more proceed. When I was trying to find the cause and solution, I found a really strange behavior. If I load the dataset in this way: ```python dlist = list() for _ in LIST_OF_FILE_PATHS: dlist.append(load_dataset("json", data_files=_)['train']) ds = concatenate_datasets(dlist) ``` I can actually successfully load all the files despite its slow speed. But if I load them in batch like above, things go wrong. I did try to use Control-C to trace the stuck point but the program cannot be terminated in this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to hang it up then kill it. If I use more than 2 cpus, a Control-C would simply cause the following error: ```bash ^C Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker task = get() File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get res = self._reader.recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt Generating train split: 92431 examples [01:23, 1104.25 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module> a = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split 
for job_id, done, content in iflatmap_unordered( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get raise TimeoutError multiprocess.context.TimeoutError ``` I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (or they cannot be loaded singly by `load_dataset`) though some of the json may contain too long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system's resource. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB ram. Thanks for your efforts and patience! Any suggestion or help would be appreciated. ### Steps to reproduce the bug 1. use load_dataset() with `data_files = LIST_OF_FILES` ### Expected behavior All the files should be smoothly loaded. ### Environment info - Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each json structure only contains one key: `text`. Format checked. - `datasets` version: 2.14.2 - Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609 - Pandas version: 1.5.2
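A generalization of the per-file workaround described in the report, kept as a sketch: load the `.jsonl` files in small batches (leaving `num_proc` unset to avoid the hang) and concatenate the pieces, so no single call has to process the whole file list at once. `LIST_OF_FILE_PATHS` is the same list as in the report.

```python
from datasets import load_dataset, concatenate_datasets

def load_jsonl_in_chunks(file_paths, chunk_size=50):
    """Load many .jsonl files in small batches and concatenate the results."""
    parts = []
    for start in range(0, len(file_paths), chunk_size):
        chunk = file_paths[start : start + chunk_size]
        parts.append(load_dataset("json", data_files=chunk)["train"])
    return concatenate_datasets(parts)

# ds = load_jsonl_in_chunks(LIST_OF_FILE_PATHS)
```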
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6108/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6107/comments
https://api.github.com/repos/huggingface/datasets/issues/6107/events
https://github.com/huggingface/datasets/pull/6107
1,829,625,320
PR_kwDODunzps5W0rLR
6,107
Fix deprecation of use_auth_token in file_utils
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007678 / 0.011353 (-0.003675) | 0.004233 / 0.011008 (-0.006776) | 0.095934 / 0.038508 (0.057426) | 0.064201 / 0.023109 (0.041092) | 0.345765 / 0.275898 (0.069867) | 0.383089 / 0.323480 (0.059609) | 0.004084 / 0.007986 (-0.003902) | 0.003311 / 0.004328 (-0.001017) | 0.072367 / 0.004250 (0.068117) | 0.048252 / 0.037052 (0.011200) | 0.338340 / 0.258489 (0.079851) | 0.391627 / 0.293841 (0.097786) | 0.045203 / 0.128546 (-0.083343) | 0.013494 / 0.075646 (-0.062153) | 0.314097 / 0.419271 (-0.105174) | 0.058183 / 0.043533 (0.014650) | 0.353946 / 0.255139 (0.098807) | 0.385181 / 0.283200 (0.101981) | 0.033111 / 0.141683 (-0.108572) | 1.578489 / 1.452155 (0.126335) | 1.631660 / 1.492716 (0.138944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202592 / 0.018006 (0.184586) | 0.506450 / 0.000490 (0.505961) | 0.004630 / 0.000200 (0.004430) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024761 / 0.037411 (-0.012651) | 0.086295 / 0.014526 (0.071769) | 0.094063 / 0.176557 (-0.082494) | 0.154189 / 0.737135 (-0.582947) | 0.096273 / 0.296338 (-0.200065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581731 / 0.215209 (0.366522) | 5.552020 / 2.077655 (3.474365) | 2.430800 / 1.504120 (0.926680) | 2.130864 / 1.541195 (0.589669) | 2.092802 / 1.468490 
(0.624312) | 0.833956 / 4.584777 (-3.750821) | 4.840859 / 3.745712 (1.095147) | 4.267812 / 5.269862 (-1.002050) | 2.663245 / 4.565676 (-1.902432) | 0.093195 / 0.424275 (-0.331080) | 0.007942 / 0.007607 (0.000335) | 0.651457 / 0.226044 (0.425413) | 6.782986 / 2.268929 (4.514058) | 3.103307 / 55.444624 (-52.341318) | 2.373933 / 6.876477 (-4.502544) | 2.571613 / 2.142072 (0.429540) | 0.981389 / 4.805227 (-3.823839) | 0.199019 / 6.500664 (-6.301645) | 0.065828 / 0.075469 (-0.009641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429778 / 1.841788 (-0.412009) | 20.967563 / 8.074308 (12.893255) | 19.329723 / 10.191392 (9.138331) | 0.222048 / 0.680424 (-0.458376) | 0.033507 / 0.534201 (-0.500694) | 0.436801 / 0.579283 (-0.142482) | 0.530197 / 0.434364 (0.095833) | 0.491532 / 0.540337 (-0.048805) | 0.718216 / 1.386936 (-0.668720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007798 / 0.011353 (-0.003555) | 0.004748 / 0.011008 (-0.006260) | 0.070847 / 0.038508 (0.032339) | 0.069338 / 0.023109 (0.046229) | 0.400890 / 0.275898 (0.124992) | 0.429482 / 0.323480 (0.106002) | 0.006469 / 0.007986 (-0.001517) | 0.003514 / 0.004328 (-0.000814) | 0.069049 / 0.004250 (0.064798) | 0.059800 / 0.037052 (0.022748) | 0.415644 / 0.258489 (0.157155) | 0.432562 / 0.293841 (0.138721) | 0.043778 / 0.128546 (-0.084768) | 0.015141 / 0.075646 (-0.060506) | 0.081521 / 0.419271 (-0.337750) | 0.054692 / 0.043533 (0.011160) | 0.404497 / 0.255139 (0.149358) | 0.419783 / 0.283200 (0.136583) | 0.029588 / 0.141683 (-0.112094) | 1.593506 / 1.452155 (0.141351) | 1.615977 / 1.492716 (0.123261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270981 / 0.018006 (0.252975) | 0.522074 / 0.000490 (0.521584) | 0.026568 / 0.000200 (0.026368) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031551 / 0.037411 (-0.005861) | 0.086723 / 0.014526 (0.072197) | 0.103315 / 0.176557 (-0.073242) | 0.154692 / 0.737135 (-0.582443) | 0.099472 / 0.296338 (-0.196866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570238 / 0.215209 (0.355029) | 5.655963 / 2.077655 (3.578308) | 2.662670 / 1.504120 (1.158550) | 2.380903 / 1.541195 (0.839709) | 2.409467 / 1.468490 (0.940977) | 0.828055 / 4.584777 (-3.756722) | 4.964698 / 3.745712 (1.218986) | 4.299995 / 5.269862 (-0.969867) | 2.824162 / 4.565676 (-1.741514) | 0.095872 / 0.424275 (-0.328403) | 0.007907 / 0.007607 (0.000300) | 0.701595 / 0.226044 (0.475551) | 7.131965 / 2.268929 (4.863036) | 3.250554 / 55.444624 (-52.194070) | 2.531916 / 6.876477 (-4.344561) | 2.717908 / 2.142072 (0.575835) | 1.014479 / 4.805227 (-3.790748) | 0.223804 / 6.500664 (-6.276861) | 0.071893 / 0.075469 (-0.003576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541702 / 1.841788 (-0.300086) | 21.668219 / 8.074308 (13.593911) | 18.916032 / 10.191392 (8.724640) | 0.205915 / 0.680424 (-0.474508) | 0.026356 / 0.534201 (-0.507845) | 0.429122 / 0.579283 (-0.150161) | 0.506110 / 0.434364 (0.071746) | 0.510148 / 0.540337 (-0.030190) | 0.724699 / 1.386936 (-0.662237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4ca93ff86551b398c979862e7be7305725a240b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006884 / 0.011353 (-0.004469) | 0.004492 / 0.011008 (-0.006516) | 0.085439 / 0.038508 (0.046931) | 0.083905 / 0.023109 (0.060796) | 0.313604 / 0.275898 (0.037706) | 0.354683 / 0.323480 (0.031203) | 0.006535 / 0.007986 (-0.001451) | 0.004318 / 0.004328 (-0.000011) | 0.066129 / 0.004250 (0.061879) | 0.057568 / 0.037052 (0.020516) | 0.317162 / 0.258489 (0.058672) | 0.372501 / 0.293841 (0.078660) | 0.031059 / 0.128546 (-0.097488) | 0.009013 / 0.075646 (-0.066634) | 0.288794 / 0.419271 (-0.130478) | 0.053326 / 0.043533 (0.009793) | 0.314318 / 0.255139 (0.059179) | 0.357505 / 0.283200 (0.074305) | 0.027020 / 0.141683 (-0.114663) | 1.530653 / 1.452155 (0.078498) | 1.599782 / 1.492716 (0.107066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278788 / 0.018006 (0.260782) | 0.626822 / 0.000490 (0.626333) | 0.003780 / 0.000200 (0.003580) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031703 / 0.037411 (-0.005708) | 0.085654 / 0.014526 (0.071128) | 0.754858 / 0.176557 (0.578301) | 0.212251 / 0.737135 (-0.524885) | 0.171344 / 0.296338 (-0.124994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382291 / 0.215209 (0.167082) | 3.825612 / 2.077655 (1.747958) | 1.874553 / 1.504120 (0.370433) | 1.712574 / 1.541195 (0.171379) | 1.791479 / 1.468490 (0.322989) | 0.481005 / 4.584777 (-4.103772) | 3.530559 / 3.745712 (-0.215153) | 3.395305 / 5.269862 (-1.874557) | 2.133747 / 4.565676 (-2.431930) | 0.056139 / 0.424275 (-0.368136) | 0.007424 / 0.007607 (-0.000183) | 0.458321 / 0.226044 (0.232277) | 4.577665 / 2.268929 (2.308736) | 2.380233 / 55.444624 (-53.064392) | 2.004060 / 6.876477 (-4.872417) | 2.290712 / 2.142072 (0.148639) | 0.570157 / 4.805227 (-4.235070) | 0.131670 / 6.500664 (-6.368994) | 0.060684 / 0.075469 (-0.014785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294929 / 1.841788 (-0.546858) | 21.386663 / 8.074308 (13.312355) | 14.389440 / 10.191392 (4.198048) | 0.171177 / 0.680424 (-0.509247) | 0.018660 / 0.534201 (-0.515541) | 0.394385 / 0.579283 (-0.184898) | 0.424942 / 0.434364 (-0.009422) | 0.463618 / 0.540337 (-0.076719) | 0.651499 / 
1.386936 (-0.735437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007079 / 0.011353 (-0.004274) | 0.004615 / 0.011008 (-0.006393) | 0.066300 / 0.038508 (0.027792) | 0.092636 / 0.023109 (0.069527) | 0.399080 / 0.275898 (0.123182) | 0.429873 / 0.323480 (0.106393) | 0.006689 / 0.007986 (-0.001297) | 0.004358 / 0.004328 (0.000029) | 0.067155 / 0.004250 (0.062905) | 0.064040 / 0.037052 (0.026988) | 0.399905 / 0.258489 (0.141416) | 0.448237 / 0.293841 (0.154397) | 0.031985 / 0.128546 (-0.096561) | 0.009053 / 0.075646 (-0.066593) | 0.071904 / 0.419271 (-0.347368) | 0.048759 / 0.043533 (0.005227) | 0.386797 / 0.255139 (0.131658) | 0.411240 / 0.283200 (0.128040) | 0.028568 / 0.141683 (-0.113115) | 1.501037 / 1.452155 (0.048882) | 1.594560 / 1.492716 (0.101844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300756 / 0.018006 (0.282750) | 0.631220 / 0.000490 (0.630730) | 0.010163 / 0.000200 (0.009963) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033716 / 0.037411 (-0.003695) | 0.093562 / 0.014526 (0.079037) | 0.106975 / 0.176557 (-0.069582) | 0.161919 / 0.737135 (-0.575216) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410392 / 0.215209 (0.195183) | 4.094411 / 2.077655 (2.016756) | 2.085868 / 1.504120 (0.581748) | 1.959589 / 1.541195 (0.418394) | 2.096683 / 1.468490 (0.628193) | 
0.494593 / 4.584777 (-4.090184) | 3.854302 / 3.745712 (0.108590) | 3.742303 / 5.269862 (-1.527558) | 2.379983 / 4.565676 (-2.185693) | 0.058640 / 0.424275 (-0.365635) | 0.008092 / 0.007607 (0.000484) | 0.486957 / 0.226044 (0.260912) | 4.855784 / 2.268929 (2.586855) | 2.654029 / 55.444624 (-52.790595) | 2.237627 / 6.876477 (-4.638850) | 2.536955 / 2.142072 (0.394882) | 0.622398 / 4.805227 (-4.182829) | 0.139212 / 6.500664 (-6.361452) | 0.062805 / 0.075469 (-0.012664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374862 / 1.841788 (-0.466926) | 22.797015 / 8.074308 (14.722707) | 14.393995 / 10.191392 (4.202603) | 0.196603 / 0.680424 (-0.483821) | 0.018602 / 0.534201 (-0.515599) | 0.394568 / 0.579283 (-0.184715) | 0.408792 / 0.434364 (-0.025572) | 0.486706 / 0.540337 (-0.053631) | 0.652365 / 1.386936 (-0.734571) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5713299a88f527ea162a099c2bf2cbceada8fb86 \"CML watermark\")\n" ]
2023-07-31T16:32:01
2023-08-03T10:13:32
2023-08-03T10:04:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6107", "html_url": "https://github.com/huggingface/datasets/pull/6107", "diff_url": "https://github.com/huggingface/datasets/pull/6107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6107.patch", "merged_at": "2023-08-03T10:04:18" }
Fix issues introduced by the deprecation of `use_auth_token` in: - #5996 affecting the functions: - `get_authentication_headers_for_url` - `request_etag` - `get_from_cache` Currently, a `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588 ``` FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' ``` Related to: - #6094
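A minimal sketch of the keyword-migration pattern behind this fix (not the actual `datasets` code; the helper name and header contents are hypothetical): the old `use_auth_token` keyword is still accepted so existing callers don't crash with `TypeError`, but it is warned about and forwarded to the new `token` parameter.

```python
import warnings


def get_auth_headers(url, token=None, use_auth_token="deprecated"):
    # Hypothetical shim illustrating the deprecation pattern: accept the old
    # keyword, warn, and forward it to the new `token` parameter.
    if use_auth_token != "deprecated":
        warnings.warn("'use_auth_token' is deprecated; pass 'token' instead.", FutureWarning)
        token = use_auth_token
    # Build headers from `token` (illustrative only).
    return {"authorization": f"Bearer {token}"} if token else {}


# An old-style call keeps working instead of raising TypeError:
print(get_auth_headers("https://example.com/file.parquet", use_auth_token="hf_xxx"))
```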
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6107/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6106/comments
https://api.github.com/repos/huggingface/datasets/issues/6106/events
https://github.com/huggingface/datasets/issues/6106
1,829,131,223
I_kwDODunzps5tBlPX
6,106
Load local JSON file as dataset
{ "login": "CiaoHe", "id": 39040787, "node_id": "MDQ6VXNlcjM5MDQwNzg3", "avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CiaoHe", "html_url": "https://github.com/CiaoHe", "followers_url": "https://api.github.com/users/CiaoHe/followers", "following_url": "https://api.github.com/users/CiaoHe/following{/other_user}", "gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}", "starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions", "organizations_url": "https://api.github.com/users/CiaoHe/orgs", "repos_url": "https://api.github.com/users/CiaoHe/repos", "events_url": "https://api.github.com/users/CiaoHe/events{/privacy}", "received_events_url": "https://api.github.com/users/CiaoHe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-31T12:53:49
2023-07-31T12:53:49
null
NONE
null
null
null
### Describe the bug I tried to load a local JSON file as a dataset but failed to parse it because some columns are of 'float' type. ### Steps to reproduce the bug 1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)` 2. The error is then triggered: `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` ### Expected behavior Columns of 'float' type should be allowed, or at least those columns should be converted to str type. I tried to avoid the error by naively converting the float items to str: ```python # if col type is not str, we need to convert it to str mapping = {} for col in keys: if isinstance(dataset[0][col], str): mapping[col] = [row.get(col) for row in dataset] else: mapping[col] = [str(row.get(col)) for row in dataset] ``` ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
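One possible workaround, shown as a hedged sketch (the file path `data.json` and the column names `text`/`score` are placeholders, not taken from the issue): declare the column types explicitly so the JSON builder does not have to infer them from mixed string/float values.

```python
from datasets import Features, Value, load_dataset

# Declare the schema up front so a column holding values such as -0.2253 is
# parsed as float64 (or declare it as string, if that is what the rest of the
# data uses) instead of letting type inference fail with ArrowInvalid.
features = Features(
    {
        "text": Value("string"),    # hypothetical column
        "score": Value("float64"),  # hypothetical column containing floats
    }
)

data = load_dataset("json", data_files="data.json", features=features)
```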
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6106/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6105/comments
https://api.github.com/repos/huggingface/datasets/issues/6105/events
https://github.com/huggingface/datasets/pull/6105
1,829,008,430
PR_kwDODunzps5WyiJD
6,105
Fix error when loading from GCP bucket
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006706 / 0.011353 (-0.004647) | 0.004016 / 0.011008 (-0.006992) | 0.083696 / 0.038508 (0.045188) | 0.074340 / 0.023109 (0.051230) | 0.327338 / 0.275898 (0.051440) | 0.366663 / 0.323480 (0.043183) | 0.004052 / 0.007986 (-0.003934) | 0.003423 / 0.004328 (-0.000906) | 0.064576 / 0.004250 (0.060326) | 0.055037 / 0.037052 (0.017985) | 0.325089 / 0.258489 (0.066600) | 0.379986 / 0.293841 (0.086145) | 0.031614 / 0.128546 (-0.096932) | 0.008553 / 0.075646 (-0.067094) | 0.287430 / 0.419271 (-0.131841) | 0.053032 / 0.043533 (0.009499) | 0.318990 / 0.255139 (0.063851) | 0.364426 / 0.283200 (0.081226) | 0.024926 / 0.141683 (-0.116757) | 1.461835 / 1.452155 (0.009680) | 1.557172 / 1.492716 (0.064456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212430 / 0.018006 (0.194424) | 0.512891 / 0.000490 (0.512402) | 0.004772 / 0.000200 (0.004572) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027873 / 0.037411 (-0.009538) | 0.085598 / 0.014526 (0.071072) | 0.097330 / 0.176557 (-0.079226) | 0.152235 / 0.737135 (-0.584900) | 0.097787 / 0.296338 (-0.198552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384645 / 0.215209 (0.169436) | 3.841161 / 2.077655 (1.763506) | 
1.863696 / 1.504120 (0.359577) | 1.685082 / 1.541195 (0.143887) | 1.772904 / 1.468490 (0.304414) | 0.480177 / 4.584777 (-4.104599) | 3.601537 / 3.745712 (-0.144175) | 3.273647 / 5.269862 (-1.996214) | 2.014415 / 4.565676 (-2.551261) | 0.056668 / 0.424275 (-0.367607) | 0.007257 / 0.007607 (-0.000350) | 0.458194 / 0.226044 (0.232150) | 4.577311 / 2.268929 (2.308382) | 2.333983 / 55.444624 (-53.110641) | 1.964508 / 6.876477 (-4.911969) | 2.193379 / 2.142072 (0.051307) | 0.577557 / 4.805227 (-4.227670) | 0.133899 / 6.500664 (-6.366765) | 0.060804 / 0.075469 (-0.014665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249490 / 1.841788 (-0.592298) | 19.791875 / 8.074308 (11.717567) | 14.418728 / 10.191392 (4.227336) | 0.167788 / 0.680424 (-0.512636) | 0.018993 / 0.534201 (-0.515208) | 0.396141 / 0.579283 (-0.183142) | 0.412427 / 0.434364 (-0.021937) | 0.456718 / 0.540337 (-0.083619) | 0.641383 / 1.386936 (-0.745553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006546 / 0.011353 (-0.004807) | 0.004059 / 0.011008 (-0.006949) | 0.064523 / 0.038508 (0.026015) | 0.074988 / 0.023109 (0.051878) | 0.388932 / 0.275898 (0.113034) | 0.424496 / 0.323480 (0.101016) | 0.005226 / 0.007986 (-0.002760) | 0.003409 / 0.004328 (-0.000920) | 0.064284 / 0.004250 (0.060034) | 0.056829 / 0.037052 (0.019777) | 0.386457 / 0.258489 (0.127968) | 0.428063 / 0.293841 (0.134222) | 0.031411 / 0.128546 (-0.097136) | 0.008577 / 0.075646 (-0.067070) | 0.070357 / 0.419271 (-0.348915) | 0.048920 / 0.043533 (0.005388) | 0.385197 / 0.255139 (0.130058) | 0.407167 / 0.283200 (0.123967) | 0.024469 / 0.141683 (-0.117214) | 1.482733 / 1.452155 (0.030578) | 1.539027 / 1.492716 (0.046311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227532 / 0.018006 (0.209526) | 0.448792 / 0.000490 (0.448302) | 0.004139 / 0.000200 (0.003939) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031004 / 0.037411 (-0.006408) | 0.088163 / 0.014526 (0.073637) | 0.101452 / 0.176557 (-0.075105) | 0.152907 / 0.737135 (-0.584229) | 0.102325 / 0.296338 (-0.194014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418092 / 0.215209 (0.202883) | 4.162277 / 2.077655 (2.084623) | 2.232987 / 1.504120 (0.728867) | 2.143583 / 1.541195 (0.602388) | 2.246142 / 1.468490 (0.777652) | 0.490181 / 4.584777 (-4.094596) | 3.631514 / 3.745712 (-0.114198) | 3.315025 / 5.269862 (-1.954837) | 2.101853 / 4.565676 (-2.463823) | 0.057905 / 0.424275 (-0.366370) | 0.007686 / 0.007607 (0.000079) | 0.489965 / 0.226044 (0.263921) | 4.894375 / 2.268929 (2.625447) | 2.655459 / 55.444624 (-52.789165) | 2.262211 / 6.876477 (-4.614266) | 2.505335 / 2.142072 (0.363263) | 0.591329 / 4.805227 (-4.213898) | 0.133554 / 6.500664 (-6.367110) | 0.061922 / 0.075469 (-0.013547) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347483 / 1.841788 (-0.494304) | 20.027011 / 8.074308 (11.952703) | 14.430737 / 10.191392 (4.239345) | 0.165767 / 0.680424 (-0.514657) | 0.018460 / 0.534201 (-0.515741) | 0.393790 / 0.579283 (-0.185494) | 0.407213 / 0.434364 (-0.027151) | 0.474459 / 0.540337 (-0.065879) | 0.635054 / 1.386936 (-0.751882) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7f575111481e2e2f4d4fc9180771797f69ebcc44 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007652 / 0.011353 (-0.003701) | 0.004581 / 0.011008 (-0.006427) | 0.101629 / 0.038508 (0.063121) | 0.090233 / 0.023109 (0.067124) | 0.392789 / 0.275898 (0.116891) | 0.432163 / 0.323480 (0.108683) | 0.004694 / 0.007986 (-0.003292) | 0.003927 / 0.004328 (-0.000401) | 0.076533 / 0.004250 (0.072282) | 0.064442 / 0.037052 (0.027390) | 0.397539 / 0.258489 (0.139050) | 0.441323 / 0.293841 (0.147482) | 0.036278 / 0.128546 (-0.092268) | 0.009810 / 0.075646 (-0.065836) | 0.343537 / 0.419271 (-0.075734) | 0.060273 / 0.043533 (0.016740) | 0.395023 / 0.255139 (0.139884) | 0.427210 / 0.283200 (0.144011) | 0.031717 / 0.141683 (-0.109966) | 1.771221 / 1.452155 (0.319066) | 1.896336 / 1.492716 (0.403620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235081 / 0.018006 (0.217075) | 0.512781 / 0.000490 (0.512292) | 0.004920 / 0.000200 (0.004721) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033525 / 0.037411 (-0.003887) | 0.104416 / 0.014526 (0.089890) | 0.115695 / 0.176557 (-0.060861) | 0.182216 / 0.737135 (-0.554919) | 0.116259 / 0.296338 (-0.180079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454817 / 0.215209 (0.239608) | 4.527753 / 2.077655 (2.450098) | 2.222273 / 1.504120 (0.718153) | 2.038448 / 1.541195 (0.497253) | 2.179444 / 1.468490 (0.710953) | 0.573665 / 4.584777 (-4.011112) | 4.504943 / 3.745712 (0.759231) | 3.848435 / 5.269862 (-1.421427) | 2.455185 / 4.565676 (-2.110491) | 0.067985 / 0.424275 (-0.356290) | 0.008719 / 0.007607 (0.001112) | 0.552405 / 0.226044 (0.326360) | 5.515251 / 2.268929 (3.246322) | 2.851557 / 55.444624 (-52.593067) | 2.463070 / 6.876477 (-4.413407) | 2.761596 / 2.142072 (0.619524) | 0.688561 / 4.805227 (-4.116667) | 0.159946 / 6.500664 (-6.340718) | 0.075435 / 0.075469 (-0.000034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505178 / 1.841788 (-0.336610) | 23.555236 / 8.074308 (15.480928) | 17.272759 / 10.191392 (7.081367) | 0.206495 / 0.680424 (-0.473928) | 0.021869 / 0.534201 (-0.512332) | 0.469271 / 0.579283 (-0.110012) | 0.469200 / 0.434364 (0.034837) | 0.542437 / 0.540337 
(0.002100) | 0.792864 / 1.386936 (-0.594072) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008151 / 0.011353 (-0.003202) | 0.004992 / 0.011008 (-0.006016) | 0.079545 / 0.038508 (0.041037) | 0.100234 / 0.023109 (0.077125) | 0.492791 / 0.275898 (0.216893) | 0.511315 / 0.323480 (0.187835) | 0.006878 / 0.007986 (-0.001108) | 0.003807 / 0.004328 (-0.000522) | 0.080876 / 0.004250 (0.076625) | 0.076734 / 0.037052 (0.039681) | 0.518247 / 0.258489 (0.259758) | 0.524202 / 0.293841 (0.230361) | 0.039896 / 0.128546 (-0.088650) | 0.016581 / 0.075646 (-0.059065) | 0.101228 / 0.419271 (-0.318043) | 0.061990 / 0.043533 (0.018457) | 0.490611 / 0.255139 (0.235472) | 0.514930 / 0.283200 (0.231730) | 0.028680 / 0.141683 (-0.113002) | 1.966215 / 1.452155 (0.514061) | 2.047757 / 1.492716 (0.555040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286807 / 0.018006 (0.268801) | 0.506448 / 0.000490 (0.505959) | 0.005867 / 0.000200 (0.005667) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037141 / 0.037411 (-0.000270) | 0.113232 / 0.014526 (0.098706) | 0.121201 / 0.176557 (-0.055356) | 0.185472 / 0.737135 (-0.551663) | 0.122896 / 0.296338 (-0.173442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514491 / 0.215209 (0.299282) | 4.942457 / 2.077655 (2.864802) | 2.533519 / 1.504120 (1.029399) | 2.371011 / 1.541195 (0.829817) | 2.495604 / 
1.468490 (1.027114) | 0.576224 / 4.584777 (-4.008553) | 4.368584 / 3.745712 (0.622872) | 3.885598 / 5.269862 (-1.384263) | 2.443596 / 4.565676 (-2.122080) | 0.068905 / 0.424275 (-0.355371) | 0.009171 / 0.007607 (0.001564) | 0.584977 / 0.226044 (0.358932) | 5.835220 / 2.268929 (3.566291) | 3.189037 / 55.444624 (-52.255588) | 2.753228 / 6.876477 (-4.123249) | 3.009062 / 2.142072 (0.866990) | 0.690179 / 4.805227 (-4.115048) | 0.157981 / 6.500664 (-6.342683) | 0.074518 / 0.075469 (-0.000951) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599907 / 1.841788 (-0.241880) | 23.853903 / 8.074308 (15.779595) | 17.419796 / 10.191392 (7.228404) | 0.204974 / 0.680424 (-0.475450) | 0.022014 / 0.534201 (-0.512187) | 0.473379 / 0.579283 (-0.105905) | 0.461346 / 0.434364 (0.026982) | 0.564881 / 0.540337 (0.024543) | 0.752933 / 1.386936 (-0.634003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f49c9ca993fa600fae0e327636d52657328e7ffb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004805) | 0.004020 / 0.011008 (-0.006988) | 0.086828 / 0.038508 (0.048320) | 0.072924 / 0.023109 (0.049815) | 0.312847 / 0.275898 (0.036949) | 0.344605 / 0.323480 (0.021125) | 0.004117 / 0.007986 (-0.003868) | 0.004365 / 0.004328 (0.000037) | 0.066755 / 0.004250 (0.062505) | 0.053248 / 0.037052 (0.016195) | 0.315744 / 0.258489 (0.057255) | 0.362426 / 0.293841 (0.068585) | 0.030732 / 0.128546 (-0.097814) | 0.008516 / 0.075646 (-0.067130) | 0.289927 / 0.419271 (-0.129345) | 0.052115 / 0.043533 (0.008582) | 0.308026 / 0.255139 (0.052887) | 0.343115 / 0.283200 (0.059915) | 0.024131 / 0.141683 (-0.117551) | 1.464290 / 1.452155 (0.012135) | 1.559359 / 1.492716 (0.066642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216744 / 0.018006 (0.198738) | 0.473156 / 0.000490 (0.472666) | 0.004176 / 0.000200 
(0.003977) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028500 / 0.037411 (-0.008911) | 0.083892 / 0.014526 (0.069366) | 0.131851 / 0.176557 (-0.044705) | 0.162202 / 0.737135 (-0.574933) | 0.127989 / 0.296338 (-0.168349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404555 / 0.215209 (0.189346) | 4.035989 / 2.077655 (1.958334) | 2.025174 / 1.504120 (0.521054) | 1.835785 / 1.541195 (0.294590) | 1.909819 / 1.468490 (0.441329) | 0.475352 / 4.584777 (-4.109425) | 3.548055 / 3.745712 (-0.197657) | 3.234782 / 5.269862 (-2.035080) | 2.010305 / 4.565676 (-2.555371) | 0.056507 / 0.424275 (-0.367768) | 0.007259 / 0.007607 (-0.000348) | 0.482021 / 0.226044 (0.255977) | 4.818559 / 2.268929 (2.549631) | 2.528765 / 55.444624 (-52.915860) | 2.159804 / 6.876477 (-4.716673) | 2.380640 / 2.142072 (0.238567) | 0.585005 / 4.805227 (-4.220222) | 0.133811 / 6.500664 (-6.366853) | 0.060686 / 0.075469 (-0.014783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260902 / 1.841788 (-0.580886) | 19.500215 / 8.074308 (11.425907) | 14.164698 / 10.191392 (3.973306) | 0.172492 / 0.680424 (-0.507932) | 0.018221 / 0.534201 (-0.515980) | 0.392609 / 0.579283 (-0.186674) | 0.423265 / 0.434364 (-0.011099) | 0.454705 / 0.540337 (-0.085633) | 0.639856 / 1.386936 (-0.747080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006656 / 0.011353 (-0.004697) | 0.003903 / 0.011008 (-0.007106) | 0.063780 / 0.038508 (0.025272) | 0.076848 / 0.023109 (0.053739) | 0.379429 / 0.275898 (0.103531) | 0.442554 / 0.323480 (0.119074) | 0.005327 / 0.007986 (-0.002658) | 0.003318 / 0.004328 (-0.001010) | 0.064307 / 0.004250 (0.060056) | 0.057183 / 0.037052 (0.020131) | 0.398163 / 0.258489 (0.139674) | 0.448532 / 0.293841 (0.154691) | 0.031322 / 0.128546 (-0.097224) | 0.008462 / 0.075646 (-0.067184) | 0.070354 / 0.419271 (-0.348917) | 0.048420 / 0.043533 (0.004887) | 0.368304 / 0.255139 (0.113165) | 0.428786 / 0.283200 (0.145587) | 0.023921 / 0.141683 (-0.117762) | 1.499281 / 1.452155 (0.047126) | 1.554448 / 1.492716 (0.061731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238830 / 0.018006 (0.220824) | 0.464196 / 0.000490 (0.463706) | 0.004812 / 0.000200 (0.004613) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031642 / 0.037411 (-0.005770) | 0.089205 / 0.014526 (0.074679) | 0.101577 / 0.176557 (-0.074980) | 0.154993 / 0.737135 (-0.582142) | 0.102935 / 0.296338 (-0.193403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415218 / 0.215209 (0.200009) | 4.137711 / 2.077655 (2.060056) | 2.128757 / 1.504120 (0.624637) | 1.961086 / 1.541195 (0.419891) | 2.047552 / 1.468490 (0.579061) | 0.486953 / 4.584777 (-4.097824) | 3.587851 / 3.745712 (-0.157861) | 3.280771 / 5.269862 (-1.989090) | 2.016980 / 4.565676 (-2.548697) | 0.057284 / 0.424275 (-0.366991) | 0.007705 / 0.007607 (0.000097) | 0.492242 / 0.226044 (0.266197) | 4.923213 / 2.268929 (2.654285) | 2.672528 / 55.444624 (-52.772097) | 2.292862 / 6.876477 (-4.583614) | 2.517410 / 2.142072 (0.375337) | 0.614798 / 4.805227 (-4.190429) | 0.149642 / 6.500664 (-6.351023) | 0.062898 / 0.075469 (-0.012571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323266 / 1.841788 (-0.518522) | 19.891504 / 8.074308 (11.817196) | 14.115069 / 10.191392 (3.923677) | 0.169859 / 0.680424 (-0.510564) | 0.018538 / 0.534201 (-0.515663) | 0.398456 / 0.579283 (-0.180827) | 0.410111 / 0.434364 (-0.024253) | 0.483198 / 0.540337 (-0.057139) | 0.639283 / 1.386936 (-0.747653) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#01e2194f2aab6aa98686a2069ee5201b69a53c14 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007731 / 0.011353 (-0.003622) | 0.004064 / 0.011008 (-0.006944) | 0.095261 / 0.038508 (0.056753) | 0.081594 / 0.023109 (0.058485) | 0.390413 / 0.275898 (0.114515) | 0.415542 / 0.323480 (0.092063) | 0.006031 / 0.007986 (-0.001954) | 0.003817 / 0.004328 (-0.000512) | 0.066381 / 0.004250 (0.062131) | 0.058262 / 0.037052 (0.021210) | 0.383626 / 0.258489 (0.125137) | 0.443237 / 0.293841 (0.149396) | 0.034358 / 0.128546 (-0.094188) | 0.010002 / 0.075646 (-0.065644) | 0.317472 / 0.419271 (-0.101800) | 0.057428 / 0.043533 (0.013895) | 0.393929 / 0.255139 (0.138790) | 0.444572 / 0.283200 (0.161373) | 0.026295 / 0.141683 (-0.115388) | 1.603639 / 1.452155 (0.151484) | 1.707750 / 1.492716 (0.215034) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222171 / 0.018006 (0.204165) | 0.491762 / 0.000490 (0.491272) | 0.003389 / 0.000200 (0.003189) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029420 / 0.037411 (-0.007991) | 0.086201 / 0.014526 (0.071676) | 0.100150 / 0.176557 (-0.076406) | 0.162338 / 0.737135 (-0.574797) | 0.099349 / 0.296338 (-0.196989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445976 / 0.215209 
(0.230767) | 4.460197 / 2.077655 (2.382542) | 2.211767 / 1.504120 (0.707647) | 1.988740 / 1.541195 (0.447545) | 2.052289 / 1.468490 (0.583799) | 0.570321 / 4.584777 (-4.014456) | 4.148777 / 3.745712 (0.403065) | 3.750977 / 5.269862 (-1.518885) | 2.309443 / 4.565676 (-2.256234) | 0.064552 / 0.424275 (-0.359724) | 0.008167 / 0.007607 (0.000560) | 0.523283 / 0.226044 (0.297238) | 5.349347 / 2.268929 (3.080419) | 2.710292 / 55.444624 (-52.734332) | 2.344252 / 6.876477 (-4.532225) | 2.549903 / 2.142072 (0.407831) | 0.665942 / 4.805227 (-4.139285) | 0.154108 / 6.500664 (-6.346556) | 0.070181 / 0.075469 (-0.005289) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455733 / 1.841788 (-0.386054) | 21.846958 / 8.074308 (13.772650) | 15.133865 / 10.191392 (4.942473) | 0.199009 / 0.680424 (-0.481415) | 0.021299 / 0.534201 (-0.512902) | 0.421555 / 0.579283 (-0.157729) | 0.437639 / 0.434364 (0.003275) | 0.498568 / 0.540337 (-0.041769) | 0.719649 / 1.386936 (-0.667287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007858 / 0.011353 (-0.003495) | 0.004629 / 0.011008 (-0.006380) | 0.075701 / 0.038508 (0.037193) | 0.084425 / 0.023109 (0.061316) | 0.436650 / 0.275898 (0.160752) | 0.466046 / 0.323480 (0.142566) | 0.006042 / 0.007986 (-0.001944) | 0.003834 / 0.004328 (-0.000495) | 0.074729 / 0.004250 (0.070478) | 0.065983 / 0.037052 (0.028931) | 0.447239 / 0.258489 (0.188750) | 0.466728 / 0.293841 (0.172887) | 0.035814 / 0.128546 (-0.092733) | 0.009919 / 0.075646 (-0.065727) | 0.081151 / 0.419271 (-0.338120) | 0.057256 / 0.043533 (0.013723) | 0.435609 / 0.255139 (0.180470) | 0.448901 / 0.283200 (0.165701) | 0.026325 / 0.141683 (-0.115357) | 1.745658 / 1.452155 (0.293503) | 1.804137 / 1.492716 (0.311421) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302551 / 0.018006 (0.284544) | 0.498438 / 0.000490 (0.497948) | 0.038562 / 0.000200 (0.038362) | 0.000411 / 0.000054 
(0.000356) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035573 / 0.037411 (-0.001839) | 0.104957 / 0.014526 (0.090431) | 0.117208 / 0.176557 (-0.059349) | 0.178935 / 0.737135 (-0.558200) | 0.124577 / 0.296338 (-0.171761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467076 / 0.215209 (0.251867) | 4.698852 / 2.077655 (2.621197) | 2.453389 / 1.504120 (0.949269) | 2.257378 / 1.541195 (0.716183) | 2.338615 / 1.468490 (0.870125) | 0.542379 / 4.584777 (-4.042398) | 4.066895 / 3.745712 (0.321183) | 3.689540 / 5.269862 (-1.580321) | 2.268997 / 4.565676 (-2.296679) | 0.064754 / 0.424275 (-0.359521) | 0.008866 / 0.007607 (0.001259) | 0.546732 / 0.226044 (0.320687) | 5.487765 / 2.268929 (3.218836) | 2.974126 / 55.444624 (-52.470498) | 2.585492 / 6.876477 (-4.290985) | 2.754417 / 2.142072 (0.612345) | 0.652045 / 4.805227 (-4.153183) | 0.145597 / 6.500664 (-6.355067) | 0.065415 / 0.075469 (-0.010054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553970 / 1.841788 (-0.287818) | 22.300954 / 8.074308 (14.226646) | 15.640990 / 10.191392 (5.449598) | 0.170903 / 0.680424 (-0.509521) | 0.021750 / 0.534201 (-0.512451) | 0.455316 / 0.579283 (-0.123967) | 0.455051 / 0.434364 (0.020687) | 0.536174 / 0.540337 (-0.004164) | 0.735930 / 1.386936 (-0.651006) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f68139846c26b43631bd235114854f4bf6cb9954 \"CML watermark\")\n" ]
2023-07-31T11:44:46
2023-08-01T10:48:52
2023-08-01T10:38:54
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6105", "html_url": "https://github.com/huggingface/datasets/pull/6105", "diff_url": "https://github.com/huggingface/datasets/pull/6105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6105.patch", "merged_at": "2023-08-01T10:38:54" }
Fix `resolve_pattern` for filesystems with a tuple protocol. Fix #6100. The buggy code lines were introduced by: - #6028
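An illustrative sketch of the underlying issue, not the actual patch: fsspec filesystems such as gcsfs expose their protocol as a tuple (e.g. `("gs", "gcs")`), so code that assumes `fs.protocol` is a single string breaks when building paths. Normalizing to the first entry is one way to handle it.

```python
import fsspec


def first_protocol(fs: fsspec.AbstractFileSystem) -> str:
    # Filesystems such as gcsfs declare `protocol` as a tuple like ("gs", "gcs");
    # normalize it to a single string before using it to build paths/URIs.
    protocol = fs.protocol
    return protocol[0] if isinstance(protocol, (list, tuple)) else protocol


fs = fsspec.filesystem("file")
print(first_protocol(fs))  # -> "file"
```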
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6105/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6104/comments
https://api.github.com/repos/huggingface/datasets/issues/6104/events
https://github.com/huggingface/datasets/issues/6104
1,828,959,107
I_kwDODunzps5tA7OD
6,104
HF Datasets data access is extremely slow even when in memory
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462" ]
2023-07-31T11:12:19
2023-08-01T11:22:43
null
CONTRIBUTOR
null
null
null
### Describe the bug Doing a simple `some_dataset[:10]` can take more than a minute. Profiling it: <img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab"> `some_dataset` is completely in memory with no disk cache. This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long? It's faster to produce the dataset from scratch than to access it from HF Datasets! ### Steps to reproduce the bug I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1). ```python #!/usr/bin/env python3 import sys import time import torch from datasets import load_dataset def main(dataset_name): # Start the timer start_time = time.time() # Load the dataset from Hugging Face Hub dataset = load_dataset(dataset_name) # Set the dataset format as torch dataset.set_format(type="torch") # Perform an identity map dataset = dataset.map(lambda example: example, batched=True, batch_size=20) # End the timer end_time = time.time() # Print the time taken print(f"Time taken: {end_time - start_time:.2f} seconds") if __name__ == "__main__": dataset_name = "NightMachinery/hf_datasets_bug1" print(f"dataset_name: {dataset_name}") main(dataset_name) ``` ### Expected behavior _ ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
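As a possible workaround for the question raised in the report above (whether the Arrow-backed format can be bypassed so `_tensorize` is not hit on every access), here is a minimal sketch that materializes the columns as PyTorch tensors once and indexes those tensors directly. The split name and the column names (`input_ids`, `label`) are hypothetical placeholders, and the sketch assumes the columns are numeric with a fixed per-row shape; it is not the library's recommended approach, just one way to trade memory for access speed.

```python
# Minimal sketch, not the library's recommended approach: trade memory for
# access speed by materializing the columns as tensors once, up front.
# Assumptions: a "train" split exists, and the hypothetical columns
# "input_ids" / "label" are numeric with a fixed per-row shape.
import torch
from torch.utils.data import DataLoader, TensorDataset
from datasets import load_dataset

ds = load_dataset("NightMachinery/hf_datasets_bug1", split="train")

# Return numpy arrays (rather than Python lists) when columns are accessed.
ds = ds.with_format("numpy")

# Pull each column out of Arrow exactly once; torch.tensor copies the data
# into a contiguous tensor.
features = torch.tensor(ds["input_ids"])  # hypothetical column name
labels = torch.tensor(ds["label"])        # hypothetical column name

# Subsequent slicing is plain tensor indexing; no per-access Arrow conversion.
first_ten = features[:10], labels[:10]

# Optional: wrap the tensors so they can be fed to a standard DataLoader.
loader = DataLoader(TensorDataset(features, labels), batch_size=20)
```

Whether this is viable depends on the dataset fitting in memory as dense tensors; the batched identity `map` from the report would then be replaced by ordinary tensor operations.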
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6104/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6103/comments
https://api.github.com/repos/huggingface/datasets/issues/6103/events
https://github.com/huggingface/datasets/pull/6103
1,828,515,165
PR_kwDODunzps5Ww2gV
6,103
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6103). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006528 / 0.011353 (-0.004825) | 0.003909 / 0.011008 (-0.007099) | 0.083954 / 0.038508 (0.045446) | 0.070513 / 0.023109 (0.047404) | 0.344362 / 0.275898 (0.068464) | 0.370278 / 0.323480 (0.046798) | 0.005395 / 0.007986 (-0.002591) | 0.003323 / 0.004328 (-0.001005) | 0.064538 / 0.004250 (0.060288) | 0.055616 / 0.037052 (0.018564) | 0.353590 / 0.258489 (0.095101) | 0.382159 / 0.293841 (0.088318) | 0.031133 / 0.128546 (-0.097414) | 0.008429 / 0.075646 (-0.067217) | 0.288665 / 0.419271 (-0.130606) | 0.052626 / 0.043533 (0.009093) | 0.347676 / 0.255139 (0.092537) | 0.363726 / 0.283200 (0.080526) | 0.021956 / 0.141683 (-0.119727) | 1.506091 / 1.452155 (0.053936) | 1.563940 / 1.492716 (0.071223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207658 / 0.018006 (0.189652) | 0.473411 / 0.000490 (0.472922) | 0.005437 / 0.000200 (0.005237) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027769 / 0.037411 (-0.009643) | 0.082566 / 0.014526 (0.068040) | 0.092700 / 0.176557 (-0.083857) | 0.152589 / 0.737135 (-0.584546) | 0.093772 / 0.296338 (-0.202566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.401072 / 0.215209 (0.185863) | 3.997922 / 2.077655 (1.920267) | 2.028223 / 1.504120 (0.524103) | 1.845229 / 1.541195 (0.304035) | 1.883980 / 1.468490 (0.415489) | 0.485112 / 4.584777 (-4.099665) | 3.657048 / 3.745712 (-0.088664) | 4.998475 / 5.269862 (-0.271386) | 3.007417 / 4.565676 (-1.558259) | 0.057003 / 0.424275 (-0.367272) | 0.007270 / 0.007607 (-0.000338) | 0.482220 / 0.226044 (0.256176) | 4.817560 / 2.268929 (2.548631) | 2.484285 / 55.444624 (-52.960340) | 2.163327 / 6.876477 (-4.713149) | 2.326412 / 2.142072 (0.184339) | 0.600349 / 4.805227 (-4.204878) | 0.134245 / 6.500664 (-6.366419) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281440 / 1.841788 (-0.560347) | 19.165591 / 8.074308 (11.091283) | 14.007728 / 10.191392 (3.816336) | 0.168367 / 0.680424 (-0.512057) | 0.018149 / 0.534201 (-0.516052) | 0.391688 / 0.579283 (-0.187595) | 0.414528 / 0.434364 (-0.019836) | 0.456964 / 0.540337 (-0.083373) | 0.613807 / 1.386936 (-0.773129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004851) | 0.003956 / 0.011008 (-0.007052) | 0.064297 / 0.038508 (0.025789) | 0.073430 / 0.023109 (0.050321) | 0.364113 / 0.275898 (0.088215) | 0.389021 / 0.323480 (0.065541) | 0.005375 / 0.007986 (-0.002611) | 0.003363 / 0.004328 (-0.000966) | 0.064404 / 0.004250 (0.060153) | 0.056664 / 0.037052 (0.019612) | 0.365504 / 0.258489 (0.107015) | 0.398477 / 0.293841 (0.104636) | 0.031739 / 0.128546 (-0.096807) | 0.008663 / 0.075646 (-0.066984) | 0.070757 / 0.419271 (-0.348515) | 0.051014 / 0.043533 (0.007481) | 0.368287 / 0.255139 (0.113148) | 0.382941 / 0.283200 (0.099742) | 0.024642 / 0.141683 (-0.117041) | 1.516721 / 1.452155 (0.064567) | 1.557625 / 1.492716 (0.064908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208248 / 0.018006 (0.190242) | 0.443560 / 0.000490 (0.443070) | 0.004004 / 0.000200 
(0.003805) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006295) | 0.086814 / 0.014526 (0.072288) | 0.099111 / 0.176557 (-0.077445) | 0.155032 / 0.737135 (-0.582104) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413080 / 0.215209 (0.197871) | 4.115546 / 2.077655 (2.037891) | 2.162073 / 1.504120 (0.657953) | 2.008107 / 1.541195 (0.466912) | 2.052317 / 1.468490 (0.583827) | 0.485158 / 4.584777 (-4.099619) | 3.617478 / 3.745712 (-0.128234) | 5.030564 / 5.269862 (-0.239298) | 2.787812 / 4.565676 (-1.777865) | 0.057466 / 0.424275 (-0.366809) | 0.007656 / 0.007607 (0.000049) | 0.490037 / 0.226044 (0.263993) | 4.887896 / 2.268929 (2.618968) | 2.639644 / 55.444624 (-52.804981) | 2.258051 / 6.876477 (-4.618426) | 2.417573 / 2.142072 (0.275500) | 0.604473 / 4.805227 (-4.200754) | 0.134770 / 6.500664 (-6.365894) | 0.061709 / 0.075469 (-0.013760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342500 / 1.841788 (-0.499288) | 19.354990 / 8.074308 (11.280682) | 14.161975 / 10.191392 (3.970583) | 0.157084 / 0.680424 (-0.523339) | 0.018227 / 0.534201 (-0.515974) | 0.391819 / 0.579283 (-0.187464) | 0.399157 / 0.434364 (-0.035207) | 0.460582 / 0.540337 (-0.079756) | 0.612183 / 1.386936 (-0.774753) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b20f6a82410dd47e89585bb932616a22e0eaf2e6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009318 / 0.011353 (-0.002035) | 0.005515 / 0.011008 (-0.005493) | 0.108532 / 0.038508 (0.070024) | 0.103583 / 0.023109 (0.080473) | 0.419249 / 0.275898 (0.143351) | 0.453573 / 0.323480 (0.130093) | 0.006601 / 0.007986 (-0.001384) | 0.005297 / 0.004328 (0.000968) | 0.082737 / 0.004250 (0.078487) | 0.064708 / 0.037052 (0.027656) | 0.425679 / 0.258489 (0.167190) | 0.462028 / 0.293841 (0.168187) | 0.048104 / 0.128546 (-0.080442) | 0.014069 / 0.075646 (-0.061577) | 0.377780 / 0.419271 (-0.041491) | 0.067510 / 0.043533 (0.023977) | 0.422421 / 0.255139 (0.167282) | 0.447127 / 0.283200 (0.163927) | 0.037745 / 0.141683 (-0.103938) | 1.855306 / 1.452155 (0.403152) | 1.943876 / 1.492716 (0.451160) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280161 / 0.018006 (0.262155) | 0.598001 / 0.000490 (0.597512) | 0.001130 / 0.000200 (0.000930) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036064 / 0.037411 (-0.001347) | 0.113256 / 0.014526 (0.098730) | 0.120598 / 0.176557 (-0.055959) | 0.191386 / 0.737135 (-0.545750) | 0.118125 / 0.296338 (-0.178214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616887 / 0.215209 (0.401678) | 6.085498 / 2.077655 (4.007844) | 2.639428 / 1.504120 (1.135308) | 2.215444 / 1.541195 (0.674249) | 2.311990 / 1.468490 (0.843500) | 0.820539 / 4.584777 (-3.764238) | 5.306010 / 3.745712 (1.560298) | 4.731726 / 5.269862 (-0.538136) | 3.053933 / 4.565676 (-1.511744) | 0.098862 / 0.424275 (-0.325413) | 0.009456 / 0.007607 (0.001849) | 0.725455 / 0.226044 (0.499411) | 7.367385 / 2.268929 (5.098457) | 3.464921 / 55.444624 (-51.979703) | 2.833868 / 6.876477 (-4.042608) | 3.033008 / 2.142072 (0.890935) | 1.036751 / 4.805227 (-3.768476) | 0.243646 / 6.500664 (-6.257018) | 0.081079 / 0.075469 (0.005610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584695 / 1.841788 (-0.257093) | 25.150355 / 8.074308 (17.076047) | 21.826622 / 10.191392 (11.635230) | 0.212502 / 0.680424 (-0.467921) | 0.029865 / 0.534201 (-0.504335) | 0.496814 / 0.579283 (-0.082470) | 
0.611959 / 0.434364 (0.177595) | 0.550434 / 0.540337 (0.010097) | 0.800897 / 1.386936 (-0.586039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005236 / 0.011008 (-0.005772) | 0.082402 / 0.038508 (0.043894) | 0.090578 / 0.023109 (0.067468) | 0.487302 / 0.275898 (0.211404) | 0.523639 / 0.323480 (0.200159) | 0.006684 / 0.007986 (-0.001302) | 0.004306 / 0.004328 (-0.000023) | 0.083273 / 0.004250 (0.079023) | 0.068585 / 0.037052 (0.031532) | 0.487751 / 0.258489 (0.229262) | 0.538972 / 0.293841 (0.245131) | 0.048915 / 0.128546 (-0.079632) | 0.014312 / 0.075646 (-0.061335) | 0.091863 / 0.419271 (-0.327409) | 0.066114 / 0.043533 (0.022581) | 0.483552 / 0.255139 (0.228413) | 0.522250 / 0.283200 (0.239050) | 0.038533 / 0.141683 (-0.103150) | 1.803834 / 1.452155 (0.351680) | 1.891927 / 1.492716 (0.399211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336662 / 0.018006 (0.318656) | 0.611408 / 0.000490 (0.610918) | 0.014310 / 0.000200 (0.014110) | 0.000152 / 0.000054 (0.000097) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034755 / 0.037411 (-0.002656) | 0.101008 / 0.014526 (0.086483) | 0.124530 / 0.176557 (-0.052026) | 0.179844 / 0.737135 (-0.557292) | 0.125027 / 0.296338 (-0.171312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618341 / 0.215209 (0.403132) | 6.146848 / 2.077655 (4.069193) | 2.893305 / 1.504120 
(1.389185) | 2.608722 / 1.541195 (1.067528) | 2.671276 / 1.468490 (1.202786) | 0.860096 / 4.584777 (-3.724681) | 5.440671 / 3.745712 (1.694959) | 4.776958 / 5.269862 (-0.492903) | 3.098300 / 4.565676 (-1.467376) | 0.098664 / 0.424275 (-0.325611) | 0.009270 / 0.007607 (0.001663) | 0.712780 / 0.226044 (0.486735) | 7.199721 / 2.268929 (4.930793) | 3.620723 / 55.444624 (-51.823902) | 3.052218 / 6.876477 (-3.824259) | 3.321093 / 2.142072 (1.179021) | 1.070992 / 4.805227 (-3.734235) | 0.224091 / 6.500664 (-6.276573) | 0.083395 / 0.075469 (0.007926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716867 / 1.841788 (-0.124921) | 25.534617 / 8.074308 (17.460309) | 25.221014 / 10.191392 (15.029621) | 0.248098 / 0.680424 (-0.432326) | 0.029659 / 0.534201 (-0.504542) | 0.492929 / 0.579283 (-0.086355) | 0.618253 / 0.434364 (0.183889) | 0.577108 / 0.540337 (0.036771) | 0.803188 / 1.386936 (-0.583748) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#584db360eed9155e173b199ba5fc037562b7b862 \"CML watermark\")\n" ]
2023-07-31T06:44:05
2023-07-31T06:55:58
2023-07-31T06:45:41
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6103", "html_url": "https://github.com/huggingface/datasets/pull/6103", "diff_url": "https://github.com/huggingface/datasets/pull/6103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6103.patch", "merged_at": "2023-07-31T06:45:41" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6103/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6102/comments
https://api.github.com/repos/huggingface/datasets/issues/6102/events
https://github.com/huggingface/datasets/pull/6102
1,828,494,896
PR_kwDODunzps5WwyGy
6,102
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006517 / 0.011353 (-0.004836) | 0.004217 / 0.011008 (-0.006792) | 0.083162 / 0.038508 (0.044654) | 0.074476 / 0.023109 (0.051367) | 0.321193 / 0.275898 (0.045295) | 0.358348 / 0.323480 (0.034868) | 0.005531 / 0.007986 (-0.002455) | 0.003621 / 0.004328 (-0.000707) | 0.063819 / 0.004250 (0.059568) | 0.056524 / 0.037052 (0.019471) | 0.322145 / 0.258489 (0.063656) | 0.371415 / 0.293841 (0.077574) | 0.030612 / 0.128546 (-0.097934) | 0.008907 / 0.075646 (-0.066739) | 0.289451 / 0.419271 (-0.129821) | 0.051959 / 0.043533 (0.008426) | 0.317729 / 0.255139 (0.062590) | 0.339750 / 0.283200 (0.056550) | 0.022430 / 0.141683 (-0.119253) | 1.487661 / 1.452155 (0.035506) | 1.554916 / 1.492716 (0.062199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296673 / 0.018006 (0.278667) | 0.599183 / 0.000490 (0.598694) | 0.002524 / 0.000200 (0.002324) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027898 / 0.037411 (-0.009514) | 0.080870 / 0.014526 (0.066344) | 0.094894 / 0.176557 (-0.081662) | 0.152350 / 0.737135 (-0.584785) | 0.095765 / 0.296338 (-0.200573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415442 / 0.215209 (0.200233) | 4.161155 / 2.077655 (2.083500) | 2.117061 / 1.504120 (0.612941) | 1.937846 / 1.541195 (0.396651) | 1.979635 / 1.468490 
(0.511145) | 0.488381 / 4.584777 (-4.096396) | 3.509836 / 3.745712 (-0.235876) | 3.833074 / 5.269862 (-1.436788) | 2.307536 / 4.565676 (-2.258141) | 0.057059 / 0.424275 (-0.367216) | 0.007366 / 0.007607 (-0.000241) | 0.487752 / 0.226044 (0.261708) | 4.869406 / 2.268929 (2.600478) | 2.594775 / 55.444624 (-52.849849) | 2.191712 / 6.876477 (-4.684765) | 2.413220 / 2.142072 (0.271147) | 0.584513 / 4.805227 (-4.220714) | 0.132162 / 6.500664 (-6.368502) | 0.061059 / 0.075469 (-0.014410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245178 / 1.841788 (-0.596610) | 20.624563 / 8.074308 (12.550255) | 14.675545 / 10.191392 (4.484153) | 0.165838 / 0.680424 (-0.514586) | 0.018700 / 0.534201 (-0.515501) | 0.392475 / 0.579283 (-0.186808) | 0.399884 / 0.434364 (-0.034480) | 0.457478 / 0.540337 (-0.082859) | 0.624553 / 1.386936 (-0.762383) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006716 / 0.011353 (-0.004637) | 0.004308 / 0.011008 (-0.006700) | 0.064495 / 0.038508 (0.025987) | 0.083194 / 0.023109 (0.060085) | 0.371994 / 0.275898 (0.096096) | 0.433045 / 0.323480 (0.109566) | 0.005535 / 0.007986 (-0.002450) | 0.003469 / 0.004328 (-0.000859) | 0.064342 / 0.004250 (0.060092) | 0.059362 / 0.037052 (0.022309) | 0.393819 / 0.258489 (0.135330) | 0.442591 / 0.293841 (0.148750) | 0.031594 / 0.128546 (-0.096952) | 0.008943 / 0.075646 (-0.066703) | 0.070689 / 0.419271 (-0.348582) | 0.049219 / 0.043533 (0.005686) | 0.361568 / 0.255139 (0.106429) | 0.417085 / 0.283200 (0.133886) | 0.025112 / 0.141683 (-0.116571) | 1.497204 / 1.452155 (0.045049) | 1.552781 / 1.492716 (0.060064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325254 / 0.018006 (0.307248) | 0.528399 / 0.000490 (0.527909) | 0.007429 / 0.000200 (0.007229) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029908 / 0.037411 (-0.007504) | 0.087114 / 0.014526 (0.072588) | 0.103366 / 0.176557 (-0.073191) | 0.155145 / 0.737135 (-0.581990) | 0.103458 / 0.296338 (-0.192880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.093327 / 2.077655 (2.015673) | 2.154115 / 1.504120 (0.649995) | 1.953492 / 1.541195 (0.412297) | 2.021532 / 1.468490 (0.553042) | 0.478928 / 4.584777 (-4.105849) | 3.515287 / 3.745712 (-0.230426) | 4.976239 / 5.269862 (-0.293623) | 2.832803 / 4.565676 (-1.732873) | 0.057239 / 0.424275 (-0.367036) | 0.007718 / 0.007607 (0.000111) | 0.484102 / 0.226044 (0.258057) | 4.833020 / 2.268929 (2.564092) | 2.564550 / 55.444624 (-52.880074) | 2.268969 / 6.876477 (-4.607508) | 2.513308 / 2.142072 (0.371235) | 0.582822 / 4.805227 (-4.222406) | 0.133989 / 6.500664 (-6.366675) | 0.062078 / 0.075469 (-0.013391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.393766 / 1.841788 (-0.448021) | 20.224546 / 8.074308 (12.150238) | 14.359438 / 10.191392 (4.168046) | 0.166358 / 0.680424 (-0.514066) | 0.018840 / 0.534201 (-0.515361) | 0.393206 / 0.579283 (-0.186077) | 0.404220 / 0.434364 (-0.030144) | 0.462346 / 0.540337 (-0.077992) | 0.603078 / 1.386936 (-0.783858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006835 / 0.011353 (-0.004518) | 0.004530 / 0.011008 (-0.006478) | 0.087506 / 0.038508 (0.048997) | 0.088289 / 0.023109 (0.065180) | 0.351575 / 0.275898 (0.075677) | 0.391873 / 0.323480 (0.068393) | 0.005627 / 0.007986 (-0.002359) | 0.003735 / 0.004328 (-0.000594) | 0.065747 / 0.004250 (0.061497) | 0.058779 / 0.037052 (0.021726) | 0.358076 / 0.258489 (0.099587) | 0.408466 / 0.293841 (0.114626) | 0.031369 / 0.128546 (-0.097178) | 0.008807 / 0.075646 (-0.066839) | 0.293253 / 0.419271 (-0.126019) | 0.052950 / 0.043533 (0.009417) | 0.350411 / 0.255139 (0.095272) | 0.384827 / 0.283200 (0.101627) | 0.026219 / 0.141683 (-0.115464) | 1.464290 / 1.452155 (0.012136) | 1.549688 / 1.492716 (0.056972) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270354 / 0.018006 (0.252348) | 0.593436 / 0.000490 (0.592946) | 0.003872 / 0.000200 (0.003673) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031625 / 0.037411 (-0.005787) | 0.092599 / 0.014526 (0.078073) | 0.104619 / 0.176557 (-0.071938) | 0.163183 / 0.737135 (-0.573952) | 0.103245 / 0.296338 (-0.193094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390213 / 0.215209 (0.175004) | 3.894519 / 2.077655 (1.816864) | 1.905739 / 1.504120 (0.401619) | 1.728873 / 1.541195 (0.187678) | 1.838692 / 1.468490 (0.370202) | 0.484730 / 4.584777 (-4.100047) | 3.706749 / 3.745712 (-0.038963) | 5.572311 / 5.269862 (0.302449) | 3.389949 / 4.565676 (-1.175727) | 0.057315 / 0.424275 (-0.366960) | 0.007475 / 0.007607 (-0.000132) | 0.464690 / 0.226044 (0.238645) | 4.622242 / 2.268929 (2.353314) | 2.380957 / 55.444624 (-53.063667) | 2.038225 / 6.876477 (-4.838251) | 2.358881 / 2.142072 (0.216809) | 0.606358 / 4.805227 (-4.198869) | 0.133584 / 6.500664 (-6.367080) | 0.061894 / 0.075469 (-0.013575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259575 / 1.841788 (-0.582213) | 20.915216 / 8.074308 (12.840908) | 14.971952 / 10.191392 (4.780560) | 0.160206 / 0.680424 (-0.520218) | 0.018675 / 0.534201 (-0.515526) | 0.396821 / 0.579283 (-0.182462) | 0.430982 / 0.434364 (-0.003382) | 0.452895 / 0.540337 (-0.087443) | 0.647869 / 
1.386936 (-0.739067) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007194 / 0.011353 (-0.004158) | 0.004340 / 0.011008 (-0.006669) | 0.065125 / 0.038508 (0.026617) | 0.096243 / 0.023109 (0.073134) | 0.374361 / 0.275898 (0.098463) | 0.411863 / 0.323480 (0.088383) | 0.005813 / 0.007986 (-0.002172) | 0.003615 / 0.004328 (-0.000713) | 0.064953 / 0.004250 (0.060703) | 0.063171 / 0.037052 (0.026119) | 0.376238 / 0.258489 (0.117749) | 0.415826 / 0.293841 (0.121985) | 0.031926 / 0.128546 (-0.096620) | 0.008821 / 0.075646 (-0.066825) | 0.072150 / 0.419271 (-0.347122) | 0.049484 / 0.043533 (0.005951) | 0.369691 / 0.255139 (0.114552) | 0.390669 / 0.283200 (0.107470) | 0.025732 / 0.141683 (-0.115950) | 1.493833 / 1.452155 (0.041679) | 1.601786 / 1.492716 (0.109070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284279 / 0.018006 (0.266272) | 0.585909 / 0.000490 (0.585419) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033642 / 0.037411 (-0.003769) | 0.095328 / 0.014526 (0.080802) | 0.105810 / 0.176557 (-0.070746) | 0.159779 / 0.737135 (-0.577357) | 0.108938 / 0.296338 (-0.187400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408112 / 0.215209 (0.192902) | 4.067035 / 2.077655 (1.989380) | 2.114504 / 1.504120 (0.610384) | 1.944027 / 1.541195 (0.402832) | 2.066117 / 1.468490 (0.597627) | 
0.486441 / 4.584777 (-4.098336) | 3.622659 / 3.745712 (-0.123053) | 3.399310 / 5.269862 (-1.870552) | 2.183151 / 4.565676 (-2.382525) | 0.057490 / 0.424275 (-0.366785) | 0.007955 / 0.007607 (0.000347) | 0.490221 / 0.226044 (0.264177) | 4.887301 / 2.268929 (2.618373) | 2.679806 / 55.444624 (-52.764819) | 2.258992 / 6.876477 (-4.617484) | 2.592493 / 2.142072 (0.450420) | 0.606515 / 4.805227 (-4.198712) | 0.135645 / 6.500664 (-6.365019) | 0.063956 / 0.075469 (-0.011513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331304 / 1.841788 (-0.510483) | 21.458611 / 8.074308 (13.384303) | 14.898964 / 10.191392 (4.707572) | 0.172110 / 0.680424 (-0.508314) | 0.018791 / 0.534201 (-0.515409) | 0.395944 / 0.579283 (-0.183339) | 0.424526 / 0.434364 (-0.009838) | 0.462517 / 0.540337 (-0.077821) | 0.610139 / 1.386936 (-0.776797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005957 / 0.011353 (-0.005396) | 0.003581 / 0.011008 (-0.007427) | 0.079624 / 0.038508 (0.041116) | 0.058004 / 0.023109 (0.034895) | 0.309345 / 0.275898 (0.033447) | 0.346653 / 0.323480 (0.023173) | 0.005420 / 0.007986 (-0.002566) | 0.002906 / 0.004328 (-0.001423) | 0.061970 / 0.004250 (0.057720) | 0.047627 / 0.037052 (0.010575) | 0.314096 / 0.258489 (0.055607) | 0.361368 / 0.293841 (0.067527) | 0.027211 / 0.128546 (-0.101335) | 0.007853 / 0.075646 (-0.067793) | 0.260202 / 0.419271 (-0.159070) | 0.045308 / 0.043533 (0.001775) | 0.312150 / 0.255139 (0.057011) | 0.341085 / 0.283200 (0.057886) | 0.021302 / 0.141683 (-0.120381) | 1.430315 / 1.452155 (-0.021840) | 1.608989 / 1.492716 (0.116273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185289 / 0.018006 (0.167283) | 0.423318 / 0.000490 (0.422828) | 0.005741 / 0.000200 (0.005541) | 
0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023777 / 0.037411 (-0.013634) | 0.071937 / 0.014526 (0.057412) | 0.079406 / 0.176557 (-0.097151) | 0.143815 / 0.737135 (-0.593320) | 0.081648 / 0.296338 (-0.214690) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431514 / 0.215209 (0.216305) | 4.314471 / 2.077655 (2.236817) | 2.305167 / 1.504120 (0.801047) | 2.137894 / 1.541195 (0.596699) | 2.161034 / 1.468490 (0.692544) | 0.511701 / 4.584777 (-4.073076) | 3.098213 / 3.745712 (-0.647499) | 4.086837 / 5.269862 (-1.183024) | 2.517184 / 4.565676 (-2.048492) | 0.058272 / 0.424275 (-0.366003) | 0.006415 / 0.007607 (-0.001192) | 0.504792 / 0.226044 (0.278747) | 5.046758 / 2.268929 (2.777829) | 2.752049 / 55.444624 (-52.692576) | 2.407707 / 6.876477 (-4.468770) | 2.532162 / 2.142072 (0.390090) | 0.597562 / 4.805227 (-4.207666) | 0.125935 / 6.500664 (-6.374729) | 0.060837 / 0.075469 (-0.014632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257048 / 1.841788 (-0.584740) | 17.877849 / 8.074308 (9.803541) | 13.904805 / 10.191392 (3.713413) | 0.131647 / 0.680424 (-0.548776) | 0.016975 / 0.534201 (-0.517226) | 0.329651 / 0.579283 (-0.249633) | 0.354358 / 0.434364 (-0.080006) | 0.377545 / 0.540337 (-0.162792) | 0.545593 / 1.386936 (-0.841343) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005839 / 0.011353 (-0.005514) | 0.003580 / 0.011008 (-0.007428) | 0.062204 / 0.038508 (0.023696) | 0.057943 / 0.023109 (0.034834) | 0.400165 / 0.275898 (0.124267) | 0.427911 / 0.323480 (0.104431) | 0.004412 / 0.007986 (-0.003574) | 0.002794 / 0.004328 (-0.001534) | 0.062933 / 0.004250 (0.058683) | 0.046243 / 0.037052 (0.009191) | 0.413640 / 0.258489 (0.155151) | 0.418592 / 0.293841 (0.124751) | 0.027020 / 0.128546 (-0.101526) | 0.007927 / 0.075646 (-0.067720) | 0.067581 / 0.419271 (-0.351691) | 0.041927 / 0.043533 (-0.001606) | 0.381863 / 0.255139 (0.126724) | 0.415711 / 0.283200 (0.132511) | 0.019827 / 0.141683 (-0.121856) | 1.464049 / 1.452155 (0.011894) | 1.528387 / 1.492716 (0.035671) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224999 / 0.018006 (0.206993) | 0.419167 / 0.000490 (0.418678) | 0.000363 / 0.000200 (0.000163) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024827 / 0.037411 (-0.012585) | 0.077134 / 0.014526 (0.062608) | 0.085142 / 0.176557 (-0.091414) | 0.137400 / 0.737135 (-0.599735) | 0.086434 / 0.296338 (-0.209905) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452716 / 0.215209 (0.237507) | 4.530610 / 2.077655 (2.452955) | 2.467309 / 1.504120 (0.963189) | 2.300441 / 1.541195 (0.759246) | 2.323475 / 1.468490 (0.854985) | 0.501847 / 4.584777 (-4.082930) | 3.079432 / 3.745712 (-0.666280) | 2.793107 / 5.269862 (-2.476755) | 1.835010 / 4.565676 (-2.730666) | 0.057698 / 0.424275 (-0.366577) | 0.006756 / 0.007607 (-0.000851) | 0.529062 / 0.226044 (0.303017) | 5.287822 / 2.268929 (3.018894) | 2.908411 / 55.444624 (-52.536214) | 2.571627 / 6.876477 (-4.304850) | 2.691188 / 2.142072 (0.549116) | 0.592289 / 4.805227 (-4.212938) | 0.126091 / 6.500664 (-6.374573) | 0.062312 / 0.075469 (-0.013157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328854 / 1.841788 (-0.512933) | 18.185628 / 8.074308 (10.111320) | 13.858781 / 10.191392 (3.667389) | 0.142421 / 0.680424 (-0.538003) | 0.016535 / 0.534201 (-0.517666) | 0.330839 / 0.579283 (-0.248444) | 0.346559 / 0.434364 (-0.087805) | 0.389153 / 0.540337 (-0.151185) | 0.516897 / 1.386936 (-0.870039) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n" ]
2023-07-31T06:27:47
2023-07-31T06:48:09
2023-07-31T06:32:58
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6102", "html_url": "https://github.com/huggingface/datasets/pull/6102", "diff_url": "https://github.com/huggingface/datasets/pull/6102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6102.patch", "merged_at": "2023-07-31T06:32:58" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6102/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6101/comments
https://api.github.com/repos/huggingface/datasets/issues/6101/events
https://github.com/huggingface/datasets/pull/6101
1,828,469,648
PR_kwDODunzps5WwspW
6,101
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006543 / 0.011353 (-0.004810) | 0.003894 / 0.011008 (-0.007115) | 0.084742 / 0.038508 (0.046234) | 0.072942 / 0.023109 (0.049833) | 0.310722 / 0.275898 (0.034824) | 0.346806 / 0.323480 (0.023326) | 0.005373 / 0.007986 (-0.002613) | 0.003270 / 0.004328 (-0.001059) | 0.064379 / 0.004250 (0.060128) | 0.054876 / 0.037052 (0.017824) | 0.316794 / 0.258489 (0.058305) | 0.350353 / 0.293841 (0.056512) | 0.030683 / 0.128546 (-0.097863) | 0.008275 / 0.075646 (-0.067371) | 0.288747 / 0.419271 (-0.130525) | 0.051892 / 0.043533 (0.008359) | 0.315060 / 0.255139 (0.059921) | 0.331664 / 0.283200 (0.048464) | 0.023334 / 0.141683 (-0.118349) | 1.499734 / 1.452155 (0.047579) | 1.542006 / 1.492716 (0.049290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210488 / 0.018006 (0.192482) | 0.462187 / 0.000490 (0.461697) | 0.001280 / 0.000200 (0.001080) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027812 / 0.037411 (-0.009599) | 0.082492 / 0.014526 (0.067966) | 0.096504 / 0.176557 (-0.080053) | 0.158164 / 0.737135 (-0.578972) | 0.096678 / 0.296338 (-0.199661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403317 / 0.215209 (0.188108) | 4.008367 / 2.077655 (1.930713) | 2.033067 / 1.504120 (0.528947) | 1.869484 / 1.541195 (0.328290) | 1.947450 / 1.468490 
(0.478960) | 0.494048 / 4.584777 (-4.090729) | 3.631673 / 3.745712 (-0.114039) | 5.322167 / 5.269862 (0.052306) | 3.125570 / 4.565676 (-1.440107) | 0.057341 / 0.424275 (-0.366934) | 0.007318 / 0.007607 (-0.000289) | 0.483990 / 0.226044 (0.257945) | 4.830573 / 2.268929 (2.561645) | 2.543267 / 55.444624 (-52.901358) | 2.217890 / 6.876477 (-4.658587) | 2.435111 / 2.142072 (0.293038) | 0.597920 / 4.805227 (-4.207307) | 0.132690 / 6.500664 (-6.367974) | 0.060160 / 0.075469 (-0.015309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247656 / 1.841788 (-0.594131) | 19.436984 / 8.074308 (11.362675) | 14.504249 / 10.191392 (4.312857) | 0.167444 / 0.680424 (-0.512980) | 0.018214 / 0.534201 (-0.515987) | 0.394790 / 0.579283 (-0.184493) | 0.413770 / 0.434364 (-0.020594) | 0.474290 / 0.540337 (-0.066048) | 0.646782 / 1.386936 (-0.740154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006575 / 0.011353 (-0.004778) | 0.003924 / 0.011008 (-0.007084) | 0.064402 / 0.038508 (0.025893) | 0.072569 / 0.023109 (0.049460) | 0.361981 / 0.275898 (0.086083) | 0.398660 / 0.323480 (0.075180) | 0.005380 / 0.007986 (-0.002605) | 0.003355 / 0.004328 (-0.000974) | 0.065173 / 0.004250 (0.060923) | 0.057120 / 0.037052 (0.020067) | 0.366347 / 0.258489 (0.107858) | 0.402723 / 0.293841 (0.108882) | 0.031258 / 0.128546 (-0.097288) | 0.008499 / 0.075646 (-0.067147) | 0.070558 / 0.419271 (-0.348714) | 0.050089 / 0.043533 (0.006556) | 0.361280 / 0.255139 (0.106141) | 0.384497 / 0.283200 (0.101297) | 0.024789 / 0.141683 (-0.116893) | 1.492577 / 1.452155 (0.040422) | 1.572242 / 1.492716 (0.079525) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228054 / 0.018006 (0.210048) | 0.448317 / 0.000490 (0.447828) | 0.000368 / 0.000200 (0.000168) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.088604 / 0.014526 (0.074078) | 0.099317 / 0.176557 (-0.077239) | 0.152455 / 0.737135 (-0.584680) | 0.100444 / 0.296338 (-0.195894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411876 / 0.215209 (0.196667) | 4.108187 / 2.077655 (2.030532) | 2.096371 / 1.504120 (0.592251) | 1.923532 / 1.541195 (0.382337) | 1.998345 / 1.468490 (0.529855) | 0.483853 / 4.584777 (-4.100924) | 3.622433 / 3.745712 (-0.123279) | 3.254430 / 5.269862 (-2.015431) | 2.044342 / 4.565676 (-2.521334) | 0.056756 / 0.424275 (-0.367519) | 0.007720 / 0.007607 (0.000113) | 0.487656 / 0.226044 (0.261612) | 4.882024 / 2.268929 (2.613096) | 2.585008 / 55.444624 (-52.859616) | 2.229251 / 6.876477 (-4.647225) | 2.408318 / 2.142072 (0.266246) | 0.617537 / 4.805227 (-4.187691) | 0.132102 / 6.500664 (-6.368562) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362077 / 1.841788 (-0.479711) | 19.750714 / 8.074308 (11.676406) | 14.545299 / 10.191392 (4.353907) | 0.168666 / 0.680424 (-0.511758) | 0.018606 / 0.534201 (-0.515595) | 0.394760 / 0.579283 (-0.184523) | 0.410030 / 0.434364 (-0.024334) | 0.464742 / 0.540337 (-0.075596) | 0.610881 / 1.386936 (-0.776055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005836 / 0.011353 (-0.005517) | 0.003493 / 0.011008 (-0.007515) | 0.079877 / 0.038508 (0.041369) | 0.057299 / 0.023109 (0.034190) | 0.332945 / 0.275898 (0.057047) | 0.386615 / 0.323480 (0.063135) | 0.004437 / 0.007986 (-0.003548) | 0.002758 / 0.004328 (-0.001571) | 0.062668 / 0.004250 (0.058418) | 0.046135 / 0.037052 (0.009083) | 0.346160 / 0.258489 (0.087671) | 0.416720 / 0.293841 (0.122879) | 0.026678 / 0.128546 (-0.101868) | 0.007893 / 0.075646 (-0.067753) | 0.260427 / 0.419271 (-0.158845) | 0.044240 / 0.043533 (0.000707) | 0.328101 / 0.255139 (0.072963) | 0.380072 / 0.283200 (0.096872) | 0.020813 / 0.141683 (-0.120870) | 1.400202 / 1.452155 (-0.051952) | 1.475627 / 1.492716 (-0.017089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174479 / 0.018006 (0.156473) | 0.413810 / 0.000490 (0.413320) | 0.003059 / 0.000200 (0.002860) | 0.000212 / 0.000054 (0.000157) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023422 / 0.037411 (-0.013990) | 0.071519 / 0.014526 (0.056993) | 0.080555 / 0.176557 (-0.096001) | 0.143825 / 0.737135 (-0.593311) | 0.081182 / 0.296338 (-0.215157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406858 / 0.215209 (0.191648) | 4.161475 / 2.077655 (2.083820) | 1.991800 / 1.504120 (0.487680) | 1.811224 / 1.541195 (0.270030) | 1.828809 / 1.468490 (0.360318) | 0.504882 / 4.584777 (-4.079895) | 2.985010 / 3.745712 (-0.760703) | 3.984856 / 5.269862 (-1.285006) | 2.477936 / 4.565676 (-2.087740) | 0.057553 / 0.424275 (-0.366722) | 0.006436 / 0.007607 (-0.001172) | 0.488061 / 0.226044 (0.262016) | 4.805501 / 2.268929 (2.536573) | 2.446508 / 55.444624 (-52.998116) | 2.051406 / 6.876477 (-4.825071) | 2.177696 / 2.142072 (0.035623) | 0.588021 / 4.805227 (-4.217207) | 0.125118 / 6.500664 (-6.375546) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197130 / 1.841788 (-0.644658) | 17.867450 / 8.074308 (9.793142) | 13.536895 / 10.191392 (3.345503) | 0.137603 / 0.680424 (-0.542821) | 0.016706 / 0.534201 (-0.517495) | 0.327642 / 0.579283 (-0.251641) | 0.347201 / 0.434364 (-0.087163) | 0.379570 / 0.540337 (-0.160768) | 0.517825 / 1.386936 (-0.869111) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005769 / 0.011353 (-0.005584) | 0.003414 / 0.011008 (-0.007594) | 0.063198 / 0.038508 (0.024690) | 0.056020 / 0.023109 (0.032911) | 0.393333 / 0.275898 (0.117435) | 0.421166 / 0.323480 (0.097686) | 0.004360 / 0.007986 (-0.003626) | 0.002860 / 0.004328 (-0.001469) | 0.062712 / 0.004250 (0.058461) | 0.045363 / 0.037052 (0.008311) | 0.413156 / 0.258489 (0.154667) | 0.422897 / 0.293841 (0.129056) | 0.027092 / 0.128546 (-0.101455) | 0.007960 / 0.075646 (-0.067687) | 0.068531 / 0.419271 (-0.350740) | 0.041402 / 0.043533 (-0.002131) | 0.377008 / 0.255139 (0.121869) | 0.409142 / 0.283200 (0.125942) | 0.019707 / 0.141683 (-0.121976) | 1.440556 / 1.452155 (-0.011599) | 1.487403 / 1.492716 (-0.005314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224355 / 0.018006 (0.206349) | 0.397855 / 0.000490 (0.397365) | 0.000363 / 0.000200 (0.000163) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025107 / 0.037411 (-0.012305) | 0.076404 / 0.014526 (0.061878) | 0.083194 / 0.176557 (-0.093362) | 0.135347 / 0.737135 (-0.601789) | 0.084786 / 0.296338 (-0.211553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433024 / 0.215209 (0.217815) | 4.323879 / 2.077655 (2.246224) | 2.263004 / 1.504120 (0.758884) | 2.072053 / 1.541195 (0.530858) | 2.113916 / 1.468490 (0.645426) | 0.502742 / 4.584777 
(-4.082035) | 3.001716 / 3.745712 (-0.743996) | 2.777960 / 5.269862 (-2.491901) | 1.826514 / 4.565676 (-2.739162) | 0.057735 / 0.424275 (-0.366540) | 0.006671 / 0.007607 (-0.000937) | 0.503347 / 0.226044 (0.277303) | 5.037308 / 2.268929 (2.768380) | 2.679146 / 55.444624 (-52.765478) | 2.410899 / 6.876477 (-4.465577) | 2.467341 / 2.142072 (0.325268) | 0.589824 / 4.805227 (-4.215403) | 0.125529 / 6.500664 (-6.375135) | 0.061950 / 0.075469 (-0.013520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304128 / 1.841788 (-0.537659) | 17.950215 / 8.074308 (9.875907) | 13.673768 / 10.191392 (3.482376) | 0.129863 / 0.680424 (-0.550561) | 0.016720 / 0.534201 (-0.517481) | 0.329795 / 0.579283 (-0.249488) | 0.339057 / 0.434364 (-0.095307) | 0.382279 / 0.540337 (-0.158059) | 0.507337 / 1.386936 (-0.879599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef05b6f99a2b19990c6f5e4e28d95d28781570db \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006199 / 0.011353 (-0.005154) | 0.003749 / 0.011008 (-0.007259) | 0.080600 / 0.038508 (0.042092) | 0.061017 / 0.023109 (0.037908) | 0.319966 / 0.275898 (0.044067) | 0.354937 / 0.323480 (0.031457) | 0.004854 / 0.007986 (-0.003131) | 0.002996 / 0.004328 (-0.001333) | 0.063100 / 0.004250 (0.058849) | 0.050063 / 0.037052 (0.013011) | 0.316744 / 0.258489 (0.058255) | 0.358001 / 0.293841 (0.064160) | 0.027503 / 0.128546 (-0.101043) | 0.007876 / 0.075646 (-0.067771) | 0.262211 / 0.419271 (-0.157060) | 0.045717 / 0.043533 (0.002184) | 0.317188 / 0.255139 (0.062049) | 0.342404 / 0.283200 (0.059205) | 0.020194 / 0.141683 (-0.121489) | 1.498672 / 1.452155 (0.046517) | 1.545479 / 1.492716 (0.052762) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210985 / 0.018006 (0.192979) | 0.433592 / 0.000490 (0.433102) | 0.002864 / 0.000200 (0.002664) | 0.000079 / 0.000054 
(0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023463 / 0.037411 (-0.013948) | 0.073375 / 0.014526 (0.058850) | 0.083082 / 0.176557 (-0.093475) | 0.142583 / 0.737135 (-0.594552) | 0.084267 / 0.296338 (-0.212071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412890 / 0.215209 (0.197681) | 4.131421 / 2.077655 (2.053766) | 1.969164 / 1.504120 (0.465044) | 1.772379 / 1.541195 (0.231185) | 1.834154 / 1.468490 (0.365664) | 0.496290 / 4.584777 (-4.088487) | 3.056504 / 3.745712 (-0.689208) | 3.400962 / 5.269862 (-1.868900) | 2.120575 / 4.565676 (-2.445101) | 0.056932 / 0.424275 (-0.367343) | 0.006412 / 0.007607 (-0.001195) | 0.484521 / 0.226044 (0.258477) | 4.817474 / 2.268929 (2.548545) | 2.464075 / 55.444624 (-52.980549) | 2.085056 / 6.876477 (-4.791421) | 2.324516 / 2.142072 (0.182444) | 0.592013 / 4.805227 (-4.213214) | 0.132232 / 6.500664 (-6.368432) | 0.062825 / 0.075469 (-0.012645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228080 / 1.841788 (-0.613708) | 18.555385 / 8.074308 (10.481077) | 13.939565 / 10.191392 (3.748173) | 0.145979 / 0.680424 (-0.534445) | 0.016823 / 0.534201 (-0.517377) | 0.330569 / 0.579283 (-0.248714) | 0.358094 / 0.434364 (-0.076270) | 0.384642 / 0.540337 (-0.155696) | 0.518347 / 1.386936 (-0.868589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006198 / 0.011353 (-0.005155) | 0.003670 / 0.011008 (-0.007338) | 0.062502 / 0.038508 (0.023994) | 0.064339 / 0.023109 (0.041229) | 0.428414 / 0.275898 (0.152516) | 0.463899 / 0.323480 (0.140420) | 0.005524 / 0.007986 (-0.002462) | 0.002915 / 0.004328 (-0.001413) | 0.062521 / 0.004250 (0.058270) | 0.051182 / 0.037052 (0.014130) | 0.431144 / 0.258489 (0.172655) | 0.469465 / 0.293841 (0.175624) | 0.027463 / 0.128546 (-0.101083) | 0.007974 / 0.075646 (-0.067673) | 0.068029 / 0.419271 (-0.351242) | 0.042123 / 0.043533 (-0.001409) | 0.428667 / 0.255139 (0.173528) | 0.455917 / 0.283200 (0.172717) | 0.023264 / 0.141683 (-0.118419) | 1.426986 / 1.452155 (-0.025168) | 1.500049 / 1.492716 (0.007332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207264 / 0.018006 (0.189258) | 0.440738 / 0.000490 (0.440248) | 0.000802 / 0.000200 (0.000602) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026245 / 0.037411 (-0.011166) | 0.078749 / 0.014526 (0.064223) | 0.087873 / 0.176557 (-0.088684) | 0.141518 / 0.737135 (-0.595617) | 0.089811 / 0.296338 (-0.206527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418955 / 0.215209 (0.203746) | 4.177881 / 2.077655 (2.100226) | 2.162678 / 1.504120 (0.658558) | 1.998969 / 1.541195 (0.457775) | 2.066720 / 1.468490 (0.598230) | 0.496850 / 4.584777 (-4.087927) | 3.041179 / 3.745712 (-0.704534) | 4.126039 / 5.269862 (-1.143823) | 2.740507 / 4.565676 (-1.825169) | 0.058025 / 0.424275 (-0.366250) | 0.006846 / 0.007607 (-0.000761) | 0.493281 / 0.226044 (0.267237) | 4.930196 / 2.268929 (2.661268) | 2.685152 / 55.444624 (-52.759472) | 2.378247 / 6.876477 (-4.498230) | 2.469103 / 2.142072 (0.327031) | 0.585346 / 4.805227 (-4.219882) | 0.126099 / 6.500664 (-6.374565) | 0.062946 / 0.075469 (-0.012523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313892 / 1.841788 (-0.527896) | 19.177117 / 8.074308 (11.102809) | 14.081321 / 10.191392 (3.889929) | 0.133948 / 0.680424 (-0.546476) | 0.017128 / 0.534201 (-0.517073) | 0.332241 / 0.579283 (-0.247042) | 0.373218 / 0.434364 (-0.061145) | 0.395308 / 0.540337 (-0.145030) | 0.529883 / 1.386936 (-0.857053) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16f7c7677942083436062b904b74643accb9bcac \"CML watermark\")\n" ]
2023-07-31T06:05:36
2023-07-31T06:33:00
2023-07-31T06:18:17
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6101", "html_url": "https://github.com/huggingface/datasets/pull/6101", "diff_url": "https://github.com/huggingface/datasets/pull/6101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6101.patch", "merged_at": "2023-07-31T06:18:17" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6101/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6100/comments
https://api.github.com/repos/huggingface/datasets/issues/6100/events
https://github.com/huggingface/datasets/issues/6100
1,828,118,930
I_kwDODunzps5s9uGS
6,100
TypeError when loading from GCP bucket
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ", "We have fixed it. We are planning to do a patch release today." ]
2023-07-30T23:03:00
2023-08-03T10:00:48
2023-08-01T10:38:55
NONE
null
null
null
### Describe the bug Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1. ### Steps to reproduce the bug Load any file from a GCP bucket: ```python import datasets datasets.load_dataset("json", data_files=["gs://..."]) ``` The following exception is raised: ```python Traceback (most recent call last): ... packages/datasets/data_files.py", line 335, in resolve_pattern protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else "" TypeError: can only concatenate tuple (not "str") to tuple ``` With a `GoogleFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string. ### Expected behavior The file should be loaded without exception. ### Environment info - `datasets` version: 2.14.1 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
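A minimal sketch of the kind of protocol normalization that would avoid this `TypeError`; it is an illustration derived from the traceback above, not the actual patch shipped in `datasets`, and the helper name is made up for this example.

```python
# Illustration only: fsspec filesystems may expose `protocol` as a str or as a
# tuple of aliases (the issue notes the GCS filesystem reports ("gs", "gcs")),
# so build the URL prefix from a single normalized alias.
import fsspec


def protocol_prefix(fs: fsspec.AbstractFileSystem) -> str:
    protocol = fs.protocol[0] if isinstance(fs.protocol, (tuple, list)) else fs.protocol
    return protocol + "://" if protocol != "file" else ""
```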
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6100/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6099/comments
https://api.github.com/repos/huggingface/datasets/issues/6099/events
https://github.com/huggingface/datasets/issues/6099
1,827,893,576
I_kwDODunzps5s83FI
6,099
How do I get "amazon_us_reviews"
{ "login": "IqraBaluch", "id": 57810189, "node_id": "MDQ6VXNlcjU3ODEwMTg5", "avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IqraBaluch", "html_url": "https://github.com/IqraBaluch", "followers_url": "https://api.github.com/users/IqraBaluch/followers", "following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}", "gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}", "starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions", "organizations_url": "https://api.github.com/users/IqraBaluch/orgs", "repos_url": "https://api.github.com/users/IqraBaluch/repos", "events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}", "received_events_url": "https://api.github.com/users/IqraBaluch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\r\ntry:\r\n example = self.info.features.encode_example(record) if self.info.features is not None else record\r\nexcept Exception as e:\r\n print(record)\r\n```\r\n\r\n⬇️\r\n\r\n```\r\n{'<?xml version=\"1.0\" encoding=\"UTF-8\"?>': '<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>N2HFJ82ZV8SZW9BV</RequestId><HostId>Zw2DQ0V2GdRmvH5qWEpumK4uj5+W8YPcilQbN9fLBr3VqQOcKPHOhUZLG3LcM9X5fkOetxp48Os=</HostId></Error>'}\r\n```", "I'm getting same errors when loading this dataset", "I have figured it out. there was an option of **parquet formated files** i downloaded some from there. ", "this dataset is unfortunately no longer public", "Thanks for reporting, @IqraBaluch.\r\n\r\nWe contacted the authors and unfortunately they reported that Amazon has decided to stop distributing this dataset.", "If anyone still needs this dataset, you could find it on kaggle here : https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset", "Thanks @Maryam-Mostafa ", "@albertvillanova don't tell 'em, we have figured it out. XD" ]
2023-07-30T11:02:17
2023-08-10T05:02:36
2023-08-10T05:02:35
NONE
null
null
null
### Feature request I have been trying to load 'amazon_us_dataset" but unable to do so. `amazon_us_reviews = load_dataset('amazon_us_reviews')` `print(amazon_us_reviews)` > [ValueError: Config name is missing. Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02'] Example of usage: `load_dataset('amazon_us_reviews', 'Wireless_v1_00')`] __________________________________________________________________________ `amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00') print(amazon_us_reviews)` **ERROR** `Generating` train split: 0% 0/960872 [00:00<?, ? examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1692 ) -> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record 1694 writer.write(example, key) 11 frames KeyError: 'marketplace' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1711 e = e.__context__ -> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1713 1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ### Motivation The dataset I'm using https://huggingface.co/datasets/amazon_us_reviews ### Your contribution What is the best way to load this data
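As the comments note, the original Hub dataset is no longer distributed, so the remaining option is loading locally downloaded copies (e.g. the Parquet files or the Kaggle mirror mentioned above). A hedged sketch of loading such local files; the paths and split names are placeholders, not real locations:

```python
# Sketch: load locally downloaded review files instead of the retired Hub
# dataset. The file path below is a placeholder.
from datasets import load_dataset

reviews = load_dataset(
    "parquet",
    data_files={"train": "amazon_us_reviews/Watches_v1_00/*.parquet"},
)
print(reviews["train"][0])
```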
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6099/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6098/comments
https://api.github.com/repos/huggingface/datasets/issues/6098/events
https://github.com/huggingface/datasets/pull/6098
1,827,655,071
PR_kwDODunzps5WuCn1
6,098
Expanduser in save_to_disk()
{ "login": "Unknown3141592", "id": 51715864, "node_id": "MDQ6VXNlcjUxNzE1ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/51715864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Unknown3141592", "html_url": "https://github.com/Unknown3141592", "followers_url": "https://api.github.com/users/Unknown3141592/followers", "following_url": "https://api.github.com/users/Unknown3141592/following{/other_user}", "gists_url": "https://api.github.com/users/Unknown3141592/gists{/gist_id}", "starred_url": "https://api.github.com/users/Unknown3141592/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Unknown3141592/subscriptions", "organizations_url": "https://api.github.com/users/Unknown3141592/orgs", "repos_url": "https://api.github.com/users/Unknown3141592/repos", "events_url": "https://api.github.com/users/Unknown3141592/events{/privacy}", "received_events_url": "https://api.github.com/users/Unknown3141592/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-29T20:50:45
2023-07-29T20:58:57
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6098", "html_url": "https://github.com/huggingface/datasets/pull/6098", "diff_url": "https://github.com/huggingface/datasets/pull/6098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6098.patch", "merged_at": null }
Fixes #5651. The same problem occurs when loading from disk so I fixed it there too. I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. Imo this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`).
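An illustrative sketch of the behaviour the PR targets (not the diff itself): a path containing `~` should work with `save_to_disk`/`load_from_disk`, which amounts to expanding the user directory before touching the local filesystem.

```python
# Sketch only: today the expansion can be done by the caller; the PR moves the
# equivalent of os.path.expanduser() inside save_to_disk()/load_from_disk().
import os

from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
path = os.path.expanduser("~/my_dataset")  # caller-side workaround until the PR lands
ds.save_to_disk(path)
reloaded = Dataset.load_from_disk(path)
```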
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6098/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6097/comments
https://api.github.com/repos/huggingface/datasets/issues/6097/events
https://github.com/huggingface/datasets/issues/6097
1,827,054,143
I_kwDODunzps5s5qI_
6,097
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
{ "login": "aschoenauer-sebag", "id": 2538048, "node_id": "MDQ6VXNlcjI1MzgwNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aschoenauer-sebag", "html_url": "https://github.com/aschoenauer-sebag", "followers_url": "https://api.github.com/users/aschoenauer-sebag/followers", "following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}", "gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}", "starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions", "organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs", "repos_url": "https://api.github.com/users/aschoenauer-sebag/repos", "events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}", "received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it." ]
2023-07-28T20:31:59
2023-07-28T20:49:58
2023-07-28T20:49:58
NONE
null
null
null
### Describe the bug Hi team! I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class, fails to retrieve anything else but the embeddings themselves - not super useful. This is not the case if not using the `set_format` method: you can also retrieve any other feature value, such as an index/id/etc. Are you able to reproduce what I observe? ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This will return, for the resulting most similar vectors to `new_vector` - in particular it will not return the `ids` feature: ``` {'vectors': array([[random values ...]])} ``` ### Expected behavior The expected behavior happens when the `set_format` method is not called: ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) # foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This *will* return the `ids` of the similar vectors - with unfortunately a list of lists in lieu of the array I think for caching reasons - read it elsewhere ``` {'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']} ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
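The comments above point to the resolution: passing `output_all_columns=True` to `set_format` keeps the NumPy formatting for the vectors while still returning the other columns. A short sketch combining the reproduction with that fix:

```python
# Reproduction from the issue plus the fix noted in the comments:
# output_all_columns=True makes get_nearest_examples return "ids" as well.
import numpy as np

from datasets import Dataset

foo = Dataset.from_dict(
    {"vectors": np.random.random((100, 1024)), "ids": [str(u) for u in range(100)]}
)
foo.set_format("numpy", ["vectors"], output_all_columns=True)
foo.add_faiss_index("vectors")

scores, res = foo.get_nearest_examples("vectors", np.random.random(1024), k=3)
print(res["ids"])  # the ids of the 3 most similar vectors are now included
```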
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6097/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6096/comments
https://api.github.com/repos/huggingface/datasets/issues/6096/events
https://github.com/huggingface/datasets/pull/6096
1,826,731,091
PR_kwDODunzps5Wq9Hb
6,096
Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6096). All of your documentation changes will be reflected on that endpoint." ]
2023-07-28T16:36:59
2023-07-31T13:12:52
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6096", "html_url": "https://github.com/huggingface/datasets/pull/6096", "diff_url": "https://github.com/huggingface/datasets/pull/6096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6096.patch", "merged_at": null }
Hi to whoever is reading this! 🤗 (Most likely @mariosasko) ## What's in this PR? This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as requested at #6086. ## What's missing in this PR? As per `to_json`, `to_csv`, and `to_parquet` docstrings for the recently included `storage_options` arg, I've scoped it to 2.15.0, so we should check that before merging in case we want to scope that for 2.14.2 instead. Additionally, should we also add `fsspec` support for the `from_csv`, `from_json`, and `from_parquet` methods? If you want me to do so @mariosasko just let me know and I'll create another PR to support that too!
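A hedged usage sketch of what the PR describes: the new `storage_options` argument (scoped to a later release per the PR text) is forwarded to `fsspec`, so exports can target remote filesystems directly. The bucket name and credentials below are placeholders, and the argument does not exist before this change.

```python
# Sketch of the intended API from the PR description; storage_options is the
# argument added by this PR, and the bucket path is a placeholder.
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")
ds.to_parquet(
    "gs://my-bucket/exports/data.parquet",
    storage_options={"token": "google_default"},  # passed through to fsspec/gcsfs
)
```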
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6096/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6095/comments
https://api.github.com/repos/huggingface/datasets/issues/6095/events
https://github.com/huggingface/datasets/pull/6095
1,826,496,967
PR_kwDODunzps5WqJtr
6,095
Fix deprecation of errors in TextConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012497 / 0.011353 (0.001144) | 0.005355 / 0.011008 (-0.005654) | 0.106018 / 0.038508 (0.067510) | 0.093069 / 0.023109 (0.069960) | 0.394699 / 0.275898 (0.118801) | 0.449723 / 0.323480 (0.126243) | 0.006434 / 0.007986 (-0.001552) | 0.004187 / 0.004328 (-0.000141) | 0.079620 / 0.004250 (0.075370) | 0.062513 / 0.037052 (0.025460) | 0.410305 / 0.258489 (0.151816) | 0.467231 / 0.293841 (0.173390) | 0.048130 / 0.128546 (-0.080416) | 0.013747 / 0.075646 (-0.061899) | 0.357979 / 0.419271 (-0.061293) | 0.064764 / 0.043533 (0.021231) | 0.411029 / 0.255139 (0.155890) | 0.454734 / 0.283200 (0.171534) | 0.037215 / 0.141683 (-0.104468) | 1.801331 / 1.452155 (0.349176) | 1.951628 / 1.492716 (0.458912) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231073 / 0.018006 (0.213067) | 0.564179 / 0.000490 (0.563689) | 0.000947 / 0.000200 (0.000747) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030629 / 0.037411 (-0.006783) | 0.092522 / 0.014526 (0.077996) | 0.109781 / 0.176557 (-0.066775) | 0.183185 / 0.737135 (-0.553950) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.600095 / 0.215209 (0.384886) | 6.072868 / 2.077655 (3.995213) | 2.684109 
/ 1.504120 (1.179989) | 2.436204 / 1.541195 (0.895010) | 2.514667 / 1.468490 (1.046177) | 0.865455 / 4.584777 (-3.719322) | 5.245561 / 3.745712 (1.499849) | 5.628688 / 5.269862 (0.358826) | 3.457343 / 4.565676 (-1.108333) | 0.107563 / 0.424275 (-0.316712) | 0.008803 / 0.007607 (0.001196) | 0.754014 / 0.226044 (0.527970) | 7.341226 / 2.268929 (5.072297) | 3.482090 / 55.444624 (-51.962534) | 2.726071 / 6.876477 (-4.150406) | 3.168494 / 2.142072 (1.026422) | 1.023517 / 4.805227 (-3.781710) | 0.207440 / 6.500664 (-6.293224) | 0.073642 / 0.075469 (-0.001827) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.588636 / 1.841788 (-0.253152) | 23.305257 / 8.074308 (15.230949) | 22.071476 / 10.191392 (11.880084) | 0.242044 / 0.680424 (-0.438379) | 0.028830 / 0.534201 (-0.505371) | 0.461414 / 0.579283 (-0.117869) | 0.591024 / 0.434364 (0.156660) | 0.548984 / 0.540337 (0.008646) | 0.783318 / 1.386936 (-0.603618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008724 / 0.011353 (-0.002629) | 0.004638 / 0.011008 (-0.006371) | 0.081024 / 0.038508 (0.042516) | 0.077533 / 0.023109 (0.054423) | 0.444827 / 0.275898 (0.168929) | 0.507812 / 0.323480 (0.184332) | 0.006017 / 0.007986 (-0.001968) | 0.004204 / 0.004328 (-0.000124) | 0.082154 / 0.004250 (0.077904) | 0.063818 / 0.037052 (0.026765) | 0.463468 / 0.258489 (0.204979) | 0.536784 / 0.293841 (0.242943) | 0.046393 / 0.128546 (-0.082153) | 0.014349 / 0.075646 (-0.061298) | 0.089213 / 0.419271 (-0.330059) | 0.058313 / 0.043533 (0.014780) | 0.463674 / 0.255139 (0.208535) | 0.495865 / 0.283200 (0.212665) | 0.036586 / 0.141683 (-0.105096) | 1.801601 / 1.452155 (0.349447) | 1.871219 / 1.492716 (0.378502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273411 / 0.018006 (0.255405) | 0.531745 / 0.000490 (0.531255) | 0.000424 / 0.000200 (0.000224) | 0.000130 / 0.000054 (0.000076) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037689 / 0.037411 (0.000278) | 0.109544 / 0.014526 (0.095019) | 0.124053 / 0.176557 (-0.052504) | 0.179960 / 0.737135 (-0.557175) | 0.118218 / 0.296338 (-0.178120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639859 / 0.215209 (0.424650) | 6.347385 / 2.077655 (4.269730) | 2.910188 / 1.504120 (1.406068) | 2.698821 / 1.541195 (1.157626) | 2.802652 / 1.468490 (1.334161) | 0.816109 / 4.584777 (-3.768668) | 5.190313 / 3.745712 (1.444601) | 4.642684 / 5.269862 (-0.627178) | 2.948092 / 4.565676 (-1.617584) | 0.095877 / 0.424275 (-0.328398) | 0.009631 / 0.007607 (0.002024) | 0.779136 / 0.226044 (0.553091) | 7.611586 / 2.268929 (5.342658) | 3.760804 / 55.444624 (-51.683820) | 3.139355 / 6.876477 (-3.737122) | 3.419660 / 2.142072 (1.277587) | 1.036397 / 4.805227 (-3.768831) | 0.224015 / 6.500664 (-6.276649) | 0.084037 / 0.075469 (0.008568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.710608 / 1.841788 (-0.131179) | 24.447646 / 8.074308 (16.373338) | 21.345322 / 10.191392 (11.153930) | 0.232383 / 0.680424 (-0.448040) | 0.026381 / 0.534201 (-0.507820) | 0.475995 / 0.579283 (-0.103289) | 0.611939 / 0.434364 (0.177575) | 0.541441 / 0.540337 (0.001104) | 0.742796 / 1.386936 (-0.644140) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7929929525e734f7232cfc68d1d22fb8d53c54a3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006140 / 0.011353 (-0.005213) | 0.003664 / 0.011008 (-0.007344) | 0.080765 / 0.038508 (0.042257) | 0.065009 / 0.023109 (0.041900) | 0.312787 / 0.275898 (0.036889) | 0.354637 / 0.323480 (0.031157) | 0.004846 / 0.007986 (-0.003140) | 0.003019 / 0.004328 (-0.001310) | 0.062823 / 0.004250 (0.058573) | 0.050446 / 0.037052 (0.013394) | 0.314478 / 0.258489 (0.055989) | 0.360206 / 0.293841 (0.066365) | 0.027282 / 0.128546 (-0.101265) | 0.008024 / 0.075646 (-0.067622) | 0.262125 / 0.419271 (-0.157146) | 0.045793 / 0.043533 (0.002260) | 0.310508 / 0.255139 (0.055369) | 0.340899 / 0.283200 (0.057699) | 0.021850 / 0.141683 (-0.119833) | 1.510791 / 1.452155 (0.058636) | 1.570661 / 1.492716 (0.077944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.449310 / 0.000490 (0.448820) | 0.004556 / 0.000200 (0.004356) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023689 / 0.037411 (-0.013722) | 0.076316 / 0.014526 (0.061791) | 0.084800 / 0.176557 (-0.091757) | 0.153154 / 0.737135 (-0.583981) | 0.086467 / 0.296338 (-0.209871) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432254 / 0.215209 (0.217045) | 4.305098 / 2.077655 (2.227443) | 2.304267 / 1.504120 (0.800147) | 2.139503 / 1.541195 (0.598309) | 2.220414 / 1.468490 (0.751924) | 0.498595 / 4.584777 (-4.086182) | 3.058593 / 3.745712 (-0.687119) | 4.324501 / 5.269862 (-0.945361) | 2.667731 / 4.565676 (-1.897946) | 0.059917 / 0.424275 (-0.364358) | 0.006829 / 0.007607 (-0.000778) | 0.504608 / 0.226044 (0.278564) | 5.044480 / 2.268929 (2.775552) | 2.753080 / 55.444624 (-52.691545) | 2.449265 / 6.876477 (-4.427212) | 2.635113 / 2.142072 (0.493040) | 0.590760 / 4.805227 (-4.214467) | 0.130133 / 6.500664 (-6.370532) | 0.062759 / 0.075469 (-0.012710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267014 / 1.841788 (-0.574773) | 18.562890 / 8.074308 (10.488581) | 13.991257 / 10.191392 (3.799865) | 0.147108 / 0.680424 (-0.533315) | 0.017216 / 0.534201 (-0.516985) | 0.330317 / 0.579283 (-0.248966) | 0.351328 / 0.434364 (-0.083036) | 0.381097 / 0.540337 
(-0.159241) | 0.558718 / 1.386936 (-0.828218) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006385 / 0.011353 (-0.004967) | 0.003668 / 0.011008 (-0.007340) | 0.062581 / 0.038508 (0.024073) | 0.067006 / 0.023109 (0.043896) | 0.428465 / 0.275898 (0.152567) | 0.466106 / 0.323480 (0.142626) | 0.005806 / 0.007986 (-0.002180) | 0.003117 / 0.004328 (-0.001212) | 0.063554 / 0.004250 (0.059303) | 0.054404 / 0.037052 (0.017352) | 0.431168 / 0.258489 (0.172679) | 0.467578 / 0.293841 (0.173737) | 0.027779 / 0.128546 (-0.100767) | 0.008055 / 0.075646 (-0.067592) | 0.067718 / 0.419271 (-0.351554) | 0.043042 / 0.043533 (-0.000491) | 0.425926 / 0.255139 (0.170787) | 0.453699 / 0.283200 (0.170500) | 0.023495 / 0.141683 (-0.118187) | 1.435356 / 1.452155 (-0.016799) | 1.509340 / 1.492716 (0.016624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242322 / 0.018006 (0.224316) | 0.446865 / 0.000490 (0.446376) | 0.001079 / 0.000200 (0.000879) | 0.000065 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025376 / 0.037411 (-0.012035) | 0.079373 / 0.014526 (0.064847) | 0.088554 / 0.176557 (-0.088002) | 0.141026 / 0.737135 (-0.596109) | 0.090666 / 0.296338 (-0.205672) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434800 / 0.215209 (0.219590) | 4.314491 / 2.077655 (2.236836) | 2.320688 / 1.504120 (0.816568) | 2.163941 / 1.541195 (0.622747) | 
2.292576 / 1.468490 (0.824086) | 0.500226 / 4.584777 (-4.084551) | 3.114604 / 3.745712 (-0.631108) | 4.206997 / 5.269862 (-1.062864) | 2.461126 / 4.565676 (-2.104551) | 0.057717 / 0.424275 (-0.366558) | 0.006989 / 0.007607 (-0.000618) | 0.515623 / 0.226044 (0.289579) | 5.155301 / 2.268929 (2.886372) | 2.733589 / 55.444624 (-52.711035) | 2.542111 / 6.876477 (-4.334366) | 2.697035 / 2.142072 (0.554963) | 0.594213 / 4.805227 (-4.211014) | 0.128537 / 6.500664 (-6.372127) | 0.065223 / 0.075469 (-0.010246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306738 / 1.841788 (-0.535050) | 19.065370 / 8.074308 (10.991062) | 14.242096 / 10.191392 (4.050704) | 0.146177 / 0.680424 (-0.534246) | 0.017186 / 0.534201 (-0.517015) | 0.337224 / 0.579283 (-0.242059) | 0.349997 / 0.434364 (-0.084367) | 0.390408 / 0.540337 (-0.149930) | 0.524597 / 1.386936 (-0.862339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69ec36948b0ef1f194e9dcd43ec53a50b7708962 \"CML watermark\")\n" ]
2023-07-28T14:08:37
2023-07-31T05:26:32
2023-07-31T05:17:38
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6095", "html_url": "https://github.com/huggingface/datasets/pull/6095", "diff_url": "https://github.com/huggingface/datasets/pull/6095.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6095.patch", "merged_at": "2023-07-31T05:17:38" }
This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by: - #5974 ```python In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict") --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-701c27131a5d> in <module> ----> 1 ds = load_dataset("text", data_files="test.txt", errors="strict") ~/huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, ~/huggingface/datasets/src/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1830 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1831 # Instantiate the dataset builder -> 1832 builder_instance: DatasetBuilder = builder_cls( 1833 cache_dir=cache_dir, 1834 dataset_name=dataset_name, ~/huggingface/datasets/src/datasets/builder.py in __init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs) 371 if data_dir is not None: 372 config_kwargs["data_dir"] = data_dir --> 373 self.config, self.config_id = self._create_builder_config( 374 config_name=config_name, 375 custom_features=features, ~/huggingface/datasets/src/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs) 550 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 551 config_kwargs["version"] = self.VERSION --> 552 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 553 554 # otherwise use the config_kwargs to overwrite the attributes TypeError: __init__() got an unexpected keyword argument 'errors' ``` Similar to: - #6094
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6095/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6094/comments
https://api.github.com/repos/huggingface/datasets/issues/6094/events
https://github.com/huggingface/datasets/pull/6094
1,826,293,414
PR_kwDODunzps5WpdpA
6,094
Fix deprecation of use_auth_token in DownloadConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008996 / 0.011353 (-0.002357) | 0.004976 / 0.011008 (-0.006033) | 0.114495 / 0.038508 (0.075987) | 0.083958 / 0.023109 (0.060849) | 0.408395 / 0.275898 (0.132497) | 0.456757 / 0.323480 (0.133278) | 0.006396 / 0.007986 (-0.001589) | 0.004315 / 0.004328 (-0.000014) | 0.093558 / 0.004250 (0.089307) | 0.062067 / 0.037052 (0.025014) | 0.423452 / 0.258489 (0.164963) | 0.463947 / 0.293841 (0.170106) | 0.049934 / 0.128546 (-0.078613) | 0.013937 / 0.075646 (-0.061709) | 0.365809 / 0.419271 (-0.053463) | 0.067382 / 0.043533 (0.023849) | 0.418860 / 0.255139 (0.163721) | 0.463264 / 0.283200 (0.180065) | 0.034392 / 0.141683 (-0.107291) | 1.870685 / 1.452155 (0.418530) | 1.975313 / 1.492716 (0.482597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261748 / 0.018006 (0.243742) | 0.645510 / 0.000490 (0.645020) | 0.000376 / 0.000200 (0.000176) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032129 / 0.037411 (-0.005282) | 0.104309 / 0.014526 (0.089783) | 0.113154 / 0.176557 (-0.063403) | 0.186795 / 0.737135 (-0.550341) | 0.115584 / 0.296338 (-0.180755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577755 / 0.215209 (0.362546) | 5.984988 / 2.077655 (3.907333) | 
2.581967 / 1.504120 (1.077848) | 2.305744 / 1.541195 (0.764549) | 2.359618 / 1.468490 (0.891128) | 0.882892 / 4.584777 (-3.701885) | 5.755578 / 3.745712 (2.009866) | 8.718373 / 5.269862 (3.448511) | 5.217586 / 4.565676 (0.651909) | 0.099785 / 0.424275 (-0.324490) | 0.009008 / 0.007607 (0.001401) | 0.730937 / 0.226044 (0.504892) | 7.265309 / 2.268929 (4.996381) | 3.487167 / 55.444624 (-51.957457) | 2.750090 / 6.876477 (-4.126386) | 3.060198 / 2.142072 (0.918125) | 1.069945 / 4.805227 (-3.735282) | 0.227143 / 6.500664 (-6.273521) | 0.083601 / 0.075469 (0.008132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.754375 / 1.841788 (-0.087412) | 25.448731 / 8.074308 (17.374423) | 22.385943 / 10.191392 (12.194551) | 0.249921 / 0.680424 (-0.430503) | 0.034138 / 0.534201 (-0.500063) | 0.535170 / 0.579283 (-0.044113) | 0.605474 / 0.434364 (0.171110) | 0.580025 / 0.540337 (0.039688) | 0.810537 / 1.386936 (-0.576399) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005029 / 0.011008 (-0.005979) | 0.082200 / 0.038508 (0.043691) | 0.082386 / 0.023109 (0.059277) | 0.491869 / 0.275898 (0.215971) | 0.546735 / 0.323480 (0.223255) | 0.006893 / 0.007986 (-0.001093) | 0.004571 / 0.004328 (0.000243) | 0.085361 / 0.004250 (0.081111) | 0.063342 / 0.037052 (0.026290) | 0.522522 / 0.258489 (0.264033) | 0.560784 / 0.293841 (0.266943) | 0.047685 / 0.128546 (-0.080861) | 0.017741 / 0.075646 (-0.057905) | 0.098204 / 0.419271 (-0.321067) | 0.062919 / 0.043533 (0.019386) | 0.504005 / 0.255139 (0.248866) | 0.547022 / 0.283200 (0.263823) | 0.033731 / 0.141683 (-0.107952) | 1.869765 / 1.452155 (0.417610) | 1.935867 / 1.492716 (0.443151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304756 / 0.018006 (0.286750) | 0.623647 / 0.000490 (0.623157) | 0.000508 / 0.000200 (0.000308) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.043627 / 0.037411 (0.006216) | 0.107183 / 0.014526 (0.092657) | 0.119304 / 0.176557 (-0.057253) | 0.192651 / 0.737135 (-0.544485) | 0.125118 / 0.296338 (-0.171221) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669980 / 0.215209 (0.454771) | 6.566068 / 2.077655 (4.488413) | 3.136271 / 1.504120 (1.632152) | 2.964643 / 1.541195 (1.423448) | 2.936772 / 1.468490 (1.468282) | 0.885205 / 4.584777 (-3.699572) | 5.539062 / 3.745712 (1.793350) | 5.006133 / 5.269862 (-0.263729) | 3.313697 / 4.565676 (-1.251979) | 0.102975 / 0.424275 (-0.321301) | 0.010759 / 0.007607 (0.003152) | 0.791176 / 0.226044 (0.565132) | 7.822195 / 2.268929 (5.553266) | 3.982315 / 55.444624 (-51.462309) | 3.357026 / 6.876477 (-3.519451) | 3.561307 / 2.142072 (1.419234) | 1.056966 / 4.805227 (-3.748261) | 0.220476 / 6.500664 (-6.280188) | 0.090535 / 0.075469 (0.015066) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.897984 / 1.841788 (0.056196) | 26.411411 / 8.074308 (18.337103) | 22.951939 / 10.191392 (12.760547) | 0.216091 / 0.680424 (-0.464333) | 0.037005 / 0.534201 (-0.497196) | 0.505585 / 0.579283 (-0.073698) | 0.617794 / 0.434364 (0.183430) | 0.604631 / 0.540337 (0.064293) | 0.826356 / 1.386936 (-0.560580) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca6342c0177adc3a1d114740444e207b8525ed6e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006850 / 0.011353 (-0.004503) | 0.004062 / 0.011008 (-0.006947) | 0.086587 / 0.038508 (0.048079) | 0.079587 / 0.023109 (0.056478) | 0.353601 / 0.275898 (0.077702) | 0.396399 / 0.323480 (0.072919) | 0.004182 / 0.007986 (-0.003804) | 0.004445 / 0.004328 (0.000117) | 0.065100 / 0.004250 (0.060849) | 0.057386 / 0.037052 (0.020334) | 0.356945 / 0.258489 (0.098456) | 0.407093 / 0.293841 (0.113252) | 0.031949 / 0.128546 (-0.096597) | 0.008525 / 0.075646 (-0.067121) | 0.291310 / 0.419271 (-0.127961) | 0.053638 / 0.043533 (0.010105) | 0.359381 / 0.255139 (0.104242) | 0.399473 / 0.283200 (0.116273) | 0.025880 / 0.141683 (-0.115803) | 1.487604 / 1.452155 (0.035449) | 1.550528 / 1.492716 (0.057812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201106 / 0.018006 (0.183099) | 0.457538 / 0.000490 (0.457048) | 0.003995 / 0.000200 (0.003795) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030365 / 0.037411 (-0.007046) | 0.088064 / 0.014526 (0.073538) | 0.096432 / 0.176557 (-0.080124) | 0.158063 / 0.737135 (-0.579072) | 0.098258 / 0.296338 (-0.198080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405351 / 0.215209 (0.190142) | 4.032639 / 2.077655 (1.954984) | 2.018357 / 1.504120 (0.514237) | 1.848493 / 1.541195 (0.307298) | 1.929401 / 1.468490 (0.460910) | 0.488729 / 4.584777 (-4.096048) | 3.586114 / 3.745712 (-0.159598) | 5.279054 / 5.269862 (0.009193) | 3.113275 / 4.565676 (-1.452402) | 0.057373 / 0.424275 (-0.366902) | 0.007416 / 0.007607 (-0.000191) | 0.485514 / 0.226044 (0.259470) | 4.854389 / 2.268929 (2.585461) | 2.493113 / 55.444624 (-52.951512) | 2.128836 / 6.876477 (-4.747641) | 2.383669 / 2.142072 (0.241596) | 0.588266 / 4.805227 (-4.216962) | 0.133603 / 6.500664 (-6.367061) | 0.061812 / 0.075469 (-0.013657) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260841 / 1.841788 (-0.580947) | 20.086954 / 8.074308 (12.012646) | 14.620932 / 10.191392 (4.429540) | 0.161525 / 0.680424 (-0.518899) | 0.018102 / 0.534201 (-0.516099) | 0.393810 / 0.579283 (-0.185473) | 0.406974 / 0.434364 (-0.027390) | 0.462732 / 0.540337 
(-0.077606) | 0.634221 / 1.386936 (-0.752715) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004068 / 0.011008 (-0.006940) | 0.068009 / 0.038508 (0.029501) | 0.081298 / 0.023109 (0.058189) | 0.363531 / 0.275898 (0.087633) | 0.408482 / 0.323480 (0.085002) | 0.005601 / 0.007986 (-0.002384) | 0.003385 / 0.004328 (-0.000943) | 0.068043 / 0.004250 (0.063792) | 0.059739 / 0.037052 (0.022687) | 0.374043 / 0.258489 (0.115553) | 0.407219 / 0.293841 (0.113378) | 0.031194 / 0.128546 (-0.097352) | 0.008630 / 0.075646 (-0.067017) | 0.073755 / 0.419271 (-0.345517) | 0.049831 / 0.043533 (0.006298) | 0.363664 / 0.255139 (0.108525) | 0.381515 / 0.283200 (0.098315) | 0.026331 / 0.141683 (-0.115352) | 1.507771 / 1.452155 (0.055617) | 1.554403 / 1.492716 (0.061686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226309 / 0.018006 (0.208302) | 0.452428 / 0.000490 (0.451938) | 0.000937 / 0.000200 (0.000737) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031899 / 0.037411 (-0.005513) | 0.092090 / 0.014526 (0.077564) | 0.100838 / 0.176557 (-0.075718) | 0.153722 / 0.737135 (-0.583413) | 0.101950 / 0.296338 (-0.194389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417879 / 0.215209 (0.202669) | 4.171939 / 2.077655 (2.094284) | 2.312937 / 1.504120 (0.808817) | 2.209991 / 1.541195 (0.668796) | 2.329469 
/ 1.468490 (0.860979) | 0.484576 / 4.584777 (-4.100201) | 3.659198 / 3.745712 (-0.086514) | 5.255227 / 5.269862 (-0.014634) | 3.047430 / 4.565676 (-1.518247) | 0.057029 / 0.424275 (-0.367246) | 0.007735 / 0.007607 (0.000127) | 0.499962 / 0.226044 (0.273918) | 4.991655 / 2.268929 (2.722727) | 2.755999 / 55.444624 (-52.688625) | 2.374034 / 6.876477 (-4.502443) | 2.599759 / 2.142072 (0.457687) | 0.600319 / 4.805227 (-4.204908) | 0.146176 / 6.500664 (-6.354488) | 0.062328 / 0.075469 (-0.013142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346065 / 1.841788 (-0.495722) | 20.430343 / 8.074308 (12.356035) | 14.632959 / 10.191392 (4.441567) | 0.167007 / 0.680424 (-0.513417) | 0.018588 / 0.534201 (-0.515613) | 0.396015 / 0.579283 (-0.183268) | 0.429384 / 0.434364 (-0.004980) | 0.467746 / 0.540337 (-0.072591) | 0.615166 / 1.386936 (-0.771770) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#289bcc2ae9bf98c9414b6846ae603178a1816d3f \"CML watermark\")\n" ]
2023-07-28T11:52:21
2023-07-31T05:08:41
2023-07-31T04:59:50
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6094", "html_url": "https://github.com/huggingface/datasets/pull/6094", "diff_url": "https://github.com/huggingface/datasets/pull/6094.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6094.patch", "merged_at": "2023-07-31T04:59:50" }
This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by: - #5996 ```python In [1]: from datasets import DownloadConfig In [2]: DownloadConfig(use_auth_token=False) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-41927b449e72> in <module> ----> 1 DownloadConfig(use_auth_token=False) TypeError: __init__() got an unexpected keyword argument 'use_auth_token' ``` ```python In [1]: from datasets import get_dataset_config_names In [2]: get_dataset_config_names("squad", use_auth_token=False) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-22-4671992ead50> in <module> ----> 1 get_dataset_config_names("squad", use_auth_token=False) ~/huggingface/datasets/src/datasets/inspect.py in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 349 ``` 350 """ --> 351 dataset_module = dataset_module_factory( 352 path, 353 revision=revision, ~/huggingface/datasets/src/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1374 """ 1375 if download_config is None: -> 1376 download_config = DownloadConfig(**download_kwargs) 1377 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) 1378 download_config.extract_compressed_file = True TypeError: __init__() got an unexpected keyword argument 'use_auth_token' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6094/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6093/comments
https://api.github.com/repos/huggingface/datasets/issues/6093/events
https://github.com/huggingface/datasets/pull/6093
1,826,210,490
PR_kwDODunzps5WpLfh
6,093
Deprecate `download_custom`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007498 / 0.011353 (-0.003855) | 0.004158 / 0.011008 (-0.006850) | 0.087568 / 0.038508 (0.049060) | 0.083265 / 0.023109 (0.060156) | 0.378505 / 0.275898 (0.102607) | 0.399025 / 0.323480 (0.075545) | 0.006173 / 0.007986 (-0.001813) | 0.003743 / 0.004328 (-0.000586) | 0.071958 / 0.004250 (0.067707) | 0.059323 / 0.037052 (0.022271) | 0.377084 / 0.258489 (0.118595) | 0.408358 / 0.293841 (0.114517) | 0.035191 / 0.128546 (-0.093356) | 0.009408 / 0.075646 (-0.066238) | 0.312587 / 0.419271 (-0.106685) | 0.058073 / 0.043533 (0.014540) | 0.381977 / 0.255139 (0.126838) | 0.395611 / 0.283200 (0.112411) | 0.024191 / 0.141683 (-0.117491) | 1.572735 / 1.452155 (0.120581) | 1.687186 / 1.492716 (0.194470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208886 / 0.018006 (0.190879) | 0.474625 / 0.000490 (0.474135) | 0.006261 / 0.000200 (0.006061) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031401 / 0.037411 (-0.006011) | 0.086433 / 0.014526 (0.071907) | 0.108405 / 0.176557 (-0.068152) | 0.174564 / 0.737135 (-0.562571) | 0.099932 / 0.296338 (-0.196407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407059 / 0.215209 (0.191850) | 4.102056 / 2.077655 (2.024401) | 
1.975397 / 1.504120 (0.471277) | 1.807117 / 1.541195 (0.265922) | 1.908667 / 1.468490 (0.440177) | 0.525880 / 4.584777 (-4.058897) | 3.899639 / 3.745712 (0.153927) | 4.358664 / 5.269862 (-0.911198) | 2.586185 / 4.565676 (-1.979492) | 0.061967 / 0.424275 (-0.362308) | 0.007656 / 0.007607 (0.000049) | 0.504851 / 0.226044 (0.278807) | 5.004429 / 2.268929 (2.735500) | 2.515540 / 55.444624 (-52.929084) | 2.183142 / 6.876477 (-4.693334) | 2.369835 / 2.142072 (0.227763) | 0.623527 / 4.805227 (-4.181700) | 0.145105 / 6.500664 (-6.355559) | 0.063924 / 0.075469 (-0.011546) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472661 / 1.841788 (-0.369126) | 21.781655 / 8.074308 (13.707347) | 15.628820 / 10.191392 (5.437428) | 0.182342 / 0.680424 (-0.498082) | 0.021139 / 0.534201 (-0.513062) | 0.438610 / 0.579283 (-0.140673) | 0.451343 / 0.434364 (0.016979) | 0.563320 / 0.540337 (0.022983) | 0.740976 / 1.386936 (-0.645960) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007492 / 0.011353 (-0.003861) | 0.004429 / 0.011008 (-0.006579) | 0.068517 / 0.038508 (0.030008) | 0.078533 / 0.023109 (0.055424) | 0.383530 / 0.275898 (0.107632) | 0.435061 / 0.323480 (0.111581) | 0.005955 / 0.007986 (-0.002030) | 0.003645 / 0.004328 (-0.000683) | 0.068792 / 0.004250 (0.064541) | 0.062452 / 0.037052 (0.025399) | 0.408768 / 0.258489 (0.150279) | 0.438538 / 0.293841 (0.144697) | 0.032038 / 0.128546 (-0.096508) | 0.009196 / 0.075646 (-0.066450) | 0.074495 / 0.419271 (-0.344776) | 0.051322 / 0.043533 (0.007789) | 0.394458 / 0.255139 (0.139319) | 0.424763 / 0.283200 (0.141564) | 0.024890 / 0.141683 (-0.116793) | 1.568322 / 1.452155 (0.116167) | 1.703903 / 1.492716 (0.211187) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249630 / 0.018006 (0.231624) | 0.471412 / 0.000490 (0.470923) | 0.000435 / 0.000200 (0.000235) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033054 / 0.037411 (-0.004358) | 0.100150 / 0.014526 (0.085624) | 0.101704 / 0.176557 (-0.074853) | 0.164031 / 0.737135 (-0.573104) | 0.112497 / 0.296338 (-0.183841) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487150 / 0.215209 (0.271941) | 4.662335 / 2.077655 (2.584681) | 2.477285 / 1.504120 (0.973165) | 2.294033 / 1.541195 (0.752838) | 2.380143 / 1.468490 (0.911653) | 0.519182 / 4.584777 (-4.065595) | 3.983589 / 3.745712 (0.237877) | 3.669895 / 5.269862 (-1.599967) | 2.267147 / 4.565676 (-2.298529) | 0.063300 / 0.424275 (-0.360975) | 0.008839 / 0.007607 (0.001232) | 0.566766 / 0.226044 (0.340721) | 5.533475 / 2.268929 (3.264546) | 3.033412 / 55.444624 (-52.411212) | 2.701793 / 6.876477 (-4.174684) | 2.899444 / 2.142072 (0.757372) | 0.614236 / 4.805227 (-4.190991) | 0.139533 / 6.500664 (-6.361131) | 0.067537 / 0.075469 (-0.007932) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505572 / 1.841788 (-0.336216) | 22.859062 / 8.074308 (14.784754) | 15.044777 / 10.191392 (4.853385) | 0.169153 / 0.680424 (-0.511271) | 0.021027 / 0.534201 (-0.513174) | 0.447979 / 0.579283 (-0.131304) | 0.460676 / 0.434364 (0.026312) | 0.506327 / 0.540337 (-0.034010) | 0.737880 / 1.386936 (-0.649057) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db7180eb7e3ebf52b9d1f2c6629db6d92d8a29ba \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003692 / 0.011008 (-0.007316) | 0.080606 / 0.038508 (0.042098) | 0.062014 / 0.023109 (0.038905) | 0.391886 / 0.275898 (0.115988) | 0.423978 / 0.323480 (0.100498) | 0.004968 / 0.007986 (-0.003017) | 0.002911 / 0.004328 (-0.001417) | 0.062867 / 0.004250 (0.058617) | 0.049493 / 0.037052 (0.012441) | 0.395656 / 0.258489 (0.137167) | 0.432406 / 0.293841 (0.138565) | 0.027242 / 0.128546 (-0.101304) | 0.007938 / 0.075646 (-0.067709) | 0.261703 / 0.419271 (-0.157569) | 0.045922 / 0.043533 (0.002389) | 0.391544 / 0.255139 (0.136405) | 0.417902 / 0.283200 (0.134703) | 0.021339 / 0.141683 (-0.120344) | 1.508391 / 1.452155 (0.056236) | 1.518970 / 1.492716 (0.026254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181159 / 0.018006 (0.163153) | 0.431402 / 0.000490 (0.430912) | 0.003849 / 0.000200 (0.003649) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024498 / 0.037411 (-0.012914) | 0.072758 / 0.014526 (0.058233) | 0.084910 / 0.176557 (-0.091646) | 0.148314 / 0.737135 (-0.588821) | 0.085212 / 0.296338 (-0.211126) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386693 / 0.215209 (0.171484) | 3.852652 / 2.077655 (1.774997) | 1.891758 / 1.504120 (0.387638) | 1.718793 / 1.541195 (0.177598) | 1.747595 / 1.468490 (0.279104) | 0.498593 / 4.584777 (-4.086184) | 3.057907 / 3.745712 (-0.687805) | 4.728449 / 5.269862 (-0.541413) | 2.966368 / 4.565676 (-1.599308) | 0.057538 / 0.424275 (-0.366737) | 0.006415 / 0.007607 (-0.001192) | 0.461652 / 0.226044 (0.235608) | 4.625944 / 2.268929 (2.357015) | 2.306938 / 55.444624 (-53.137686) | 1.974670 / 6.876477 (-4.901806) | 2.146327 / 2.142072 (0.004254) | 0.585033 / 4.805227 (-4.220195) | 0.125936 / 6.500664 (-6.374728) | 0.062365 / 0.075469 (-0.013104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263415 / 1.841788 (-0.578373) | 18.380651 / 8.074308 (10.306343) | 13.853410 / 10.191392 (3.662018) | 0.144674 / 0.680424 (-0.535749) | 0.016833 / 0.534201 (-0.517368) | 0.330812 / 0.579283 (-0.248471) | 0.357553 / 0.434364 (-0.076810) | 0.383529 / 0.540337 
(-0.156809) | 0.558923 / 1.386936 (-0.828013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006074 / 0.011353 (-0.005278) | 0.003655 / 0.011008 (-0.007353) | 0.062981 / 0.038508 (0.024473) | 0.061457 / 0.023109 (0.038348) | 0.366471 / 0.275898 (0.090573) | 0.408463 / 0.323480 (0.084983) | 0.004854 / 0.007986 (-0.003132) | 0.002916 / 0.004328 (-0.001412) | 0.062745 / 0.004250 (0.058494) | 0.051136 / 0.037052 (0.014084) | 0.380313 / 0.258489 (0.121824) | 0.416945 / 0.293841 (0.123104) | 0.027228 / 0.128546 (-0.101318) | 0.008031 / 0.075646 (-0.067615) | 0.067941 / 0.419271 (-0.351331) | 0.042886 / 0.043533 (-0.000647) | 0.370112 / 0.255139 (0.114973) | 0.397111 / 0.283200 (0.113911) | 0.023063 / 0.141683 (-0.118620) | 1.476955 / 1.452155 (0.024800) | 1.534783 / 1.492716 (0.042066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231462 / 0.018006 (0.213456) | 0.439559 / 0.000490 (0.439069) | 0.000364 / 0.000200 (0.000164) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026925 / 0.037411 (-0.010486) | 0.079623 / 0.014526 (0.065097) | 0.088694 / 0.176557 (-0.087862) | 0.143163 / 0.737135 (-0.593972) | 0.089900 / 0.296338 (-0.206438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451429 / 0.215209 (0.236220) | 4.510723 / 2.077655 (2.433069) | 2.491853 / 1.504120 (0.987733) | 2.334670 / 1.541195 (0.793475) | 2.395519 
/ 1.468490 (0.927029) | 0.501369 / 4.584777 (-4.083408) | 3.014019 / 3.745712 (-0.731693) | 2.809199 / 5.269862 (-2.460662) | 1.842195 / 4.565676 (-2.723481) | 0.057675 / 0.424275 (-0.366600) | 0.006742 / 0.007607 (-0.000865) | 0.524402 / 0.226044 (0.298358) | 5.245296 / 2.268929 (2.976367) | 2.957990 / 55.444624 (-52.486634) | 2.649807 / 6.876477 (-4.226670) | 2.755909 / 2.142072 (0.613836) | 0.589610 / 4.805227 (-4.215617) | 0.125708 / 6.500664 (-6.374956) | 0.062237 / 0.075469 (-0.013232) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362758 / 1.841788 (-0.479030) | 18.343694 / 8.074308 (10.269386) | 13.621521 / 10.191392 (3.430129) | 0.128866 / 0.680424 (-0.551558) | 0.016608 / 0.534201 (-0.517593) | 0.333071 / 0.579283 (-0.246212) | 0.341917 / 0.434364 (-0.092447) | 0.381075 / 0.540337 (-0.159263) | 0.512485 / 1.386936 (-0.874451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab3f0165d4a2a8ab1aee1ebc4628893e17e27387 \"CML watermark\")\n", "I forgot to mention this in the initial comment, but only one public dataset (excluding gated) uses this method - `pg19`, which I just fixed.\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.004791 / 0.011008 (-0.006217) | 0.102596 / 0.038508 (0.064088) | 0.087678 / 0.023109 (0.064569) | 0.373858 / 0.275898 (0.097960) | 0.416643 / 0.323480 (0.093163) | 0.006147 / 0.007986 (-0.001839) | 0.003837 / 0.004328 (-0.000491) | 0.076706 / 0.004250 (0.072456) | 0.063449 / 0.037052 (0.026396) | 0.378392 / 0.258489 (0.119903) | 0.431768 / 0.293841 (0.137927) | 0.036648 / 0.128546 (-0.091898) | 0.010042 / 0.075646 (-0.065604) | 0.350277 / 0.419271 (-0.068995) | 0.062892 / 0.043533 (0.019359) | 0.376151 / 0.255139 (0.121012) | 0.420929 / 0.283200 (0.137729) | 0.027816 / 0.141683 (-0.113867) | 1.791607 / 1.452155 (0.339452) | 1.903045 / 1.492716 (0.410328) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row 
| get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224688 / 0.018006 (0.206682) | 0.491941 / 0.000490 (0.491451) | 0.004482 / 0.000200 (0.004282) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033495 / 0.037411 (-0.003917) | 0.099855 / 0.014526 (0.085329) | 0.114593 / 0.176557 (-0.061964) | 0.190947 / 0.737135 (-0.546189) | 0.116202 / 0.296338 (-0.180136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488581 / 0.215209 (0.273372) | 4.869531 / 2.077655 (2.791876) | 2.527920 / 1.504120 (1.023800) | 2.340021 / 1.541195 (0.798826) | 2.432661 / 1.468490 (0.964171) | 0.569646 / 4.584777 (-4.015131) | 4.392036 / 3.745712 (0.646324) | 4.987253 / 5.269862 (-0.282608) | 2.866604 / 4.565676 (-1.699073) | 0.067393 / 0.424275 (-0.356882) | 0.008759 / 0.007607 (0.001152) | 0.584327 / 0.226044 (0.358283) | 5.853000 / 2.268929 (3.584072) | 3.206721 / 55.444624 (-52.237904) | 2.730867 / 6.876477 (-4.145610) | 2.944814 / 2.142072 (0.802742) | 0.703336 / 4.805227 (-4.101891) | 0.173985 / 6.500664 (-6.326679) | 0.075333 / 0.075469 (-0.000137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.519755 / 1.841788 (-0.322033) | 22.918038 / 8.074308 (14.843730) | 17.211160 / 10.191392 (7.019768) | 0.196941 / 0.680424 (-0.483483) | 0.021833 / 0.534201 (-0.512368) | 0.476835 / 0.579283 (-0.102448) | 0.464513 / 0.434364 (0.030149) | 0.559180 / 0.540337 (0.018843) | 0.748232 / 1.386936 (-0.638704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008461 / 0.011353 (-0.002892) | 0.004799 / 0.011008 (-0.006209) | 0.077466 / 0.038508 (0.038958) | 0.103562 / 0.023109 (0.080453) | 0.453661 / 0.275898 (0.177763) | 0.531126 / 0.323480 (0.207647) | 0.006618 / 0.007986 (-0.001367) | 0.004048 / 0.004328 (-0.000280) | 0.075446 / 0.004250 (0.071196) | 0.072815 / 0.037052 (0.035762) | 0.497145 / 0.258489 (0.238656) | 0.533828 / 0.293841 (0.239987) | 0.037657 / 0.128546 (-0.090890) | 0.010139 / 0.075646 (-0.065507) | 0.083759 / 0.419271 (-0.335512) | 0.061401 / 0.043533 (0.017868) | 0.441785 / 0.255139 (0.186646) | 0.491678 / 0.283200 (0.208479) | 0.033100 / 0.141683 (-0.108583) | 1.753612 / 1.452155 (0.301458) | 1.838956 / 1.492716 (0.346240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.395023 / 0.018006 (0.377017) | 0.509362 / 0.000490 (0.508872) | 0.060742 / 0.000200 (0.060542) | 0.000545 / 0.000054 (0.000491) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039327 / 0.037411 (0.001916) | 0.117345 / 0.014526 (0.102819) | 0.124540 / 0.176557 (-0.052017) | 0.200743 / 0.737135 (-0.536392) | 0.126750 / 0.296338 (-0.169589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488597 / 0.215209 (0.273388) | 4.875534 / 2.077655 (2.797880) | 2.714364 / 1.504120 (1.210244) | 2.603707 / 1.541195 (1.062513) | 2.733547 / 1.468490 (1.265057) | 0.575183 / 4.584777 (-4.009594) | 4.126096 / 3.745712 (0.380384) | 3.853803 / 5.269862 (-1.416058) | 2.395160 / 4.565676 (-2.170516) | 0.067391 / 0.424275 (-0.356884) | 0.009108 / 0.007607 (0.001501) | 0.585865 / 0.226044 (0.359820) | 5.864878 / 2.268929 (3.595949) | 3.153369 / 55.444624 (-52.291256) | 2.759064 / 6.876477 (-4.117413) | 3.032489 / 2.142072 (0.890416) | 0.702615 / 4.805227 (-4.102613) | 0.160034 / 6.500664 (-6.340630) | 0.077294 / 0.075469 (0.001825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595069 / 1.841788 (-0.246719) | 23.231191 / 8.074308 (15.156883) | 16.365137 / 10.191392 (6.173745) | 0.188360 / 0.680424 (-0.492063) | 0.021704 / 0.534201 (-0.512497) | 0.469996 / 0.579283 (-0.109287) | 0.463255 / 
0.434364 (0.028891) | 0.560506 / 0.540337 (0.020169) | 0.751006 / 1.386936 (-0.635930) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#50d9a70c666ff46ff9974c47cedc77d9f88d6471 \"CML watermark\")\n" ]
2023-07-28T10:49:06
2023-07-28T11:40:37
2023-07-28T11:30:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6093", "html_url": "https://github.com/huggingface/datasets/pull/6093", "diff_url": "https://github.com/huggingface/datasets/pull/6093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6093.patch", "merged_at": "2023-07-28T11:30:02" }
Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead. We should deprecate this method as it's not compatible with streaming, and implementing a streaming version of it is hard/impossible. There have been requests on the forum to implement a streaming version of this method, but the reason for them seems to be a tip in the docs that "promotes" this method (this PR removes that tip).
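For illustration, a minimal sketch of the `fsspec`/`requests` alternative mentioned in the PR description; the URL and file names below are placeholder assumptions, not from the PR:

```python
# Sketch of replacing a custom download with fsspec or requests.
# The URL and output path are hypothetical placeholders.
import fsspec
import requests

url = "https://example.com/data/train.jsonl"  # hypothetical remote file

# Option 1: read the remote file through fsspec (fsspec URLs are the
# cacheable path according to the PR description).
with fsspec.open(url, "rb") as remote_file:
    payload = remote_file.read()

# Option 2: a direct request with `requests` (not cacheable by the manager).
response = requests.get(url, timeout=60)
response.raise_for_status()
with open("train.jsonl", "wb") as local_file:
    local_file.write(response.content)
```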
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6093/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6092/comments
https://api.github.com/repos/huggingface/datasets/issues/6092/events
https://github.com/huggingface/datasets/pull/6092
1,826,111,806
PR_kwDODunzps5Wo1mh
6,092
Minor fix in `iter_files` for hidden files
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007873 / 0.011353 (-0.003480) | 0.004585 / 0.011008 (-0.006423) | 0.101622 / 0.038508 (0.063114) | 0.092459 / 0.023109 (0.069350) | 0.365157 / 0.275898 (0.089259) | 0.405943 / 0.323480 (0.082463) | 0.006229 / 0.007986 (-0.001756) | 0.003811 / 0.004328 (-0.000518) | 0.073831 / 0.004250 (0.069580) | 0.065097 / 0.037052 (0.028045) | 0.378912 / 0.258489 (0.120423) | 0.422174 / 0.293841 (0.128333) | 0.036244 / 0.128546 (-0.092302) | 0.009677 / 0.075646 (-0.065970) | 0.345164 / 0.419271 (-0.074107) | 0.061632 / 0.043533 (0.018099) | 0.370350 / 0.255139 (0.115211) | 0.418245 / 0.283200 (0.135046) | 0.027272 / 0.141683 (-0.114411) | 1.774047 / 1.452155 (0.321892) | 1.880278 / 1.492716 (0.387562) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217238 / 0.018006 (0.199231) | 0.489560 / 0.000490 (0.489071) | 0.004013 / 0.000200 (0.003813) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034139 / 0.037411 (-0.003272) | 0.103831 / 0.014526 (0.089305) | 0.114353 / 0.176557 (-0.062204) | 0.182034 / 0.737135 (-0.555102) | 0.116171 / 0.296338 (-0.180168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448658 / 0.215209 (0.233449) | 4.520849 / 2.077655 (2.443195) | 
2.216121 / 1.504120 (0.712001) | 2.034596 / 1.541195 (0.493402) | 2.193216 / 1.468490 (0.724725) | 0.568166 / 4.584777 (-4.016611) | 4.133587 / 3.745712 (0.387875) | 4.641117 / 5.269862 (-0.628744) | 2.772913 / 4.565676 (-1.792764) | 0.067664 / 0.424275 (-0.356611) | 0.008719 / 0.007607 (0.001112) | 0.547723 / 0.226044 (0.321678) | 5.438325 / 2.268929 (3.169397) | 2.877667 / 55.444624 (-52.566958) | 2.477503 / 6.876477 (-4.398974) | 2.688209 / 2.142072 (0.546136) | 0.692593 / 4.805227 (-4.112634) | 0.154549 / 6.500664 (-6.346115) | 0.073286 / 0.075469 (-0.002183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610927 / 1.841788 (-0.230861) | 23.413345 / 8.074308 (15.339037) | 16.851819 / 10.191392 (6.660427) | 0.170076 / 0.680424 (-0.510348) | 0.021428 / 0.534201 (-0.512773) | 0.468184 / 0.579283 (-0.111099) | 0.491820 / 0.434364 (0.057456) | 0.553453 / 0.540337 (0.013115) | 0.762303 / 1.386936 (-0.624633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008033 / 0.011353 (-0.003320) | 0.004638 / 0.011008 (-0.006370) | 0.077044 / 0.038508 (0.038536) | 0.096529 / 0.023109 (0.073420) | 0.428735 / 0.275898 (0.152837) | 0.477303 / 0.323480 (0.153823) | 0.006040 / 0.007986 (-0.001946) | 0.003808 / 0.004328 (-0.000521) | 0.076042 / 0.004250 (0.071791) | 0.066123 / 0.037052 (0.029071) | 0.445482 / 0.258489 (0.186993) | 0.481350 / 0.293841 (0.187509) | 0.036951 / 0.128546 (-0.091595) | 0.009944 / 0.075646 (-0.065703) | 0.082731 / 0.419271 (-0.336541) | 0.057490 / 0.043533 (0.013958) | 0.432668 / 0.255139 (0.177529) | 0.461146 / 0.283200 (0.177947) | 0.027330 / 0.141683 (-0.114353) | 1.784195 / 1.452155 (0.332040) | 1.834776 / 1.492716 (0.342059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254104 / 0.018006 (0.236097) | 0.475810 / 0.000490 (0.475321) | 0.000459 / 0.000200 (0.000259) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037058 / 0.037411 (-0.000353) | 0.114962 / 0.014526 (0.100436) | 0.123725 / 0.176557 (-0.052832) | 0.188885 / 0.737135 (-0.548251) | 0.125668 / 0.296338 (-0.170670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492627 / 0.215209 (0.277418) | 4.900625 / 2.077655 (2.822970) | 2.546349 / 1.504120 (1.042229) | 2.360350 / 1.541195 (0.819155) | 2.477975 / 1.468490 (1.009485) | 0.574042 / 4.584777 (-4.010735) | 4.408414 / 3.745712 (0.662702) | 3.836640 / 5.269862 (-1.433222) | 2.438450 / 4.565676 (-2.127227) | 0.067706 / 0.424275 (-0.356569) | 0.009165 / 0.007607 (0.001558) | 0.580313 / 0.226044 (0.354269) | 5.798211 / 2.268929 (3.529283) | 3.098480 / 55.444624 (-52.346145) | 2.740180 / 6.876477 (-4.136296) | 2.984548 / 2.142072 (0.842476) | 0.702550 / 4.805227 (-4.102677) | 0.158248 / 6.500664 (-6.342416) | 0.073999 / 0.075469 (-0.001470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.636034 / 1.841788 (-0.205754) | 24.068000 / 8.074308 (15.993692) | 17.123987 / 10.191392 (6.932595) | 0.210101 / 0.680424 (-0.470323) | 0.022555 / 0.534201 (-0.511646) | 0.509354 / 0.579283 (-0.069929) | 0.540739 / 0.434364 (0.106375) | 0.546048 / 0.540337 (0.005711) | 0.719155 / 1.386936 (-0.667781) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40530382ba98f54445de8820943b1236d4a4704f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007342 / 0.011353 (-0.004010) | 0.004579 / 0.011008 (-0.006429) | 0.087050 / 0.038508 (0.048542) | 0.089001 / 0.023109 (0.065892) | 0.307319 / 0.275898 (0.031421) | 0.377573 / 0.323480 (0.054093) | 0.006472 / 0.007986 (-0.001514) | 0.004287 / 0.004328 (-0.000041) | 0.067226 / 0.004250 (0.062976) | 0.063147 / 0.037052 (0.026094) | 0.314541 / 0.258489 (0.056052) | 0.369919 / 0.293841 (0.076078) | 0.031283 / 0.128546 (-0.097263) | 0.009175 / 0.075646 (-0.066471) | 0.289211 / 0.419271 (-0.130061) | 0.053444 / 0.043533 (0.009911) | 0.307308 / 0.255139 (0.052169) | 0.346221 / 0.283200 (0.063021) | 0.027948 / 0.141683 (-0.113735) | 1.475177 / 1.452155 (0.023022) | 1.575971 / 1.492716 (0.083255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291092 / 0.018006 (0.273086) | 0.696951 / 0.000490 (0.696461) | 0.005211 / 0.000200 (0.005011) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031787 / 0.037411 (-0.005625) | 0.084382 / 0.014526 (0.069857) | 0.106474 / 0.176557 (-0.070083) | 0.161472 / 0.737135 (-0.575663) | 0.108650 / 0.296338 (-0.187688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379656 / 0.215209 (0.164447) | 3.784072 / 2.077655 (1.706417) | 1.826580 / 1.504120 (0.322460) | 1.654916 / 1.541195 (0.113721) | 1.730698 / 1.468490 (0.262208) | 0.478003 / 4.584777 (-4.106774) | 3.564920 / 3.745712 (-0.180792) | 5.824873 / 5.269862 (0.555012) | 3.454563 / 4.565676 (-1.111113) | 0.056646 / 0.424275 (-0.367629) | 0.007410 / 0.007607 (-0.000197) | 0.461781 / 0.226044 (0.235737) | 4.600928 / 2.268929 (2.331999) | 2.351887 / 55.444624 (-53.092738) | 1.986470 / 6.876477 (-4.890007) | 2.311623 / 2.142072 (0.169551) | 0.571247 / 4.805227 (-4.233980) | 0.132191 / 6.500664 (-6.368473) | 0.059943 / 0.075469 (-0.015526) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253142 / 1.841788 (-0.588646) | 21.294983 / 8.074308 (13.220675) | 14.522429 / 10.191392 (4.331037) | 0.166663 / 0.680424 (-0.513761) | 0.019694 / 0.534201 (-0.514507) | 0.395908 / 0.579283 (-0.183375) | 0.413283 / 0.434364 (-0.021081) | 0.457739 / 0.540337 
(-0.082599) | 0.664361 / 1.386936 (-0.722575) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007228 / 0.011353 (-0.004124) | 0.004941 / 0.011008 (-0.006067) | 0.065381 / 0.038508 (0.026873) | 0.090790 / 0.023109 (0.067681) | 0.391315 / 0.275898 (0.115417) | 0.416518 / 0.323480 (0.093038) | 0.007015 / 0.007986 (-0.000970) | 0.004417 / 0.004328 (0.000089) | 0.067235 / 0.004250 (0.062985) | 0.068092 / 0.037052 (0.031039) | 0.403031 / 0.258489 (0.144542) | 0.434013 / 0.293841 (0.140172) | 0.032004 / 0.128546 (-0.096542) | 0.009242 / 0.075646 (-0.066404) | 0.071222 / 0.419271 (-0.348050) | 0.054207 / 0.043533 (0.010674) | 0.386198 / 0.255139 (0.131059) | 0.404350 / 0.283200 (0.121150) | 0.036284 / 0.141683 (-0.105399) | 1.488814 / 1.452155 (0.036660) | 1.587785 / 1.492716 (0.095069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313760 / 0.018006 (0.295754) | 0.747778 / 0.000490 (0.747289) | 0.003307 / 0.000200 (0.003107) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034321 / 0.037411 (-0.003090) | 0.088266 / 0.014526 (0.073740) | 0.112874 / 0.176557 (-0.063682) | 0.171554 / 0.737135 (-0.565581) | 0.111356 / 0.296338 (-0.184982) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422624 / 0.215209 (0.207415) | 4.212079 / 2.077655 (2.134425) | 2.242742 / 1.504120 (0.738622) | 2.072555 / 1.541195 (0.531360) | 2.192648 / 
1.468490 (0.724158) | 0.488214 / 4.584777 (-4.096563) | 3.597013 / 3.745712 (-0.148699) | 3.477556 / 5.269862 (-1.792305) | 2.184340 / 4.565676 (-2.381337) | 0.057170 / 0.424275 (-0.367105) | 0.007772 / 0.007607 (0.000165) | 0.499455 / 0.226044 (0.273411) | 4.988953 / 2.268929 (2.720024) | 2.797894 / 55.444624 (-52.646731) | 2.402215 / 6.876477 (-4.474262) | 2.725069 / 2.142072 (0.582997) | 0.596213 / 4.805227 (-4.209014) | 0.136564 / 6.500664 (-6.364100) | 0.061799 / 0.075469 (-0.013670) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.360739 / 1.841788 (-0.481049) | 21.846457 / 8.074308 (13.772149) | 14.568842 / 10.191392 (4.377450) | 0.168980 / 0.680424 (-0.511444) | 0.018795 / 0.534201 (-0.515406) | 0.396173 / 0.579283 (-0.183110) | 0.418651 / 0.434364 (-0.015713) | 0.480042 / 0.540337 (-0.060295) | 0.650803 / 1.386936 (-0.736133) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7d460304487d4daab0a64ca0ca707e896367ca1 \"CML watermark\")\n" ]
2023-07-28T09:50:12
2023-07-28T10:59:28
2023-07-28T10:50:10
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6092", "html_url": "https://github.com/huggingface/datasets/pull/6092", "diff_url": "https://github.com/huggingface/datasets/pull/6092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6092.patch", "merged_at": "2023-07-28T10:50:09" }
Fix #6090
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6092/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6091/comments
https://api.github.com/repos/huggingface/datasets/issues/6091/events
https://github.com/huggingface/datasets/pull/6091
1,826,086,487
PR_kwDODunzps5Wov9Q
6,091
Bump fsspec from 2021.11.1 to 2022.3.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004077 / 0.011008 (-0.006931) | 0.084905 / 0.038508 (0.046397) | 0.074004 / 0.023109 (0.050895) | 0.315968 / 0.275898 (0.040070) | 0.351594 / 0.323480 (0.028114) | 0.005623 / 0.007986 (-0.002362) | 0.003476 / 0.004328 (-0.000852) | 0.065089 / 0.004250 (0.060839) | 0.054683 / 0.037052 (0.017631) | 0.314983 / 0.258489 (0.056494) | 0.371776 / 0.293841 (0.077935) | 0.031727 / 0.128546 (-0.096819) | 0.008786 / 0.075646 (-0.066860) | 0.289905 / 0.419271 (-0.129367) | 0.053340 / 0.043533 (0.009807) | 0.311802 / 0.255139 (0.056663) | 0.351927 / 0.283200 (0.068727) | 0.024453 / 0.141683 (-0.117229) | 1.491727 / 1.452155 (0.039572) | 1.585027 / 1.492716 (0.092310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238757 / 0.018006 (0.220750) | 0.557691 / 0.000490 (0.557202) | 0.005158 / 0.000200 (0.004958) | 0.000204 / 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028435 / 0.037411 (-0.008977) | 0.082219 / 0.014526 (0.067693) | 0.096932 / 0.176557 (-0.079625) | 0.153802 / 0.737135 (-0.583333) | 0.098338 / 0.296338 (-0.198001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383448 / 0.215209 (0.168238) | 3.816074 / 2.077655 (1.738420) | 
1.835111 / 1.504120 (0.330991) | 1.662326 / 1.541195 (0.121131) | 1.720202 / 1.468490 (0.251712) | 0.483107 / 4.584777 (-4.101669) | 3.648528 / 3.745712 (-0.097184) | 4.020929 / 5.269862 (-1.248932) | 2.433141 / 4.565676 (-2.132536) | 0.057081 / 0.424275 (-0.367194) | 0.007303 / 0.007607 (-0.000304) | 0.461366 / 0.226044 (0.235322) | 4.609090 / 2.268929 (2.340162) | 2.355940 / 55.444624 (-53.088684) | 1.989833 / 6.876477 (-4.886644) | 2.201451 / 2.142072 (0.059378) | 0.586156 / 4.805227 (-4.219071) | 0.133486 / 6.500664 (-6.367178) | 0.060062 / 0.075469 (-0.015407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247845 / 1.841788 (-0.593942) | 19.624252 / 8.074308 (11.549944) | 14.305975 / 10.191392 (4.114583) | 0.168687 / 0.680424 (-0.511737) | 0.018075 / 0.534201 (-0.516126) | 0.393859 / 0.579283 (-0.185424) | 0.407272 / 0.434364 (-0.027092) | 0.463760 / 0.540337 (-0.076578) | 0.629930 / 1.386936 (-0.757006) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006760 / 0.011353 (-0.004593) | 0.004345 / 0.011008 (-0.006663) | 0.064379 / 0.038508 (0.025871) | 0.078295 / 0.023109 (0.055186) | 0.364532 / 0.275898 (0.088633) | 0.395852 / 0.323480 (0.072372) | 0.005659 / 0.007986 (-0.002327) | 0.003515 / 0.004328 (-0.000813) | 0.065030 / 0.004250 (0.060780) | 0.059950 / 0.037052 (0.022898) | 0.375420 / 0.258489 (0.116931) | 0.411579 / 0.293841 (0.117738) | 0.031575 / 0.128546 (-0.096972) | 0.008737 / 0.075646 (-0.066910) | 0.070350 / 0.419271 (-0.348922) | 0.050607 / 0.043533 (0.007075) | 0.359785 / 0.255139 (0.104646) | 0.382638 / 0.283200 (0.099438) | 0.025533 / 0.141683 (-0.116150) | 1.564379 / 1.452155 (0.112225) | 1.620642 / 1.492716 (0.127925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212779 / 0.018006 (0.194773) | 0.563827 / 0.000490 (0.563337) | 0.003767 / 0.000200 (0.003567) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030275 / 0.037411 (-0.007136) | 0.088108 / 0.014526 (0.073582) | 0.102454 / 0.176557 (-0.074103) | 0.156107 / 0.737135 (-0.581028) | 0.103961 / 0.296338 (-0.192378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421395 / 0.215209 (0.206186) | 4.204935 / 2.077655 (2.127280) | 2.144929 / 1.504120 (0.640809) | 1.999341 / 1.541195 (0.458147) | 2.066966 / 1.468490 (0.598476) | 0.486135 / 4.584777 (-4.098642) | 3.628139 / 3.745712 (-0.117573) | 5.652683 / 5.269862 (0.382821) | 3.216721 / 4.565676 (-1.348956) | 0.057513 / 0.424275 (-0.366762) | 0.007553 / 0.007607 (-0.000055) | 0.494470 / 0.226044 (0.268426) | 4.949343 / 2.268929 (2.680414) | 2.654222 / 55.444624 (-52.790402) | 2.322257 / 6.876477 (-4.554220) | 2.555633 / 2.142072 (0.413561) | 0.588355 / 4.805227 (-4.216872) | 0.134481 / 6.500664 (-6.366183) | 0.062415 / 0.075469 (-0.013054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377578 / 1.841788 (-0.464209) | 19.805201 / 8.074308 (11.730893) | 14.128536 / 10.191392 (3.937144) | 0.164343 / 0.680424 (-0.516081) | 0.018553 / 0.534201 (-0.515648) | 0.398191 / 0.579283 (-0.181093) | 0.414268 / 0.434364 (-0.020096) | 0.462270 / 0.540337 (-0.078068) | 0.608497 / 1.386936 (-0.778439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3af05ba487f361fae90a4c80af72de5c4ed70162 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006966 / 0.011353 (-0.004387) | 0.004339 / 0.011008 (-0.006669) | 0.086682 / 0.038508 (0.048174) | 0.086143 / 0.023109 (0.063034) | 0.316106 / 0.275898 (0.040208) | 0.351422 / 0.323480 (0.027942) | 0.005916 / 0.007986 (-0.002069) | 0.003630 / 0.004328 (-0.000698) | 0.066980 / 0.004250 (0.062730) | 0.060031 / 0.037052 (0.022979) | 0.317487 / 0.258489 (0.058998) | 0.356280 / 0.293841 (0.062439) | 0.031816 / 0.128546 (-0.096730) | 0.008797 / 0.075646 (-0.066849) | 0.289848 / 0.419271 (-0.129424) | 0.055431 / 0.043533 (0.011898) | 0.318881 / 0.255139 (0.063742) | 0.332315 / 0.283200 (0.049116) | 0.025946 / 0.141683 (-0.115737) | 1.472904 / 1.452155 (0.020749) | 1.577973 / 1.492716 (0.085257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239056 / 0.018006 (0.221050) | 0.565406 / 0.000490 (0.564917) | 0.003606 / 0.000200 (0.003406) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029771 / 0.037411 (-0.007640) | 0.085534 / 0.014526 (0.071008) | 0.107008 / 0.176557 (-0.069548) | 0.631583 / 0.737135 (-0.105552) | 0.104210 / 0.296338 (-0.192128) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390675 / 0.215209 (0.175466) | 3.898746 / 2.077655 (1.821091) | 1.933048 / 1.504120 (0.428928) | 1.792162 / 1.541195 (0.250967) | 1.958045 / 1.468490 (0.489555) | 0.488632 / 4.584777 (-4.096144) | 3.696306 / 3.745712 (-0.049406) | 3.454600 / 5.269862 (-1.815262) | 2.176292 / 4.565676 (-2.389385) | 0.057617 / 0.424275 (-0.366658) | 0.007603 / 0.007607 (-0.000004) | 0.467843 / 0.226044 (0.241798) | 4.672928 / 2.268929 (2.404000) | 2.441096 / 55.444624 (-53.003529) | 2.133506 / 6.876477 (-4.742970) | 2.431167 / 2.142072 (0.289095) | 0.588567 / 4.805227 (-4.216661) | 0.136070 / 6.500664 (-6.364594) | 0.063395 / 0.075469 (-0.012074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255003 / 1.841788 (-0.586784) | 20.587656 / 8.074308 (12.513348) | 15.147817 / 10.191392 (4.956425) | 0.152039 / 0.680424 (-0.528384) | 0.018815 / 0.534201 (-0.515386) | 0.397458 / 0.579283 (-0.181825) | 0.431433 / 0.434364 (-0.002931) | 0.487890 / 0.540337 
(-0.052448) | 0.675367 / 1.386936 (-0.711569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007209 / 0.011353 (-0.004144) | 0.004372 / 0.011008 (-0.006636) | 0.066288 / 0.038508 (0.027780) | 0.091776 / 0.023109 (0.068667) | 0.390724 / 0.275898 (0.114826) | 0.434711 / 0.323480 (0.111231) | 0.005790 / 0.007986 (-0.002196) | 0.003562 / 0.004328 (-0.000767) | 0.066155 / 0.004250 (0.061904) | 0.062459 / 0.037052 (0.025406) | 0.406622 / 0.258489 (0.148133) | 0.433976 / 0.293841 (0.140135) | 0.032590 / 0.128546 (-0.095957) | 0.008856 / 0.075646 (-0.066790) | 0.072327 / 0.419271 (-0.346945) | 0.049958 / 0.043533 (0.006426) | 0.400164 / 0.255139 (0.145025) | 0.413339 / 0.283200 (0.130139) | 0.025283 / 0.141683 (-0.116399) | 1.487668 / 1.452155 (0.035514) | 1.537679 / 1.492716 (0.044962) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257814 / 0.018006 (0.239808) | 0.571741 / 0.000490 (0.571251) | 0.000412 / 0.000200 (0.000212) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033893 / 0.037411 (-0.003518) | 0.094533 / 0.014526 (0.080008) | 0.105876 / 0.176557 (-0.070680) | 0.158675 / 0.737135 (-0.578460) | 0.107790 / 0.296338 (-0.188548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425796 / 0.215209 (0.210587) | 4.229159 / 2.077655 (2.151505) | 2.239613 / 1.504120 (0.735493) | 2.073830 / 1.541195 (0.532635) | 2.185508 
/ 1.468490 (0.717018) | 0.483984 / 4.584777 (-4.100793) | 3.645575 / 3.745712 (-0.100137) | 3.454767 / 5.269862 (-1.815095) | 2.141387 / 4.565676 (-2.424290) | 0.057570 / 0.424275 (-0.366705) | 0.007901 / 0.007607 (0.000294) | 0.501160 / 0.226044 (0.275116) | 5.012283 / 2.268929 (2.743355) | 2.701267 / 55.444624 (-52.743357) | 2.465409 / 6.876477 (-4.411068) | 2.696812 / 2.142072 (0.554739) | 0.587160 / 4.805227 (-4.218067) | 0.134175 / 6.500664 (-6.366489) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345632 / 1.841788 (-0.496155) | 21.077279 / 8.074308 (13.002971) | 14.700826 / 10.191392 (4.509434) | 0.156191 / 0.680424 (-0.524233) | 0.018991 / 0.534201 (-0.515210) | 0.400413 / 0.579283 (-0.178870) | 0.420597 / 0.434364 (-0.013767) | 0.486534 / 0.540337 (-0.053804) | 0.646606 / 1.386936 (-0.740330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5bb8fabb135ca8adf47151ad3de050e3a258ccab \"CML watermark\")\n" ]
2023-07-28T09:37:15
2023-07-28T10:16:11
2023-07-28T10:07:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6091", "html_url": "https://github.com/huggingface/datasets/pull/6091", "diff_url": "https://github.com/huggingface/datasets/pull/6091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6091.patch", "merged_at": "2023-07-28T10:07:02" }
Fix https://github.com/huggingface/datasets/issues/6087 (Colab installs 2023.6.0, so we should be good)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6091/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6090/comments
https://api.github.com/repos/huggingface/datasets/issues/6090/events
https://github.com/huggingface/datasets/issues/6090
1,825,865,043
I_kwDODunzps5s1H1T
6,090
FilesIterable skips all the files after a hidden file
{ "login": "dkrivosic", "id": 10785413, "node_id": "MDQ6VXNlcjEwNzg1NDEz", "avatar_url": "https://avatars.githubusercontent.com/u/10785413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkrivosic", "html_url": "https://github.com/dkrivosic", "followers_url": "https://api.github.com/users/dkrivosic/followers", "following_url": "https://api.github.com/users/dkrivosic/following{/other_user}", "gists_url": "https://api.github.com/users/dkrivosic/gists{/gist_id}", "starred_url": "https://api.github.com/users/dkrivosic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkrivosic/subscriptions", "organizations_url": "https://api.github.com/users/dkrivosic/orgs", "repos_url": "https://api.github.com/users/dkrivosic/repos", "events_url": "https://api.github.com/users/dkrivosic/events{/privacy}", "received_events_url": "https://api.github.com/users/dkrivosic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting. We've merged a PR with a fix." ]
2023-07-28T07:25:57
2023-07-28T10:51:14
2023-07-28T10:50:11
NONE
null
null
null
### Describe the bug When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file. The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`. ### Steps to reproduce the bug https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8- ### Expected behavior The script should print all the files except the hidden one. ### Environment info - `datasets` version: 2.14.1 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
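A minimal sketch of the bug described above, simplified rather than copied from the `datasets` source; `iter_files` and the hidden-file check are illustrative names only:

```python
# Simplified illustration of the hidden-file bug: a generator over file paths
# that should skip hidden files but must not stop at them.
import os
from typing import Iterable, Iterator


def iter_files(paths: Iterable[str]) -> Iterator[str]:
    for path in paths:
        if os.path.basename(path).startswith("."):
            # The buggy code used `return` here, which ended the generator
            # and silently dropped every file after the first hidden one.
            continue  # the fix: skip only this file and keep iterating
        yield path


print(list(iter_files(["a.txt", ".hidden", "b.txt"])))  # -> ['a.txt', 'b.txt']
```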
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6090/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6089/comments
https://api.github.com/repos/huggingface/datasets/issues/6089/events
https://github.com/huggingface/datasets/issues/6089
1,825,761,476
I_kwDODunzps5s0ujE
6,089
AssertionError: daemonic processes are not allowed to have children
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).", "> We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).\r\n\r\nGreat! Downloading takes more time than extracting; multiple threads can download in parallel, which can speed things up a lot." ]
2023-07-28T06:04:00
2023-07-31T02:34:02
null
NONE
null
null
null
### Describe the bug When I call load_dataset with num_proc > 0 in a daemon process, I get an error: ```python File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract return self.extract(self.download(url_or_urls)) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download downloaded_path_or_paths = map_nested( ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map return _map_with_multiprocessing_pool( ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool with Pool(num_proc, initargs=initargs, initializer=initializer) as pool: ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__ self._repopulate_pool() ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static w.start() File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start assert not _current_process._config.get('daemon'), ^^^^^^^^^^^^^^^^^ AssertionError: daemonic processes are not allowed to have children ``` Downloading is IO-intensive, so maybe datasets could replace the multiprocessing pool with a thread pool when running in a daemon process. ### Steps to reproduce the bug 1. Start a daemon process 2. Run load_dataset with num_proc > 0 ### Expected behavior No error. ### Environment info Python 3.11.4 datasets latest master
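A hedged sketch of the fallback suggested above (an illustration only, not the actual `datasets` implementation): detect a daemonic parent process and use a thread pool, which daemonic processes are allowed to create, for the IO-bound download step.

```python
# Illustrative fallback: daemonic processes cannot spawn child processes,
# but they can create threads, which is fine for IO-bound downloads.
import multiprocessing
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool


def parallel_map(func, items, num_proc):
    if multiprocessing.current_process().daemon:
        # Thread pool: allowed in a daemon process, good enough for IO work.
        with ThreadPoolExecutor(max_workers=num_proc) as executor:
            return list(executor.map(func, items))
    # Process pool: preferred for CPU-bound work such as decompression.
    with Pool(num_proc) as pool:
        return pool.map(func, items)
```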
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6089/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6138/comments
https://api.github.com/repos/huggingface/datasets/issues/6138/events
https://github.com/huggingface/datasets/pull/6138
1,844,952,496
PR_kwDODunzps5XoH2V
6,138
Ignore CI lint rule violation in Pickler.memoize
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.003890 / 0.011008 (-0.007118) | 0.084044 / 0.038508 (0.045536) | 0.071893 / 0.023109 (0.048784) | 0.346926 / 0.275898 (0.071028) | 0.397487 / 0.323480 (0.074007) | 0.004065 / 0.007986 (-0.003921) | 0.003218 / 0.004328 (-0.001111) | 0.064670 / 0.004250 (0.060420) | 0.052414 / 0.037052 (0.015362) | 0.355413 / 0.258489 (0.096924) | 0.398894 / 0.293841 (0.105053) | 0.030763 / 0.128546 (-0.097783) | 0.008590 / 0.075646 (-0.067056) | 0.286857 / 0.419271 (-0.132415) | 0.051126 / 0.043533 (0.007593) | 0.346125 / 0.255139 (0.090986) | 0.395673 / 0.283200 (0.112474) | 0.025766 / 0.141683 (-0.115917) | 1.466238 / 1.452155 (0.014084) | 1.543117 / 1.492716 (0.050400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213210 / 0.018006 (0.195204) | 0.451981 / 0.000490 (0.451491) | 0.003784 / 0.000200 (0.003585) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027756 / 0.037411 (-0.009655) | 0.082446 / 0.014526 (0.067920) | 0.095414 / 0.176557 (-0.081142) | 0.151812 / 0.737135 (-0.585323) | 0.096296 / 0.296338 (-0.200042) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383729 / 0.215209 (0.168520) | 3.835126 / 2.077655 (1.757471) | 1.891972 / 1.504120 (0.387852) | 1.719934 / 1.541195 (0.178739) | 1.899980 / 1.468490 
(0.431490) | 0.488741 / 4.584777 (-4.096036) | 3.634120 / 3.745712 (-0.111592) | 3.243314 / 5.269862 (-2.026547) | 2.028382 / 4.565676 (-2.537294) | 0.057355 / 0.424275 (-0.366920) | 0.007717 / 0.007607 (0.000110) | 0.459835 / 0.226044 (0.233790) | 4.591793 / 2.268929 (2.322864) | 2.346861 / 55.444624 (-53.097764) | 2.067357 / 6.876477 (-4.809120) | 2.254954 / 2.142072 (0.112882) | 0.587016 / 4.805227 (-4.218211) | 0.133918 / 6.500664 (-6.366746) | 0.060311 / 0.075469 (-0.015158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250016 / 1.841788 (-0.591772) | 19.674333 / 8.074308 (11.600025) | 14.522764 / 10.191392 (4.331372) | 0.145741 / 0.680424 (-0.534683) | 0.018593 / 0.534201 (-0.515608) | 0.392833 / 0.579283 (-0.186450) | 0.408194 / 0.434364 (-0.026170) | 0.455164 / 0.540337 (-0.085174) | 0.622722 / 1.386936 (-0.764214) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006583 / 0.011353 (-0.004770) | 0.004008 / 0.011008 (-0.007000) | 0.064688 / 0.038508 (0.026180) | 0.074969 / 0.023109 (0.051860) | 0.360504 / 0.275898 (0.084606) | 0.396926 / 0.323480 (0.073446) | 0.005190 / 0.007986 (-0.002796) | 0.003363 / 0.004328 (-0.000966) | 0.064372 / 0.004250 (0.060122) | 0.054428 / 0.037052 (0.017376) | 0.361204 / 0.258489 (0.102715) | 0.400917 / 0.293841 (0.107077) | 0.031117 / 0.128546 (-0.097429) | 0.008406 / 0.075646 (-0.067241) | 0.069655 / 0.419271 (-0.349617) | 0.048582 / 0.043533 (0.005049) | 0.365396 / 0.255139 (0.110257) | 0.381344 / 0.283200 (0.098145) | 0.023809 / 0.141683 (-0.117874) | 1.472926 / 1.452155 (0.020772) | 1.547298 / 1.492716 (0.054582) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276912 / 0.018006 (0.258906) | 0.449096 / 0.000490 (0.448607) | 0.018921 / 0.000200 (0.018721) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030237 / 0.037411 (-0.007174) | 0.088610 / 0.014526 (0.074084) | 0.101529 / 0.176557 (-0.075027) | 0.154070 / 0.737135 (-0.583065) | 0.103471 / 0.296338 (-0.192867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416047 / 0.215209 (0.200838) | 4.152374 / 2.077655 (2.074719) | 2.111181 / 1.504120 (0.607061) | 1.943582 / 1.541195 (0.402387) | 2.031729 / 1.468490 (0.563239) | 0.486740 / 4.584777 (-4.098037) | 3.631547 / 3.745712 (-0.114165) | 3.251202 / 5.269862 (-2.018660) | 2.041272 / 4.565676 (-2.524405) | 0.057287 / 0.424275 (-0.366988) | 0.007303 / 0.007607 (-0.000304) | 0.491027 / 0.226044 (0.264982) | 4.906757 / 2.268929 (2.637829) | 2.581694 / 55.444624 (-52.862931) | 2.250996 / 6.876477 (-4.625481) | 2.441771 / 2.142072 (0.299698) | 0.600714 / 4.805227 (-4.204514) | 0.133233 / 6.500664 (-6.367431) | 0.060856 / 0.075469 (-0.014613) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340062 / 1.841788 (-0.501725) | 19.973899 / 8.074308 (11.899591) | 14.347381 / 10.191392 (4.155989) | 0.166651 / 0.680424 (-0.513773) | 0.018691 / 0.534201 (-0.515510) | 0.393580 / 0.579283 (-0.185703) | 0.409425 / 0.434364 (-0.024939) | 0.474409 / 0.540337 (-0.065929) | 0.649423 / 1.386936 (-0.737514) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5da68102297c3639207a7901952d2765a4cdb8b \"CML watermark\")\n", "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6138). All of your documentation changes will be reflected on that endpoint." ]
2023-08-10T11:03:15
2023-08-10T11:10:42
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6138", "html_url": "https://github.com/huggingface/datasets/pull/6138", "diff_url": "https://github.com/huggingface/datasets/pull/6138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6138.patch", "merged_at": null }
This PR ignores the violation of the lint rule E721 in `Pickler.memoize`. The lint rule violation was introduced in this PR: - #3182 @lhoestq is there a reason you did not use `isinstance` instead? As a hotfix, we just ignore the violation of the lint rule. Fix #6136.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6138/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6137/comments
https://api.github.com/repos/huggingface/datasets/issues/6137/events
https://github.com/huggingface/datasets/issues/6137
1,844,952,312
I_kwDODunzps5t97z4
6,137
(`from_spark()`) Unable to connect HDFS in pyspark YARN setting
{ "login": "kyoungrok0517", "id": 1051900, "node_id": "MDQ6VXNlcjEwNTE5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoungrok0517", "html_url": "https://github.com/kyoungrok0517", "followers_url": "https://api.github.com/users/kyoungrok0517/followers", "following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}", "gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions", "organizations_url": "https://api.github.com/users/kyoungrok0517/orgs", "repos_url": "https://api.github.com/users/kyoungrok0517/repos", "events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoungrok0517/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T11:03:08
2023-08-10T11:03:08
null
NONE
null
null
null
### Describe the bug related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613 --- Hello. I'm trying to interact with HDFS storage from a driver and workers of pyspark YARN cluster. Precisely I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library that relies on pyarrow to communicate with HDFS. The `from_spark()` ([link](https://huggingface.co/docs/datasets/use_with_spark#load-from-spark)) is what I'm invoking in my script. Below is the error I'm encountering. Note that I've masked sensitive paths. My code is sent to worker containers (docker) from driver container then executed. I confirmed that in both driver and worker images I can connect to HDFS using pyarrow since the envs and required jars are properly set, but strangely that becomes impossible when the same image runs as remote worker process. These are some peculiarities in my environment that might caused this issue. * **Cluster requires kerberos authentication** * But I think the error message implies that's not the problem in this case * **The user that runs the worker process is different from that built the docker image** * To avoid permission-related issues I made all directories that are accessed from the script accessible to everyone * **Pyspark-part of my code has no problem interacting with HDFS.** * Even pyarrow doesn't experience problem when I run the code in interactive session of the same docker images (driver, worker) * The problem occurs only when it runs as cluster's worker runtime Hope I could get some help. Thanks. ```bash 2023-08-08 18:51:19,638 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-08-08 18:51:20,280 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 
23/08/08 18:51:22 WARN TaskSetManager: Lost task 0.0 in stage 142.0 (TID 9732) (ac3bax2062.bdp.bdata.ai executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at 
org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:24 WARN TaskSetManager: Lost task 0.1 in stage 142.0 (TID 9733) (ac3iax2079.bdp.bdata.ai executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File 
"pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:38 WARN TaskSetManager: Lost task 0.2 in stage 142.0 (TID 9734) (<MASKED> executor 4): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) ``` ### Steps to reproduce the bug Use the `from_spark()` function in a pyspark YARN setting. I set `cache_dir` to an HDFS path. ### Expected behavior Work as described in the documentation ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6137/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6136/comments
https://api.github.com/repos/huggingface/datasets/issues/6136/events
https://github.com/huggingface/datasets/issues/6136
1,844,887,866
I_kwDODunzps5t9sE6
6,136
CI check_code_quality error: E721 Do not compare types, use `isinstance()`
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2023-08-10T10:19:50
2023-08-10T10:19:50
null
MEMBER
null
null
null
After the latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error: ``` src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()` ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6136/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6135/comments
https://api.github.com/repos/huggingface/datasets/issues/6135/events
https://github.com/huggingface/datasets/pull/6135
1,844,870,943
PR_kwDODunzps5Xn2AT
6,135
Remove unused allowed_extensions param
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6135). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009055 / 0.011353 (-0.002298) | 0.008835 / 0.011008 (-0.002173) | 0.117048 / 0.038508 (0.078540) | 0.096268 / 0.023109 (0.073159) | 0.474678 / 0.275898 (0.198780) | 0.550509 / 0.323480 (0.227029) | 0.005552 / 0.007986 (-0.002434) | 0.004315 / 0.004328 (-0.000013) | 0.094336 / 0.004250 (0.090086) | 0.061945 / 0.037052 (0.024892) | 0.461422 / 0.258489 (0.202933) | 0.521271 / 0.293841 (0.227430) | 0.049116 / 0.128546 (-0.079430) | 0.015007 / 0.075646 (-0.060639) | 0.414351 / 0.419271 (-0.004920) | 0.137520 / 0.043533 (0.093987) | 0.465627 / 0.255139 (0.210488) | 0.537244 / 0.283200 (0.254044) | 0.068577 / 0.141683 (-0.073106) | 1.921373 / 1.452155 (0.469219) | 2.506653 / 1.492716 (1.013937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273970 / 0.018006 (0.255963) | 0.750295 / 0.000490 (0.749805) | 0.004241 / 0.000200 (0.004041) | 0.000128 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033793 / 0.037411 (-0.003618) | 0.105562 / 0.014526 (0.091037) | 0.131771 / 0.176557 (-0.044786) | 0.196890 / 0.737135 (-0.540245) | 0.119842 / 0.296338 (-0.176496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.634881 / 0.215209 (0.419672) | 6.069221 / 2.077655 (3.991566) | 2.678765 / 1.504120 (1.174646) | 2.460309 / 1.541195 (0.919114) | 2.517579 / 1.468490 (1.049089) | 0.869558 / 4.584777 (-3.715219) | 5.407686 / 3.745712 (1.661974) | 4.920687 / 5.269862 (-0.349175) | 3.130066 / 4.565676 (-1.435611) | 0.100337 / 0.424275 (-0.323938) | 0.009615 / 0.007607 (0.002008) | 0.745275 / 0.226044 (0.519231) | 7.577890 / 2.268929 (5.308962) | 3.607887 / 55.444624 (-51.836738) | 2.922211 / 6.876477 (-3.954266) | 3.205592 / 2.142072 (1.063519) | 1.052298 / 4.805227 (-3.752929) | 0.218798 / 6.500664 (-6.281866) | 0.082137 / 0.075469 (0.006667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696551 / 1.841788 (-0.145237) | 24.946074 / 8.074308 (16.871766) | 23.114202 / 10.191392 (12.922810) | 0.220498 / 0.680424 (-0.459925) | 0.029388 / 0.534201 (-0.504813) | 0.494721 / 0.579283 (-0.084562) | 0.603085 / 0.434364 (0.168722) | 0.573093 / 0.540337 (0.032756) | 0.784937 / 1.386936 (-0.601999) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009642 / 0.011353 (-0.001711) | 0.007551 / 0.011008 (-0.003457) | 0.085224 / 0.038508 (0.046716) | 0.099493 / 0.023109 (0.076384) | 0.503824 / 0.275898 (0.227926) | 0.546583 / 0.323480 (0.223103) | 0.006385 / 0.007986 (-0.001601) | 0.004751 / 0.004328 (0.000423) | 0.084699 / 0.004250 (0.080449) | 0.067875 / 0.037052 (0.030823) | 0.485313 / 0.258489 (0.226824) | 0.535808 / 0.293841 (0.241967) | 0.049935 / 0.128546 (-0.078611) | 0.014427 / 0.075646 (-0.061219) | 0.095531 / 0.419271 (-0.323741) | 0.068487 / 0.043533 (0.024954) | 0.502204 / 0.255139 (0.247065) | 0.514393 / 0.283200 (0.231193) | 0.037350 / 0.141683 (-0.104333) | 1.849380 / 1.452155 (0.397226) | 1.920151 / 1.492716 (0.427434) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298363 / 0.018006 (0.280357) | 0.651555 / 0.000490 (0.651065) | 0.005910 / 0.000200 
(0.005710) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039170 / 0.037411 (0.001758) | 0.106436 / 0.014526 (0.091910) | 0.129880 / 0.176557 (-0.046677) | 0.185401 / 0.737135 (-0.551734) | 0.125732 / 0.296338 (-0.170607) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643248 / 0.215209 (0.428039) | 6.374807 / 2.077655 (4.297152) | 3.057296 / 1.504120 (1.553176) | 2.779534 / 1.541195 (1.238340) | 2.790165 / 1.468490 (1.321675) | 0.841580 / 4.584777 (-3.743197) | 5.371478 / 3.745712 (1.625766) | 4.973251 / 5.269862 (-0.296610) | 3.235817 / 4.565676 (-1.329860) | 0.097276 / 0.424275 (-0.326999) | 0.008840 / 0.007607 (0.001233) | 0.728678 / 0.226044 (0.502634) | 7.526382 / 2.268929 (5.257454) | 3.792550 / 55.444624 (-51.652074) | 3.439134 / 6.876477 (-3.437342) | 3.466626 / 2.142072 (1.324553) | 1.035894 / 4.805227 (-3.769333) | 0.211670 / 6.500664 (-6.288994) | 0.087596 / 0.075469 (0.012127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.782755 / 1.841788 (-0.059033) | 25.704407 / 8.074308 (17.630099) | 23.799672 / 10.191392 (13.608280) | 0.233952 / 0.680424 (-0.446472) | 0.030810 / 0.534201 (-0.503391) | 0.505857 / 0.579283 (-0.073426) | 0.629331 / 0.434364 (0.194967) | 0.608530 / 0.540337 (0.068192) | 0.813688 / 1.386936 (-0.573248) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed4d6bb5f1331576c41b04acd9872a5349a0915c \"CML watermark\")\n" ]
2023-08-10T10:09:54
2023-08-10T10:22:54
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6135", "html_url": "https://github.com/huggingface/datasets/pull/6135", "diff_url": "https://github.com/huggingface/datasets/pull/6135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6135.patch", "merged_at": null }
This PR removes the unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6135/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6134/comments
https://api.github.com/repos/huggingface/datasets/issues/6134/events
https://github.com/huggingface/datasets/issues/6134
1,844,535,142
I_kwDODunzps5t8V9m
6,134
`datasets` cannot be installed alongside `apache-beam`
{ "login": "boyleconnor", "id": 6520892, "node_id": "MDQ6VXNlcjY1MjA4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boyleconnor", "html_url": "https://github.com/boyleconnor", "followers_url": "https://api.github.com/users/boyleconnor/followers", "following_url": "https://api.github.com/users/boyleconnor/following{/other_user}", "gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}", "starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions", "organizations_url": "https://api.github.com/users/boyleconnor/orgs", "repos_url": "https://api.github.com/users/boyleconnor/repos", "events_url": "https://api.github.com/users/boyleconnor/events{/privacy}", "received_events_url": "https://api.github.com/users/boyleconnor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T06:54:32
2023-08-10T06:55:46
null
NONE
null
null
null
### Describe the bug If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully, however, actually trying to something such as importing the `load_dataset` method from `datasets` results in a crashing error. I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, but the latest version of `multiprocess` (0.70.15) (on which `datasets` depends) requires `dill>=0.3.7,`, so this is causing the dependency resolver to use an older version of `multiprocess` which leads to the `datasets` crashing since it doesn't actually appear to be compatible with older versions. ### Steps to reproduce the bug See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug. In some environments, I have been able to reproduce the bug by running the following in Bash: ```bash $ pip install datasets apache-beam ``` then the following in a Python shell: ```python from datasets import load_dataset ``` Here is my stacktrace from running on Google Colab: <details> <summary>stacktrace</summary> ``` [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.14.4" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 64 65 from . import config ---> 66 from .arrow_reader import ArrowReader 67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 68 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 28 import pyarrow.parquet as pq 29 ---> 30 from .download.download_config import DownloadConfig 31 from .naming import _split_re, filenames_for_dataset_split 32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables [/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module> 7 8 from .download_config import DownloadConfig ----> 9 from .download_manager import DownloadManager, DownloadMode 10 from .streaming_download_manager import StreamingDownloadManager [/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module> 33 from ..utils.info_utils import get_size_checksum_dict 34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm ---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str 36 from .download_config import DownloadConfig 37 [/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module> 38 import dill 39 import multiprocess ---> 40 import multiprocess.pool 41 import numpy as np 42 from packaging import version [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module> 607 # 608 --> 609 class ThreadPool(Pool): 610 611 from .dummy import Process [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool() 609 class ThreadPool(Pool): 610 --> 611 from .dummy import Process 612 613 def __init__(self, processes=None, 
initializer=None, initargs=()): [/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module> 85 # 86 ---> 87 class Condition(threading._Condition): 88 # XXX 89 if sys.version_info < (3, 0): AttributeError: module 'threading' has no attribute '_Condition' ``` </details> I've also found that attempting to install `datasets` and `apache-beam` together in certain environments (e.g. via pip inside a conda env) simply causes the installer to hang indefinitely. ### Expected behavior I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`. ### Environment info Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6134/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6133/comments
https://api.github.com/repos/huggingface/datasets/issues/6133/events
https://github.com/huggingface/datasets/issues/6133
1,844,511,519
I_kwDODunzps5t8QMf
6,133
Dataset is slower after calling `to_iterable_dataset`
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T06:36:23
2023-08-10T06:36:23
null
NONE
null
null
null
### Describe the bug Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert it to an `IterableDataset`? ### Steps to reproduce the bug Any dataset after converting to `IterableDataset` ### Expected behavior Maybe it should be faster on a big dataset? I have only tested on a small dataset ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6133/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6132/comments
https://api.github.com/repos/huggingface/datasets/issues/6132/events
https://github.com/huggingface/datasets/issues/6132
1,843,491,020
I_kwDODunzps5t4XDM
6,132
to_iterable_dataset is missing in document
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-09T15:15:03
2023-08-09T15:15:03
null
NONE
null
null
null
### Describe the bug to_iterable_dataset is missing from the documentation ### Steps to reproduce the bug to_iterable_dataset is missing from the documentation ### Expected behavior documentation enhancement ### Environment info unrelated
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6132/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6131/comments
https://api.github.com/repos/huggingface/datasets/issues/6131/events
https://github.com/huggingface/datasets/issues/6131
1,843,448,643
I_kwDODunzps5t4MtD
6,131
AttributeError: type object 'tqdm' has no attribute '_lock'
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-09T14:53:31
2023-08-09T14:54:36
null
CONTRIBUTOR
null
null
null
### Describe the bug Getting a tqdm issue when writing a Dask dataframe to the hub. Similar to #6066. Using the latest Datasets version doesn't seem to resolve it ### Steps to reproduce the bug This is a minimal reproducer: ``` import dask.dataframe as dd import pandas as pd import random import huggingface_hub data = {"number": [random.randint(0,10) for _ in range(1000)]} df = pd.DataFrame.from_dict(data) dataframe = dd.from_pandas(df, npartitions=1) dataframe = dataframe.repartition(npartitions=2) repo_id = "nielsr/test-dask" repo_path = f"hf://datasets/{repo_id}" huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True) dd.to_parquet(dataframe, path=f"{repo_path}/data") ``` Note: I'm intentionally repartitioning the Dask dataframe to 2 partitions, as it does work when only having one partition. ### Expected behavior Would expect to write to the hub without any problem. ### Environment info Datasets version 2.14.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6131/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6130/comments
https://api.github.com/repos/huggingface/datasets/issues/6130/events
https://github.com/huggingface/datasets/issues/6130
1,843,158,846
I_kwDODunzps5t3F8-
6,130
default config name doesn't work when config kwargs are specified.
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-09T12:43:15
2023-08-09T12:43:15
null
NONE
null
null
null
### Describe the bug https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522 If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` to their customized `BuilderConfig`, the logic is ignored, and the dataset cannot select the default config from multiple configs. ### Steps to reproduce the bug ```python import datasets datasets.load_dataset('/dataset/with/multiple/config') # Ok datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err ``` ### Expected behavior Default config behavior should be consistent. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6130/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6129/comments
https://api.github.com/repos/huggingface/datasets/issues/6129/events
https://github.com/huggingface/datasets/pull/6129
1,841,563,517
PR_kwDODunzps5Xcmuw
6,129
Release 2.14.4
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006053 / 0.011353 (-0.005299) | 0.003532 / 0.011008 (-0.007476) | 0.081930 / 0.038508 (0.043422) | 0.059043 / 0.023109 (0.035934) | 0.322785 / 0.275898 (0.046887) | 0.378158 / 0.323480 (0.054678) | 0.004709 / 0.007986 (-0.003277) | 0.002907 / 0.004328 (-0.001421) | 0.061516 / 0.004250 (0.057266) | 0.047209 / 0.037052 (0.010157) | 0.346885 / 0.258489 (0.088396) | 0.381011 / 0.293841 (0.087170) | 0.027491 / 0.128546 (-0.101055) | 0.008014 / 0.075646 (-0.067632) | 0.260663 / 0.419271 (-0.158608) | 0.045427 / 0.043533 (0.001894) | 0.315277 / 0.255139 (0.060138) | 0.377902 / 0.283200 (0.094703) | 0.021371 / 0.141683 (-0.120311) | 1.416350 / 1.452155 (-0.035804) | 1.483345 / 1.492716 (-0.009372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203660 / 0.018006 (0.185654) | 0.569081 / 0.000490 (0.568591) | 0.002742 / 0.000200 (0.002542) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023456 / 0.037411 (-0.013955) | 0.073954 / 0.014526 (0.059428) | 0.082991 / 0.176557 (-0.093566) | 0.144781 / 0.737135 (-0.592354) | 0.083346 / 0.296338 (-0.212992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391542 / 0.215209 (0.176333) | 3.909505 / 2.077655 (1.831850) | 
1.862234 / 1.504120 (0.358114) | 1.676076 / 1.541195 (0.134881) | 1.727595 / 1.468490 (0.259105) | 0.501769 / 4.584777 (-4.083008) | 3.083697 / 3.745712 (-0.662016) | 2.819751 / 5.269862 (-2.450111) | 1.867265 / 4.565676 (-2.698411) | 0.057575 / 0.424275 (-0.366700) | 0.006478 / 0.007607 (-0.001129) | 0.466684 / 0.226044 (0.240640) | 4.657982 / 2.268929 (2.389054) | 2.347052 / 55.444624 (-53.097573) | 1.964688 / 6.876477 (-4.911789) | 2.077821 / 2.142072 (-0.064252) | 0.590591 / 4.805227 (-4.214636) | 0.124585 / 6.500664 (-6.376079) | 0.059468 / 0.075469 (-0.016001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223484 / 1.841788 (-0.618304) | 18.104638 / 8.074308 (10.030330) | 13.755126 / 10.191392 (3.563734) | 0.143158 / 0.680424 (-0.537266) | 0.017147 / 0.534201 (-0.517054) | 0.337427 / 0.579283 (-0.241856) | 0.352270 / 0.434364 (-0.082094) | 0.383718 / 0.540337 (-0.156619) | 0.534973 / 1.386936 (-0.851963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006039 / 0.011353 (-0.005314) | 0.003735 / 0.011008 (-0.007274) | 0.061954 / 0.038508 (0.023446) | 0.061786 / 0.023109 (0.038677) | 0.429420 / 0.275898 (0.153522) | 0.457629 / 0.323480 (0.134149) | 0.004748 / 0.007986 (-0.003237) | 0.002843 / 0.004328 (-0.001485) | 0.061811 / 0.004250 (0.057560) | 0.048740 / 0.037052 (0.011687) | 0.430066 / 0.258489 (0.171577) | 0.465971 / 0.293841 (0.172130) | 0.027577 / 0.128546 (-0.100969) | 0.007981 / 0.075646 (-0.067665) | 0.067580 / 0.419271 (-0.351692) | 0.042058 / 0.043533 (-0.001475) | 0.428412 / 0.255139 (0.173273) | 0.451054 / 0.283200 (0.167855) | 0.020850 / 0.141683 (-0.120833) | 1.453907 / 1.452155 (0.001752) | 1.509914 / 1.492716 (0.017197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237713 / 0.018006 (0.219707) | 0.418064 / 0.000490 (0.417575) | 0.006411 / 0.000200 (0.006211) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024950 / 0.037411 (-0.012462) | 0.076806 / 0.014526 (0.062281) | 0.085237 / 0.176557 (-0.091320) | 0.137940 / 0.737135 (-0.599196) | 0.086266 / 0.296338 (-0.210072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418666 / 0.215209 (0.203457) | 4.160547 / 2.077655 (2.082893) | 2.135671 / 1.504120 (0.631551) | 1.964985 / 1.541195 (0.423790) | 2.009447 / 1.468490 (0.540957) | 0.501377 / 4.584777 (-4.083400) | 3.064293 / 3.745712 (-0.681419) | 2.827153 / 5.269862 (-2.442709) | 1.854698 / 4.565676 (-2.710978) | 0.057662 / 0.424275 (-0.366613) | 0.006829 / 0.007607 (-0.000778) | 0.496730 / 0.226044 (0.270686) | 4.964663 / 2.268929 (2.695735) | 2.583133 / 55.444624 (-52.861491) | 2.329700 / 6.876477 (-4.546776) | 2.415521 / 2.142072 (0.273449) | 0.591973 / 4.805227 (-4.213255) | 0.126801 / 6.500664 (-6.373863) | 0.062811 / 0.075469 (-0.012659) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.348575 / 1.841788 (-0.493212) | 18.282861 / 8.074308 (10.208553) | 13.734056 / 10.191392 (3.542664) | 0.154987 / 0.680424 (-0.525437) | 0.016996 / 0.534201 (-0.517205) | 0.335264 / 0.579283 (-0.244019) | 0.356907 / 0.434364 (-0.077456) | 0.399185 / 0.540337 (-0.141152) | 0.540209 / 1.386936 (-0.846727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#887bef1217e0f4441d57bf0f4d1e806df12f2c50 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006768 / 0.011353 (-0.004585) | 0.004250 / 0.011008 (-0.006758) | 0.086780 / 0.038508 (0.048272) | 0.080872 / 0.023109 (0.057762) | 0.309281 / 0.275898 (0.033383) | 0.352293 / 0.323480 (0.028814) | 0.005604 / 0.007986 (-0.002382) | 0.003544 / 0.004328 (-0.000784) | 0.066910 / 0.004250 (0.062659) | 0.055568 / 0.037052 (0.018516) | 0.314931 / 0.258489 (0.056442) | 0.366026 / 0.293841 (0.072185) | 0.031247 / 0.128546 (-0.097300) | 0.008860 / 0.075646 (-0.066786) | 0.293210 / 0.419271 (-0.126061) | 0.052868 / 0.043533 (0.009335) | 0.316769 / 0.255139 (0.061630) | 0.352128 / 0.283200 (0.068929) | 0.025492 / 0.141683 (-0.116190) | 1.478379 / 1.452155 (0.026224) | 1.573652 / 1.492716 (0.080936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294975 / 0.018006 (0.276968) | 0.615093 / 0.000490 (0.614603) | 0.004279 / 0.000200 (0.004079) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031557 / 0.037411 (-0.005855) | 0.085026 / 0.014526 (0.070500) | 0.101221 / 0.176557 (-0.075336) | 0.157432 / 0.737135 (-0.579703) | 0.102350 / 0.296338 (-0.193988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384158 / 0.215209 (0.168949) | 3.826656 / 2.077655 (1.749001) | 1.873510 / 1.504120 (0.369390) | 1.721913 / 1.541195 (0.180718) | 1.848779 / 1.468490 (0.380289) | 0.485128 / 4.584777 (-4.099649) | 3.656660 / 3.745712 (-0.089052) | 3.441964 / 5.269862 (-1.827898) | 2.150611 / 4.565676 (-2.415066) | 0.056869 / 0.424275 (-0.367406) | 0.007382 / 0.007607 (-0.000225) | 0.458751 / 0.226044 (0.232707) | 4.585028 / 2.268929 (2.316099) | 2.439538 / 55.444624 (-53.005086) | 2.116959 / 6.876477 (-4.759518) | 2.459220 / 2.142072 (0.317147) | 0.580907 / 4.805227 (-4.224321) | 0.134502 / 6.500664 (-6.366162) | 0.062528 / 0.075469 (-0.012941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251006 / 1.841788 (-0.590782) | 20.755849 / 8.074308 (12.681541) | 14.456950 / 10.191392 (4.265558) | 0.167074 / 0.680424 (-0.513350) | 0.018482 / 0.534201 (-0.515719) | 0.395867 / 0.579283 (-0.183416) | 0.415620 / 0.434364 (-0.018744) | 0.462247 / 0.540337 
(-0.078090) | 0.645762 / 1.386936 (-0.741174) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007050 / 0.011353 (-0.004303) | 0.004421 / 0.011008 (-0.006587) | 0.065312 / 0.038508 (0.026804) | 0.089790 / 0.023109 (0.066681) | 0.366318 / 0.275898 (0.090420) | 0.403542 / 0.323480 (0.080062) | 0.005695 / 0.007986 (-0.002290) | 0.003642 / 0.004328 (-0.000687) | 0.064540 / 0.004250 (0.060289) | 0.060933 / 0.037052 (0.023881) | 0.369004 / 0.258489 (0.110515) | 0.408056 / 0.293841 (0.114215) | 0.032124 / 0.128546 (-0.096422) | 0.008960 / 0.075646 (-0.066686) | 0.071267 / 0.419271 (-0.348005) | 0.049745 / 0.043533 (0.006212) | 0.367203 / 0.255139 (0.112064) | 0.383009 / 0.283200 (0.099809) | 0.025330 / 0.141683 (-0.116353) | 1.518290 / 1.452155 (0.066135) | 1.581738 / 1.492716 (0.089022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.338281 / 0.018006 (0.320275) | 0.538195 / 0.000490 (0.537706) | 0.008498 / 0.000200 (0.008298) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033279 / 0.037411 (-0.004133) | 0.093233 / 0.014526 (0.078707) | 0.106019 / 0.176557 (-0.070538) | 0.161262 / 0.737135 (-0.575874) | 0.109935 / 0.296338 (-0.186404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411563 / 0.215209 (0.196354) | 4.102149 / 2.077655 (2.024495) | 2.108513 / 1.504120 (0.604393) | 1.945344 / 1.541195 (0.404150) | 2.066964 
/ 1.468490 (0.598474) | 0.482771 / 4.584777 (-4.102006) | 3.659160 / 3.745712 (-0.086552) | 3.420833 / 5.269862 (-1.849029) | 2.147276 / 4.565676 (-2.418400) | 0.056957 / 0.424275 (-0.367318) | 0.007898 / 0.007607 (0.000290) | 0.482401 / 0.226044 (0.256357) | 4.821044 / 2.268929 (2.552115) | 2.567993 / 55.444624 (-52.876631) | 2.336165 / 6.876477 (-4.540312) | 2.545066 / 2.142072 (0.402994) | 0.580888 / 4.805227 (-4.224339) | 0.134092 / 6.500664 (-6.366572) | 0.062681 / 0.075469 (-0.012788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.379124 / 1.841788 (-0.462664) | 21.627949 / 8.074308 (13.553641) | 15.064818 / 10.191392 (4.873426) | 0.169707 / 0.680424 (-0.510716) | 0.018671 / 0.534201 (-0.515530) | 0.400496 / 0.579283 (-0.178787) | 0.415542 / 0.434364 (-0.018822) | 0.484351 / 0.540337 (-0.055986) | 0.646046 / 1.386936 (-0.740890) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007113 / 0.011353 (-0.004240) | 0.004436 / 0.011008 (-0.006572) | 0.087422 / 0.038508 (0.048914) | 0.085996 / 0.023109 (0.062887) | 0.311772 / 0.275898 (0.035873) | 0.353281 / 0.323480 (0.029801) | 0.004562 / 0.007986 (-0.003423) | 0.003840 / 0.004328 (-0.000488) | 0.066500 / 0.004250 (0.062250) | 0.061293 / 0.037052 (0.024241) | 0.328840 / 0.258489 (0.070351) | 0.365587 / 0.293841 (0.071746) | 0.031802 / 0.128546 (-0.096744) | 0.008881 / 0.075646 (-0.066765) | 0.289671 / 0.419271 (-0.129601) | 0.053348 / 0.043533 (0.009816) | 0.307822 / 0.255139 (0.052683) | 0.342559 / 0.283200 (0.059360) | 0.025760 / 0.141683 (-0.115923) | 1.509944 / 1.452155 (0.057789) | 1.556634 / 1.492716 (0.063918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282036 / 0.018006 (0.264029) | 0.608350 / 0.000490 (0.607860) | 0.004843 / 
0.000200 (0.004643) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029810 / 0.037411 (-0.007601) | 0.086215 / 0.014526 (0.071689) | 0.102200 / 0.176557 (-0.074356) | 0.158051 / 0.737135 (-0.579084) | 0.103083 / 0.296338 (-0.193255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392119 / 0.215209 (0.176910) | 3.895796 / 2.077655 (1.818141) | 1.921118 / 1.504120 (0.416998) | 1.754271 / 1.541195 (0.213076) | 1.880991 / 1.468490 (0.412501) | 0.481158 / 4.584777 (-4.103618) | 3.609210 / 3.745712 (-0.136502) | 3.412018 / 5.269862 (-1.857843) | 2.131710 / 4.565676 (-2.433967) | 0.057122 / 0.424275 (-0.367153) | 0.007444 / 0.007607 (-0.000163) | 0.468880 / 0.226044 (0.242835) | 4.682441 / 2.268929 (2.413512) | 2.505613 / 55.444624 (-52.939012) | 2.149655 / 6.876477 (-4.726822) | 2.465904 / 2.142072 (0.323832) | 0.578877 / 4.805227 (-4.226350) | 0.133504 / 6.500664 (-6.367160) | 0.061422 / 0.075469 (-0.014047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269395 / 1.841788 (-0.572393) | 21.107558 / 8.074308 (13.033250) | 15.318502 / 10.191392 (5.127110) | 0.165273 / 0.680424 (-0.515151) | 0.018783 / 0.534201 (-0.515418) | 0.396259 / 0.579283 (-0.183024) | 0.412907 / 0.434364 (-0.021457) | 0.465723 / 0.540337 (-0.074615) | 0.638414 / 1.386936 (-0.748522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007083 / 0.011353 (-0.004270) | 0.004216 / 0.011008 (-0.006793) | 0.065362 / 0.038508 (0.026854) | 0.095454 / 0.023109 (0.072345) | 0.364220 / 0.275898 (0.088322) | 0.417650 / 0.323480 (0.094170) | 0.006114 / 0.007986 (-0.001872) | 0.003577 / 0.004328 (-0.000751) | 0.064830 / 0.004250 (0.060579) | 0.062535 / 0.037052 (0.025483) | 0.381844 / 0.258489 (0.123355) | 0.418996 / 0.293841 (0.125155) | 0.031386 / 0.128546 (-0.097160) | 0.008913 / 0.075646 (-0.066733) | 0.070860 / 0.419271 (-0.348411) | 0.049132 / 0.043533 (0.005599) | 0.360406 / 0.255139 (0.105267) | 0.392407 / 0.283200 (0.109207) | 0.024611 / 0.141683 (-0.117072) | 1.509051 / 1.452155 (0.056896) | 1.570288 / 1.492716 (0.077572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368611 / 0.018006 (0.350605) | 0.537587 / 0.000490 (0.537098) | 0.028056 / 0.000200 (0.027856) | 0.000317 / 0.000054 (0.000262) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031570 / 0.037411 (-0.005841) | 0.088985 / 0.014526 (0.074460) | 0.105268 / 0.176557 (-0.071288) | 0.156724 / 0.737135 (-0.580412) | 0.105266 / 0.296338 (-0.191073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413861 / 0.215209 (0.198652) | 4.127001 / 2.077655 (2.049347) | 2.112114 / 1.504120 (0.607994) | 1.945200 / 1.541195 (0.404005) | 2.083031 / 1.468490 (0.614540) | 0.488086 / 4.584777 (-4.096691) | 3.565584 / 3.745712 (-0.180128) | 3.380782 / 5.269862 (-1.889079) | 2.103481 / 4.565676 (-2.462195) | 0.058203 / 0.424275 (-0.366072) | 0.007996 / 0.007607 (0.000389) | 0.487986 / 0.226044 (0.261941) | 4.871023 / 2.268929 (2.602095) | 2.584632 / 55.444624 (-52.859992) | 2.240103 / 6.876477 (-4.636374) | 2.555165 / 2.142072 (0.413092) | 0.591950 / 4.805227 (-4.213278) | 0.134919 / 6.500664 (-6.365745) | 0.062868 / 0.075469 (-0.012601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369731 / 1.841788 (-0.472057) | 21.497888 / 8.074308 (13.423580) | 14.555054 / 10.191392 (4.363662) | 0.168768 / 0.680424 (-0.511656) | 0.018837 / 0.534201 (-0.515364) | 0.394512 / 0.579283 (-0.184771) | 0.405459 / 0.434364 (-0.028905) | 0.475479 / 0.540337 (-0.064858) | 0.631994 / 1.386936 (-0.754942) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002280) | 0.004894 / 0.011008 (-0.006114) | 0.108790 / 0.038508 (0.070282) | 0.081783 / 0.023109 (0.058674) | 0.381963 / 0.275898 (0.106064) | 0.450700 / 0.323480 (0.127220) | 0.006961 / 0.007986 (-0.001025) | 0.004035 / 0.004328 (-0.000293) | 0.081420 / 0.004250 (0.077169) | 0.058029 / 0.037052 (0.020976) | 0.437453 / 0.258489 (0.178964) | 0.472607 / 0.293841 (0.178766) | 0.048663 / 0.128546 (-0.079884) | 0.013512 / 0.075646 (-0.062134) | 0.406009 / 0.419271 (-0.013262) | 0.067616 / 0.043533 (0.024084) | 0.383641 / 0.255139 (0.128502) | 0.456734 / 0.283200 (0.173534) | 0.033391 / 0.141683 (-0.108292) | 1.753529 / 1.452155 (0.301375) | 1.859831 / 1.492716 (0.367115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215128 / 0.018006 (0.197122) | 0.538261 / 0.000490 (0.537771) | 0.005430 / 0.000200 (0.005230) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032664 / 0.037411 (-0.004748) | 0.093465 / 0.014526 (0.078939) | 0.106637 / 0.176557 (-0.069919) | 0.173642 / 0.737135 (-0.563494) | 0.113944 / 0.296338 (-0.182394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629212 / 0.215209 
(0.414003) | 6.116729 / 2.077655 (4.039075) | 2.818000 / 1.504120 (1.313880) | 2.515317 / 1.541195 (0.974122) | 2.466588 / 1.468490 (0.998098) | 0.850815 / 4.584777 (-3.733962) | 5.051292 / 3.745712 (1.305579) | 4.472138 / 5.269862 (-0.797724) | 2.968317 / 4.565676 (-1.597360) | 0.100173 / 0.424275 (-0.324102) | 0.008407 / 0.007607 (0.000800) | 0.743972 / 0.226044 (0.517928) | 7.397619 / 2.268929 (5.128690) | 3.596681 / 55.444624 (-51.847943) | 2.854674 / 6.876477 (-4.021803) | 3.114274 / 2.142072 (0.972201) | 1.064879 / 4.805227 (-3.740348) | 0.215981 / 6.500664 (-6.284683) | 0.078159 / 0.075469 (0.002690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543291 / 1.841788 (-0.298497) | 23.244641 / 8.074308 (15.170333) | 20.784610 / 10.191392 (10.593218) | 0.222002 / 0.680424 (-0.458422) | 0.028584 / 0.534201 (-0.505617) | 0.478563 / 0.579283 (-0.100720) | 0.556101 / 0.434364 (0.121737) | 0.547446 / 0.540337 (0.007109) | 0.764318 / 1.386936 (-0.622618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.004925 / 0.011008 (-0.006083) | 0.078995 / 0.038508 (0.040487) | 0.092878 / 0.023109 (0.069769) | 0.485615 / 0.275898 (0.209717) | 0.532157 / 0.323480 (0.208677) | 0.008228 / 0.007986 (0.000243) | 0.004777 / 0.004328 (0.000449) | 0.076892 / 0.004250 (0.072642) | 0.066905 / 0.037052 (0.029853) | 0.465497 / 0.258489 (0.207008) | 0.520153 / 0.293841 (0.226312) | 0.047357 / 0.128546 (-0.081189) | 0.016870 / 0.075646 (-0.058776) | 0.090481 / 0.419271 (-0.328791) | 0.060774 / 0.043533 (0.017241) | 0.474368 / 0.255139 (0.219229) | 0.503981 / 0.283200 (0.220781) | 0.036025 / 0.141683 (-0.105658) | 1.769939 / 1.452155 (0.317784) | 1.851518 / 1.492716 (0.358802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265947 / 0.018006 (0.247941) | 0.532317 / 0.000490 (0.531828) | 0.004997 / 0.000200 (0.004797) | 0.000130 / 0.000054 
(0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034112 / 0.037411 (-0.003299) | 0.102290 / 0.014526 (0.087764) | 0.109989 / 0.176557 (-0.066567) | 0.182813 / 0.737135 (-0.554323) | 0.111774 / 0.296338 (-0.184565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584893 / 0.215209 (0.369684) | 6.138505 / 2.077655 (4.060850) | 2.925761 / 1.504120 (1.421641) | 2.607320 / 1.541195 (1.066125) | 2.655827 / 1.468490 (1.187337) | 0.871140 / 4.584777 (-3.713637) | 5.051171 / 3.745712 (1.305459) | 4.708008 / 5.269862 (-0.561854) | 3.027485 / 4.565676 (-1.538191) | 0.100970 / 0.424275 (-0.323305) | 0.009640 / 0.007607 (0.002033) | 0.747818 / 0.226044 (0.521774) | 7.539930 / 2.268929 (5.271001) | 3.611693 / 55.444624 (-51.832931) | 2.924087 / 6.876477 (-3.952390) | 3.141993 / 2.142072 (0.999920) | 1.062921 / 4.805227 (-3.742306) | 0.213185 / 6.500664 (-6.287479) | 0.077146 / 0.075469 (0.001677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669182 / 1.841788 (-0.172606) | 23.810242 / 8.074308 (15.735934) | 21.220649 / 10.191392 (11.029257) | 0.212639 / 0.680424 (-0.467785) | 0.026705 / 0.534201 (-0.507496) | 0.469231 / 0.579283 (-0.110053) | 0.551672 / 0.434364 (0.117308) | 0.575043 / 0.540337 (0.034706) | 0.767511 / 1.386936 (-0.619425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n" ]
2023-08-08T15:43:56
2023-08-08T16:08:22
2023-08-08T15:49:06
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6129", "html_url": "https://github.com/huggingface/datasets/pull/6129", "diff_url": "https://github.com/huggingface/datasets/pull/6129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6129.patch", "merged_at": "2023-08-08T15:49:06" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6129/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6128/comments
https://api.github.com/repos/huggingface/datasets/issues/6128/events
https://github.com/huggingface/datasets/issues/6128
1,841,545,493
I_kwDODunzps5tw8EV
6,128
IndexError: Invalid key: 88 is out of bounds for size 0
{ "login": "TomasAndersonFang", "id": 38727343, "node_id": "MDQ6VXNlcjM4NzI3MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomasAndersonFang", "html_url": "https://github.com/TomasAndersonFang", "followers_url": "https://api.github.com/users/TomasAndersonFang/followers", "following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}", "gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions", "organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs", "repos_url": "https://api.github.com/users/TomasAndersonFang/repos", "events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}", "received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile", "> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 324, in _compile\r\n out_code = transform_code_object(code, transform)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py\", line 445, in transform_code_object\r\n transformations(instructions, code_options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 311, in transform\r\n tracer.run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1726, in run\r\n super().run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 576, in run\r\n and self.step()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 540, in step\r\n getattr(self, inst.opname)(inst)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1030, in LOAD_ATTR\r\n result = BuiltinVariable(getattr).call_function(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 566, in call_function\r\n result = handler(tx, *args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 931, in call_getattr\r\n return obj.var_getattr(tx, name).add_options(options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py\", line 124, in var_getattr\r\n subobj = inspect.getattr_static(base, name)\r\n File \"/apps/Arch/software/Python/3.10.8-GCCcore-12.2.0/lib/python3.10/inspect.py\", line 1777, in getattr_static\r\n raise AttributeError(attr)\r\nAttributeError: config\r\n\r\nfrom user code:\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/peft/peft_model.py\", line 909, in forward\r\n if self.base_model.config.model_type == \"mpt\":\r\n\r\nSet torch._dynamo.config.verbose=True for more information\r\n\r\n\r\nYou can suppress this exception and fall back to eager by setting:\r\n torch._dynamo.config.suppress_errors = True\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 228, in <module>\r\n main()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 221, in main\r\n trainer.train()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\n return inner_training_loop(\r\n File 
\"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2654, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2679, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 82, in forward\r\n return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 209, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 581, in forward\r\n return model_forward(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 569, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 337, in catch_errors\r\n return callback(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 404, in _convert_frame\r\n result = inner_convert(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 104, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 262, in _convert_frame_assert\r\n return _compile(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/utils.py\", line 163, in time_wrapper\r\n r = func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 394, in _compile\r\n raise InternalTorchDynamoError() from e\r\ntorch._dynamo.exc.InternalTorchDynamoError\r\n```", "Hi @TomasAndersonFang,\r\n\r\nI guess in this case it may be an issue with `transformers` (or `PyTorch`). I would recommend you open an issue on their repo." ]
2023-08-08T15:32:08
2023-08-10T09:31:12
null
NONE
null
null
null
### Describe the bug This bug generates when I use torch.compile(model) in my code, which seems to raise an error in datasets lib. ### Steps to reproduce the bug I use the following code to fine-tune Falcon on my private dataset. ```python import transformers from transformers import ( AutoModelForCausalLM, AutoTokenizer, AutoConfig, DataCollatorForSeq2Seq, Trainer, Seq2SeqTrainer, HfArgumentParser, Seq2SeqTrainingArguments, BitsAndBytesConfig, ) from peft import ( LoraConfig, get_peft_model, get_peft_model_state_dict, prepare_model_for_int8_training, set_peft_model_state_dict, ) import torch import os import evaluate import functools from datasets import load_dataset import bitsandbytes as bnb import logging import json import copy from typing import Dict, Optional, Sequence from dataclasses import dataclass, field # Lora settings LORA_R = 8 LORA_ALPHA = 16 LORA_DROPOUT= 0.05 LORA_TARGET_MODULES = ["query_key_value"] @dataclass class ModelArguments: model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B") @dataclass class DataArguments: data_path: str = field(default=None, metadata={"help": "Path to the training data."}) train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."}) eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."}) cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."}) num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."}) @dataclass class TrainingArguments(transformers.TrainingArguments): # cache_dir: Optional[str] = field(default=None) optim: str = field(default="adamw_torch") model_max_length: int = field( default=512, metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."}, ) is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."}) def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True): result = tokenizer( text, truncation=True, max_length=max_seq_len, padding=False, return_tensors=None, ) if ( result["input_ids"][-1] != tokenizer.eos_token_id and len(result["input_ids"]) < max_seq_len and add_eos_token ): result["input_ids"].append(tokenizer.eos_token_id) result["attention_mask"].append(1) if add_eos_token and len(result["input_ids"]) >= max_seq_len: result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id result["attention_mask"][max_seq_len - 1] = 1 result["labels"] = result["input_ids"].copy() return result def main(): parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() config = AutoConfig.from_pretrained( model_args.model_name_or_path, cache_dir=data_args.cache_path, trust_remote_code=True, ) if training_args.is_lora: model = AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=data_args.cache_path, torch_dtype=torch.float16, trust_remote_code=True, load_in_8bit=True, quantization_config=BitsAndBytesConfig( load_in_8bit=True, llm_int8_threshold=6.0 ), ) model = prepare_model_for_int8_training(model) config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=LORA_TARGET_MODULES, lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) else: model = AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, torch_dtype=torch.float16, cache_dir=data_args.cache_path, trust_remote_code=True, ) model.config.use_cache = False def 
print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. """ trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) print_trainable_parameters(model) tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=data_args.cache_path, model_max_length=training_args.model_max_length, padding_side="left", use_fast=True, trust_remote_code=True, ) tokenizer.pad_token = tokenizer.eos_token # Load dataset def generate_and_tokenize_prompt(sample): input_text = sample["input"] target_text = sample["output"] + tokenizer.eos_token full_text = input_text + target_text tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512) tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512) input_len = len(tokenized_input_text["input_ids"]) - 1 # -1 for eos token tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:] return tokenized_full_text data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.eval_file is not None: data_files["eval"] = data_args.eval_file dataset = load_dataset(data_args.data_path, data_files=data_files) train_dataset = dataset["train"] eval_dataset = dataset["eval"] train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc) eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc) data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True) # Evaluation metrics def compute_metrics(eval_preds, tokenizer): metric = evaluate.load('exact_match') preds, labels = eval_preds # In case the model returns more than the prediction logits if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False) # Replace -100s in the labels as we can't decode them labels[labels == -100] = tokenizer.pad_token_id decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False) # Some simple post-processing decoded_preds = [pred.strip() for pred in decoded_preds] decoded_labels = [label.strip() for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels) return {'exact_match': result['exact_match']} compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer) model = torch.compile(model) # Training trainer = Trainer( model=model, train_dataset=train_dataset, eval_dataset=eval_dataset, args=training_args, data_collator=data_collator, compute_metrics=compute_metrics_fn, ) trainer.train() trainer.save_state() trainer.save_model(output_dir=training_args.output_dir) tokenizer.save_pretrained(save_directory=training_args.output_dir) if __name__ == "__main__": main() ``` When I didn't use `torch.cpmpile(model)`, my code worked well. 
But when I added this line to my code, It produced the following error: ``` Traceback (most recent call last): File "falcon_sft.py", line 230, in <module> main() File "falcon_sft.py", line 223, in main trainer.train() File "python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__ current_batch = next(dataloader_iter) File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__ data = self._next_data() File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = self.dataset.__getitems__(possibly_batched_index) File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__ batch = self.__getitem__(keys) File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__ return self._getitem(key) File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table _check_valid_index_key(key, size) File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key _check_valid_index_key(int(max(key)), size=size) File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 88 is out of bounds for size 0 ``` So I'm confused about why this error was generated, and how to fix it. Is this error produced by datasets or `torch.compile`? ### Expected behavior I want to use `torch.compile` in my code. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
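Following the suggestion in the comment thread above, a minimal sketch of the alternative would be to drop the manual `model = torch.compile(model)` call and let the `Trainer` compile the model via `TrainingArguments(torch_compile=True)`. This is only a sketch of that suggestion; the thread notes it still fails with a PEFT-related `AttributeError`, which points to `transformers`/PyTorch rather than `datasets`. The variables below stand for the objects built in the script above.

```python
# Hedged sketch of the maintainers' suggestion: let the Trainer drive torch.compile.
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    torch_compile=True,  # Trainer wraps the model with torch.compile internally
)

trainer = Trainer(
    model=model,                  # the (optionally LoRA-wrapped) model from the script
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)
trainer.train()
```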
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6128/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6127/comments
https://api.github.com/repos/huggingface/datasets/issues/6127/events
https://github.com/huggingface/datasets/pull/6127
1,839,746,721
PR_kwDODunzps5XWdP5
6,127
Fix authentication issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006103 / 0.011353 (-0.005250) | 0.003588 / 0.011008 (-0.007420) | 0.080335 / 0.038508 (0.041827) | 0.059634 / 0.023109 (0.036525) | 0.356093 / 0.275898 (0.080195) | 0.407376 / 0.323480 (0.083896) | 0.005343 / 0.007986 (-0.002643) | 0.002928 / 0.004328 (-0.001400) | 0.062580 / 0.004250 (0.058330) | 0.047544 / 0.037052 (0.010491) | 0.364305 / 0.258489 (0.105816) | 0.421463 / 0.293841 (0.127623) | 0.027249 / 0.128546 (-0.101298) | 0.008010 / 0.075646 (-0.067636) | 0.262543 / 0.419271 (-0.156728) | 0.044978 / 0.043533 (0.001445) | 0.339344 / 0.255139 (0.084205) | 0.395288 / 0.283200 (0.112088) | 0.021425 / 0.141683 (-0.120258) | 1.439767 / 1.452155 (-0.012387) | 1.498081 / 1.492716 (0.005365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196976 / 0.018006 (0.178970) | 0.435383 / 0.000490 (0.434893) | 0.004559 / 0.000200 (0.004359) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023653 / 0.037411 (-0.013759) | 0.072944 / 0.014526 (0.058418) | 0.083651 / 0.176557 (-0.092906) | 0.144590 / 0.737135 (-0.592545) | 0.084844 / 0.296338 (-0.211494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398752 / 0.215209 (0.183543) | 3.959539 / 2.077655 (1.881884) | 
1.935277 / 1.504120 (0.431157) | 1.751994 / 1.541195 (0.210799) | 1.828386 / 1.468490 (0.359896) | 0.500492 / 4.584777 (-4.084284) | 3.086630 / 3.745712 (-0.659082) | 2.851664 / 5.269862 (-2.418198) | 1.869792 / 4.565676 (-2.695885) | 0.058509 / 0.424275 (-0.365766) | 0.006500 / 0.007607 (-0.001107) | 0.467468 / 0.226044 (0.241424) | 4.686168 / 2.268929 (2.417240) | 2.427632 / 55.444624 (-53.016993) | 2.193194 / 6.876477 (-4.683283) | 2.408574 / 2.142072 (0.266501) | 0.592173 / 4.805227 (-4.213054) | 0.125381 / 6.500664 (-6.375283) | 0.060679 / 0.075469 (-0.014790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236066 / 1.841788 (-0.605722) | 18.591689 / 8.074308 (10.517381) | 14.138774 / 10.191392 (3.947382) | 0.147455 / 0.680424 (-0.532968) | 0.016921 / 0.534201 (-0.517280) | 0.328129 / 0.579283 (-0.251154) | 0.348872 / 0.434364 (-0.085491) | 0.380311 / 0.540337 (-0.160026) | 0.532901 / 1.386936 (-0.854035) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005438) | 0.003614 / 0.011008 (-0.007394) | 0.062857 / 0.038508 (0.024349) | 0.060633 / 0.023109 (0.037524) | 0.419684 / 0.275898 (0.143786) | 0.449025 / 0.323480 (0.125546) | 0.004595 / 0.007986 (-0.003391) | 0.002861 / 0.004328 (-0.001467) | 0.063253 / 0.004250 (0.059003) | 0.048770 / 0.037052 (0.011718) | 0.419838 / 0.258489 (0.161349) | 0.465183 / 0.293841 (0.171342) | 0.027350 / 0.128546 (-0.101196) | 0.008065 / 0.075646 (-0.067582) | 0.068321 / 0.419271 (-0.350950) | 0.041083 / 0.043533 (-0.002449) | 0.400831 / 0.255139 (0.145692) | 0.449286 / 0.283200 (0.166086) | 0.020472 / 0.141683 (-0.121210) | 1.437215 / 1.452155 (-0.014940) | 1.503679 / 1.492716 (0.010963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230764 / 0.018006 (0.212758) | 0.420774 / 0.000490 (0.420285) | 0.004012 / 0.000200 (0.003812) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026009 / 0.037411 (-0.011402) | 0.077943 / 0.014526 (0.063417) | 0.087281 / 0.176557 (-0.089276) | 0.139422 / 0.737135 (-0.597713) | 0.089090 / 0.296338 (-0.207248) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417298 / 0.215209 (0.202088) | 4.152303 / 2.077655 (2.074648) | 2.179996 / 1.504120 (0.675877) | 2.020619 / 1.541195 (0.479424) | 2.085241 / 1.468490 (0.616751) | 0.501111 / 4.584777 (-4.083666) | 3.079849 / 3.745712 (-0.665863) | 2.820607 / 5.269862 (-2.449255) | 1.863988 / 4.565676 (-2.701688) | 0.057662 / 0.424275 (-0.366613) | 0.006778 / 0.007607 (-0.000830) | 0.498661 / 0.226044 (0.272616) | 4.986503 / 2.268929 (2.717574) | 2.620676 / 55.444624 (-52.823949) | 2.297546 / 6.876477 (-4.578931) | 2.458148 / 2.142072 (0.316075) | 0.599490 / 4.805227 (-4.205738) | 0.125102 / 6.500664 (-6.375562) | 0.061411 / 0.075469 (-0.014059) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323816 / 1.841788 (-0.517971) | 18.462614 / 8.074308 (10.388306) | 13.845826 / 10.191392 (3.654434) | 0.146115 / 0.680424 (-0.534309) | 0.016862 / 0.534201 (-0.517339) | 0.335449 / 0.579283 (-0.243834) | 0.343792 / 0.434364 (-0.090572) | 0.394068 / 0.540337 (-0.146269) | 0.536378 / 1.386936 (-0.850558) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de3f00368c9236e9410821f5fddb95d6069883c1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006825 / 0.011353 (-0.004527) | 0.004005 / 0.011008 (-0.007003) | 0.085504 / 0.038508 (0.046996) | 0.077252 / 0.023109 (0.054143) | 0.351891 / 0.275898 (0.075993) | 0.383404 / 0.323480 (0.059924) | 0.004153 / 0.007986 (-0.003833) | 0.003344 / 0.004328 (-0.000985) | 0.064936 / 0.004250 (0.060685) | 0.057653 / 0.037052 (0.020601) | 0.368155 / 0.258489 (0.109666) | 0.406122 / 0.293841 (0.112282) | 0.032049 / 0.128546 (-0.096497) | 0.008698 / 0.075646 (-0.066949) | 0.292394 / 0.419271 (-0.126878) | 0.053634 / 0.043533 (0.010101) | 0.358273 / 0.255139 (0.103134) | 0.378441 / 0.283200 (0.095242) | 0.026928 / 0.141683 (-0.114755) | 1.458718 / 1.452155 (0.006563) | 1.536231 / 1.492716 (0.043515) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213956 / 0.018006 (0.195950) | 0.458620 / 0.000490 (0.458130) | 0.002718 / 0.000200 (0.002519) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083922 / 0.014526 (0.069396) | 0.152056 / 0.176557 (-0.024501) | 0.151584 / 0.737135 (-0.585552) | 0.095698 / 0.296338 (-0.200641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407762 / 0.215209 (0.192553) | 4.074324 / 2.077655 (1.996669) | 2.089929 / 1.504120 (0.585809) | 1.920024 / 1.541195 (0.378829) | 2.013410 / 1.468490 (0.544920) | 0.486056 / 4.584777 (-4.098721) | 3.656869 / 3.745712 (-0.088843) | 3.304008 / 5.269862 (-1.965854) | 2.074363 / 4.565676 (-2.491313) | 0.057293 / 0.424275 (-0.366982) | 0.007240 / 0.007607 (-0.000367) | 0.482696 / 0.226044 (0.256652) | 4.833251 / 2.268929 (2.564322) | 2.570391 / 55.444624 (-52.874233) | 2.220619 / 6.876477 (-4.655857) | 2.426316 / 2.142072 (0.284243) | 0.584811 / 4.805227 (-4.220416) | 0.134907 / 6.500664 (-6.365757) | 0.061115 / 0.075469 (-0.014354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251969 / 1.841788 (-0.589818) | 19.601611 / 8.074308 (11.527303) | 14.190217 / 10.191392 (3.998825) | 0.166296 / 0.680424 (-0.514128) | 0.018334 / 0.534201 (-0.515867) | 0.395172 / 0.579283 (-0.184111) | 0.410440 / 0.434364 (-0.023924) | 0.462263 / 0.540337 
(-0.078074) | 0.645504 / 1.386936 (-0.741432) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004084 / 0.011008 (-0.006924) | 0.065208 / 0.038508 (0.026700) | 0.077809 / 0.023109 (0.054699) | 0.386472 / 0.275898 (0.110574) | 0.418686 / 0.323480 (0.095206) | 0.005346 / 0.007986 (-0.002640) | 0.003416 / 0.004328 (-0.000912) | 0.066209 / 0.004250 (0.061958) | 0.057517 / 0.037052 (0.020465) | 0.407684 / 0.258489 (0.149195) | 0.425438 / 0.293841 (0.131597) | 0.032166 / 0.128546 (-0.096380) | 0.008662 / 0.075646 (-0.066985) | 0.071712 / 0.419271 (-0.347560) | 0.049764 / 0.043533 (0.006231) | 0.394882 / 0.255139 (0.139743) | 0.403589 / 0.283200 (0.120389) | 0.023688 / 0.141683 (-0.117995) | 1.468488 / 1.452155 (0.016334) | 1.533118 / 1.492716 (0.040401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252949 / 0.018006 (0.234943) | 0.447355 / 0.000490 (0.446865) | 0.011721 / 0.000200 (0.011521) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031444 / 0.037411 (-0.005968) | 0.089390 / 0.014526 (0.074864) | 0.100103 / 0.176557 (-0.076454) | 0.153301 / 0.737135 (-0.583835) | 0.101336 / 0.296338 (-0.195003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408574 / 0.215209 (0.193365) | 4.073135 / 2.077655 (1.995480) | 2.086550 / 1.504120 (0.582430) | 1.930651 / 1.541195 (0.389457) | 2.013548 
/ 1.468490 (0.545058) | 0.477235 / 4.584777 (-4.107542) | 3.547545 / 3.745712 (-0.198167) | 3.321957 / 5.269862 (-1.947905) | 2.057705 / 4.565676 (-2.507971) | 0.056730 / 0.424275 (-0.367545) | 0.007882 / 0.007607 (0.000275) | 0.487297 / 0.226044 (0.261253) | 4.874184 / 2.268929 (2.605255) | 2.631129 / 55.444624 (-52.813496) | 2.235755 / 6.876477 (-4.640722) | 2.463329 / 2.142072 (0.321257) | 0.578308 / 4.805227 (-4.226919) | 0.132726 / 6.500664 (-6.367938) | 0.064883 / 0.075469 (-0.010586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347564 / 1.841788 (-0.494223) | 20.192973 / 8.074308 (12.118665) | 14.563553 / 10.191392 (4.372161) | 0.168244 / 0.680424 (-0.512180) | 0.018638 / 0.534201 (-0.515563) | 0.394789 / 0.579283 (-0.184494) | 0.419677 / 0.434364 (-0.014687) | 0.480274 / 0.540337 (-0.060063) | 0.641204 / 1.386936 (-0.745732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c7a0d56b60bf700d6a491fa30eaf66500969315 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005939 / 0.011353 (-0.005413) | 0.003457 / 0.011008 (-0.007551) | 0.079985 / 0.038508 (0.041477) | 0.056492 / 0.023109 (0.033383) | 0.312356 / 0.275898 (0.036458) | 0.354038 / 0.323480 (0.030558) | 0.004551 / 0.007986 (-0.003435) | 0.002828 / 0.004328 (-0.001501) | 0.062369 / 0.004250 (0.058119) | 0.044712 / 0.037052 (0.007660) | 0.318244 / 0.258489 (0.059755) | 0.361977 / 0.293841 (0.068136) | 0.026460 / 0.128546 (-0.102086) | 0.007928 / 0.075646 (-0.067719) | 0.261378 / 0.419271 (-0.157894) | 0.044209 / 0.043533 (0.000676) | 0.313931 / 0.255139 (0.058792) | 0.339553 / 0.283200 (0.056354) | 0.019776 / 0.141683 (-0.121907) | 1.443126 / 1.452155 (-0.009029) | 1.508149 / 1.492716 (0.015432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183801 / 0.018006 (0.165795) | 0.427967 / 0.000490 (0.427477) | 0.002028 / 
0.000200 (0.001828) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023697 / 0.037411 (-0.013715) | 0.072128 / 0.014526 (0.057602) | 0.083701 / 0.176557 (-0.092855) | 0.142821 / 0.737135 (-0.594315) | 0.082276 / 0.296338 (-0.214063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434427 / 0.215209 (0.219218) | 4.325962 / 2.077655 (2.248308) | 2.277115 / 1.504120 (0.772995) | 2.093736 / 1.541195 (0.552541) | 2.127984 / 1.468490 (0.659494) | 0.502336 / 4.584777 (-4.082441) | 3.023243 / 3.745712 (-0.722469) | 2.805154 / 5.269862 (-2.464708) | 1.821273 / 4.565676 (-2.744403) | 0.057480 / 0.424275 (-0.366795) | 0.006365 / 0.007607 (-0.001242) | 0.508258 / 0.226044 (0.282213) | 5.087950 / 2.268929 (2.819022) | 2.705029 / 55.444624 (-52.739596) | 2.378392 / 6.876477 (-4.498085) | 2.515380 / 2.142072 (0.373307) | 0.589283 / 4.805227 (-4.215944) | 0.125719 / 6.500664 (-6.374945) | 0.061074 / 0.075469 (-0.014395) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221895 / 1.841788 (-0.619893) | 18.025917 / 8.074308 (9.951609) | 13.556901 / 10.191392 (3.365509) | 0.142614 / 0.680424 (-0.537809) | 0.016731 / 0.534201 (-0.517469) | 0.328374 / 0.579283 (-0.250910) | 0.342553 / 0.434364 (-0.091811) | 0.374502 / 0.540337 (-0.165836) | 0.534173 / 1.386936 (-0.852763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005817 / 0.011353 (-0.005536) | 0.003500 / 0.011008 (-0.007509) | 0.062240 / 0.038508 (0.023732) | 0.058128 / 0.023109 (0.035019) | 0.424014 / 0.275898 (0.148116) | 0.468453 / 0.323480 (0.144973) | 0.004641 / 0.007986 (-0.003345) | 0.002821 / 0.004328 (-0.001508) | 0.062180 / 0.004250 (0.057930) | 0.047578 / 0.037052 (0.010526) | 0.427367 / 0.258489 (0.168878) | 0.467889 / 0.293841 (0.174048) | 0.027144 / 0.128546 (-0.101403) | 0.007969 / 0.075646 (-0.067678) | 0.067764 / 0.419271 (-0.351508) | 0.040719 / 0.043533 (-0.002814) | 0.423663 / 0.255139 (0.168524) | 0.458556 / 0.283200 (0.175356) | 0.019196 / 0.141683 (-0.122487) | 1.471546 / 1.452155 (0.019392) | 1.547541 / 1.492716 (0.054825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228777 / 0.018006 (0.210770) | 0.406663 / 0.000490 (0.406173) | 0.003688 / 0.000200 (0.003488) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025494 / 0.037411 (-0.011917) | 0.076339 / 0.014526 (0.061814) | 0.084233 / 0.176557 (-0.092324) | 0.136995 / 0.737135 (-0.600140) | 0.085443 / 0.296338 (-0.210895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420441 / 0.215209 (0.205232) | 4.187018 / 2.077655 (2.109363) | 2.142139 / 1.504120 (0.638019) | 1.974530 / 1.541195 (0.433335) | 2.027321 / 1.468490 (0.558831) | 0.498116 / 4.584777 (-4.086661) | 2.988514 / 3.745712 (-0.757198) | 2.782046 / 5.269862 (-2.487816) | 1.821725 / 4.565676 (-2.743951) | 0.057711 / 0.424275 (-0.366564) | 0.006664 / 0.007607 (-0.000944) | 0.491015 / 0.226044 (0.264971) | 4.921037 / 2.268929 (2.652108) | 2.574964 / 55.444624 (-52.869661) | 2.251703 / 6.876477 (-4.624774) | 2.361154 / 2.142072 (0.219082) | 0.593362 / 4.805227 (-4.211865) | 0.126107 / 6.500664 (-6.374557) | 0.061840 / 0.075469 (-0.013630) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327459 / 1.841788 (-0.514328) | 18.062960 / 8.074308 (9.988652) | 13.669253 / 10.191392 (3.477861) | 0.130719 / 0.680424 (-0.549705) | 0.016564 / 0.534201 (-0.517637) | 0.335821 / 0.579283 (-0.243462) | 0.341691 / 0.434364 (-0.092673) | 0.392651 / 0.540337 (-0.147686) | 0.529650 / 1.386936 (-0.857286) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c65806b0542996e56825ab46a3ce8f9c07ab0df3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009625 / 0.011353 (-0.001728) | 0.005354 / 0.011008 (-0.005654) | 0.114350 / 0.038508 (0.075842) | 0.086637 / 0.023109 (0.063528) | 0.465381 / 0.275898 (0.189483) | 0.490411 / 0.323480 (0.166931) | 0.006575 / 0.007986 (-0.001411) | 0.004287 / 0.004328 (-0.000041) | 0.093134 / 0.004250 (0.088884) | 0.060209 / 0.037052 (0.023156) | 0.459570 / 0.258489 (0.201080) | 0.523320 / 0.293841 (0.229479) | 0.047943 / 0.128546 (-0.080603) | 0.014764 / 0.075646 (-0.060882) | 0.383887 / 0.419271 (-0.035384) | 0.069864 / 0.043533 (0.026331) | 0.469122 / 0.255139 (0.213983) | 0.509953 / 0.283200 (0.226753) | 0.037800 / 0.141683 (-0.103883) | 1.877589 / 1.452155 (0.425434) | 2.014913 / 1.492716 (0.522197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.309146 / 0.018006 (0.291140) | 0.644390 / 0.000490 (0.643900) | 0.005017 / 0.000200 (0.004817) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032964 / 0.037411 (-0.004447) | 0.103236 / 0.014526 (0.088711) | 0.119950 / 0.176557 (-0.056607) | 0.207674 / 0.737135 (-0.529461) | 0.117278 / 0.296338 (-0.179060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.605464 / 0.215209 
(0.390255) | 6.027805 / 2.077655 (3.950150) | 2.719725 / 1.504120 (1.215605) | 2.262752 / 1.541195 (0.721558) | 2.330310 / 1.468490 (0.861820) | 0.862537 / 4.584777 (-3.722240) | 5.347080 / 3.745712 (1.601368) | 4.792170 / 5.269862 (-0.477691) | 3.103694 / 4.565676 (-1.461983) | 0.103646 / 0.424275 (-0.320629) | 0.009411 / 0.007607 (0.001804) | 0.743052 / 0.226044 (0.517008) | 7.289684 / 2.268929 (5.020755) | 3.436530 / 55.444624 (-52.008094) | 2.722440 / 6.876477 (-4.154036) | 2.952380 / 2.142072 (0.810308) | 1.047688 / 4.805227 (-3.757539) | 0.212724 / 6.500664 (-6.287940) | 0.081473 / 0.075469 (0.006004) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.714437 / 1.841788 (-0.127351) | 24.384330 / 8.074308 (16.310022) | 22.444162 / 10.191392 (12.252770) | 0.226264 / 0.680424 (-0.454160) | 0.030530 / 0.534201 (-0.503671) | 0.473999 / 0.579283 (-0.105284) | 0.575005 / 0.434364 (0.140641) | 0.542789 / 0.540337 (0.002451) | 0.776079 / 1.386936 (-0.610857) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009366 / 0.011353 (-0.001987) | 0.005239 / 0.011008 (-0.005769) | 0.085116 / 0.038508 (0.046608) | 0.089600 / 0.023109 (0.066491) | 0.485778 / 0.275898 (0.209880) | 0.540054 / 0.323480 (0.216574) | 0.006290 / 0.007986 (-0.001695) | 0.004054 / 0.004328 (-0.000274) | 0.083535 / 0.004250 (0.079284) | 0.067200 / 0.037052 (0.030148) | 0.519520 / 0.258489 (0.261031) | 0.544049 / 0.293841 (0.250208) | 0.054300 / 0.128546 (-0.074246) | 0.013650 / 0.075646 (-0.061996) | 0.102515 / 0.419271 (-0.316757) | 0.063054 / 0.043533 (0.019522) | 0.491724 / 0.255139 (0.236585) | 0.547498 / 0.283200 (0.264298) | 0.039266 / 0.141683 (-0.102416) | 1.801226 / 1.452155 (0.349071) | 1.861778 / 1.492716 (0.369061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313009 / 0.018006 (0.295003) | 0.587695 / 0.000490 (0.587205) | 0.004972 / 0.000200 (0.004772) | 0.000110 / 0.000054 
(0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029230 / 0.037411 (-0.008181) | 0.091154 / 0.014526 (0.076628) | 0.110505 / 0.176557 (-0.066052) | 0.164204 / 0.737135 (-0.572932) | 0.107812 / 0.296338 (-0.188526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.610535 / 0.215209 (0.395326) | 6.162517 / 2.077655 (4.084862) | 2.866718 / 1.504120 (1.362598) | 2.542412 / 1.541195 (1.001218) | 2.584136 / 1.468490 (1.115645) | 0.874319 / 4.584777 (-3.710458) | 5.257184 / 3.745712 (1.511472) | 4.705840 / 5.269862 (-0.564022) | 2.971708 / 4.565676 (-1.593969) | 0.099026 / 0.424275 (-0.325249) | 0.009142 / 0.007607 (0.001535) | 0.728660 / 0.226044 (0.502615) | 7.560922 / 2.268929 (5.291994) | 3.439521 / 55.444624 (-52.005103) | 2.854730 / 6.876477 (-4.021746) | 3.088951 / 2.142072 (0.946879) | 0.973621 / 4.805227 (-3.831606) | 0.209792 / 6.500664 (-6.290872) | 0.081107 / 0.075469 (0.005638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716809 / 1.841788 (-0.124978) | 24.386927 / 8.074308 (16.312619) | 20.715524 / 10.191392 (10.524131) | 0.260831 / 0.680424 (-0.419592) | 0.030701 / 0.534201 (-0.503500) | 0.490018 / 0.579283 (-0.089265) | 0.590424 / 0.434364 (0.156060) | 0.589942 / 0.540337 (0.049604) | 0.798094 / 1.386936 (-0.588842) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c0a77dc943de68a17f23f141517028c734c78623 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006592 / 0.011353 (-0.004761) | 0.003880 / 0.011008 (-0.007128) | 0.083761 / 0.038508 (0.045253) | 0.075966 / 0.023109 (0.052857) | 0.315291 / 0.275898 (0.039393) | 0.355920 / 0.323480 (0.032440) | 0.004972 / 0.007986 (-0.003014) | 0.003053 / 0.004328 (-0.001275) | 0.063553 / 0.004250 (0.059302) | 0.050794 / 0.037052 (0.013742) | 0.317681 / 0.258489 (0.059192) | 0.361991 / 0.293841 (0.068150) | 0.028119 / 0.128546 (-0.100427) | 0.008203 / 0.075646 (-0.067443) | 0.271756 / 0.419271 (-0.147516) | 0.046701 / 0.043533 (0.003168) | 0.316520 / 0.255139 (0.061381) | 0.350499 / 0.283200 (0.067300) | 0.022399 / 0.141683 (-0.119284) | 1.416017 / 1.452155 (-0.036138) | 1.503087 / 1.492716 (0.010371) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208250 / 0.018006 (0.190244) | 0.470345 / 0.000490 (0.469856) | 0.003687 / 0.000200 (0.003487) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026163 / 0.037411 (-0.011248) | 0.083315 / 0.014526 (0.068789) | 0.088541 / 0.176557 (-0.088015) | 0.150078 / 0.737135 (-0.587057) | 0.088862 / 0.296338 (-0.207476) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404911 / 0.215209 (0.189702) | 4.059257 / 2.077655 (1.981602) | 1.890987 / 1.504120 (0.386867) | 1.726608 / 1.541195 (0.185413) | 1.767479 / 1.468490 (0.298989) | 0.518826 / 4.584777 (-4.065951) | 3.212145 / 3.745712 (-0.533567) | 3.029933 / 5.269862 (-2.239929) | 2.000203 / 4.565676 (-2.565474) | 0.059631 / 0.424275 (-0.364644) | 0.006707 / 0.007607 (-0.000900) | 0.485741 / 0.226044 (0.259697) | 4.871938 / 2.268929 (2.603010) | 2.418856 / 55.444624 (-53.025769) | 2.084847 / 6.876477 (-4.791630) | 2.207992 / 2.142072 (0.065920) | 0.614354 / 4.805227 (-4.190873) | 0.128932 / 6.500664 (-6.371732) | 0.062342 / 0.075469 (-0.013127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.325792 / 1.841788 (-0.515995) | 19.718995 / 8.074308 (11.644687) | 15.278535 / 10.191392 (5.087143) | 0.146719 / 0.680424 (-0.533705) | 0.017718 / 0.534201 (-0.516483) | 0.335709 / 0.579283 (-0.243574) | 0.378060 / 0.434364 (-0.056304) | 
0.391135 / 0.540337 (-0.149202) | 0.548045 / 1.386936 (-0.838891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006504 / 0.011353 (-0.004849) | 0.003742 / 0.011008 (-0.007266) | 0.064405 / 0.038508 (0.025897) | 0.077618 / 0.023109 (0.054509) | 0.365325 / 0.275898 (0.089427) | 0.408109 / 0.323480 (0.084629) | 0.004909 / 0.007986 (-0.003076) | 0.002972 / 0.004328 (-0.001356) | 0.063933 / 0.004250 (0.059682) | 0.052916 / 0.037052 (0.015863) | 0.370891 / 0.258489 (0.112402) | 0.412134 / 0.293841 (0.118293) | 0.028171 / 0.128546 (-0.100375) | 0.008150 / 0.075646 (-0.067497) | 0.069248 / 0.419271 (-0.350024) | 0.042353 / 0.043533 (-0.001180) | 0.368117 / 0.255139 (0.112978) | 0.397548 / 0.283200 (0.114348) | 0.022967 / 0.141683 (-0.118716) | 1.472740 / 1.452155 (0.020586) | 1.524028 / 1.492716 (0.031311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256854 / 0.018006 (0.238848) | 0.471499 / 0.000490 (0.471009) | 0.009609 / 0.000200 (0.009409) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027978 / 0.037411 (-0.009433) | 0.086741 / 0.014526 (0.072215) | 0.091189 / 0.176557 (-0.085368) | 0.146117 / 0.737135 (-0.591018) | 0.092358 / 0.296338 (-0.203980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426356 / 0.215209 (0.211147) | 4.263782 / 2.077655 (2.186127) | 2.178198 / 1.504120 (0.674078) | 2.015405 / 1.541195 
(0.474211) | 2.055966 / 1.468490 (0.587476) | 0.507531 / 4.584777 (-4.077246) | 3.175967 / 3.745712 (-0.569745) | 3.055697 / 5.269862 (-2.214165) | 1.987663 / 4.565676 (-2.578014) | 0.058452 / 0.424275 (-0.365823) | 0.006944 / 0.007607 (-0.000663) | 0.502534 / 0.226044 (0.276489) | 5.024693 / 2.268929 (2.755765) | 2.754971 / 55.444624 (-52.689653) | 2.470845 / 6.876477 (-4.405632) | 2.698675 / 2.142072 (0.556602) | 0.602357 / 4.805227 (-4.202871) | 0.129490 / 6.500664 (-6.371174) | 0.065127 / 0.075469 (-0.010342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.398487 / 1.841788 (-0.443301) | 19.692279 / 8.074308 (11.617971) | 15.124064 / 10.191392 (4.932672) | 0.148938 / 0.680424 (-0.531486) | 0.017418 / 0.534201 (-0.516783) | 0.340480 / 0.579283 (-0.238803) | 0.377223 / 0.434364 (-0.057141) | 0.405303 / 0.540337 (-0.135034) | 0.548923 / 1.386936 (-0.838013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#58e62af004b6b8b84dcfd897a4bc71637cfa6c3f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006433 / 0.011353 (-0.004920) | 0.004002 / 0.011008 (-0.007006) | 0.084130 / 0.038508 (0.045622) | 0.070628 / 0.023109 (0.047519) | 0.312372 / 0.275898 (0.036474) | 0.343993 / 0.323480 (0.020513) | 0.003936 / 0.007986 (-0.004050) | 0.003336 / 0.004328 (-0.000993) | 0.064715 / 0.004250 (0.060465) | 0.052511 / 0.037052 (0.015458) | 0.314092 / 0.258489 (0.055603) | 0.363152 / 0.293841 (0.069311) | 0.030898 / 0.128546 (-0.097648) | 0.008396 / 0.075646 (-0.067250) | 0.288083 / 0.419271 (-0.131188) | 0.051654 / 0.043533 (0.008122) | 0.315252 / 0.255139 (0.060113) | 0.346756 / 0.283200 (0.063556) | 0.025167 / 0.141683 (-0.116515) | 1.487265 / 1.452155 (0.035110) | 1.557528 / 1.492716 (0.064812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206517 / 0.018006 (0.188510) | 0.458359 / 0.000490 
(0.457869) | 0.003719 / 0.000200 (0.003519) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029631 / 0.037411 (-0.007780) | 0.083856 / 0.014526 (0.069330) | 0.340431 / 0.176557 (0.163875) | 0.153864 / 0.737135 (-0.583271) | 0.095951 / 0.296338 (-0.200388) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379182 / 0.215209 (0.163973) | 3.783396 / 2.077655 (1.705741) | 1.835932 / 1.504120 (0.331813) | 1.667563 / 1.541195 (0.126369) | 1.739309 / 1.468490 (0.270818) | 0.478957 / 4.584777 (-4.105820) | 3.521974 / 3.745712 (-0.223738) | 3.237635 / 5.269862 (-2.032227) | 2.000300 / 4.565676 (-2.565377) | 0.056389 / 0.424275 (-0.367887) | 0.007242 / 0.007607 (-0.000365) | 0.452642 / 0.226044 (0.226598) | 4.524339 / 2.268929 (2.255411) | 2.346210 / 55.444624 (-53.098414) | 1.957196 / 6.876477 (-4.919281) | 2.180051 / 2.142072 (0.037979) | 0.570205 / 4.805227 (-4.235022) | 0.131346 / 6.500664 (-6.369318) | 0.059327 / 0.075469 (-0.016142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244709 / 1.841788 (-0.597079) | 19.566277 / 8.074308 (11.491969) | 14.172598 / 10.191392 (3.981206) | 0.166493 / 0.680424 (-0.513931) | 0.018281 / 0.534201 (-0.515920) | 0.391608 / 0.579283 (-0.187675) | 0.402642 / 0.434364 (-0.031722) | 0.464974 / 0.540337 (-0.075364) | 0.637565 / 1.386936 (-0.749371) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006929 / 0.011353 (-0.004424) | 0.004114 / 0.011008 (-0.006894) | 0.064589 / 0.038508 (0.026081) | 0.083334 / 0.023109 (0.060225) | 0.391280 / 0.275898 (0.115382) | 0.426157 / 0.323480 (0.102678) | 0.005336 / 0.007986 (-0.002650) | 0.003395 / 0.004328 (-0.000934) | 0.064560 / 0.004250 (0.060310) | 0.057094 / 0.037052 (0.020042) | 0.398959 / 0.258489 (0.140470) | 0.432470 / 0.293841 (0.138629) | 0.031412 / 0.128546 (-0.097134) | 0.008670 / 0.075646 (-0.066976) | 0.071249 / 0.419271 (-0.348022) | 0.048934 / 0.043533 (0.005401) | 0.384207 / 0.255139 (0.129068) | 0.407992 / 0.283200 (0.124792) | 0.024492 / 0.141683 (-0.117191) | 1.467788 / 1.452155 (0.015634) | 1.541011 / 1.492716 (0.048295) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279607 / 0.018006 (0.261600) | 0.448899 / 0.000490 (0.448410) | 0.020990 / 0.000200 (0.020790) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030313 / 0.037411 (-0.007099) | 0.089209 / 0.014526 (0.074684) | 0.101024 / 0.176557 (-0.075532) | 0.153468 / 0.737135 (-0.583667) | 0.103219 / 0.296338 (-0.193120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429176 / 0.215209 (0.213967) | 4.302234 / 2.077655 (2.224580) | 2.291103 / 1.504120 (0.786983) | 2.126257 / 1.541195 (0.585062) | 2.207090 / 1.468490 (0.738600) | 0.484643 / 4.584777 (-4.100134) | 3.557429 / 3.745712 (-0.188283) | 3.253804 / 5.269862 (-2.016058) | 2.026087 / 4.565676 (-2.539589) | 0.057793 / 0.424275 (-0.366482) | 0.007761 / 0.007607 (0.000154) | 0.504819 / 0.226044 (0.278775) | 5.046868 / 2.268929 (2.777940) | 2.773149 / 55.444624 (-52.671475) | 2.398036 / 6.876477 (-4.478440) | 2.608094 / 2.142072 (0.466021) | 0.630499 / 4.805227 (-4.174729) | 0.135496 / 6.500664 (-6.365168) | 0.061329 / 0.075469 (-0.014140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327124 / 1.841788 (-0.514664) | 19.889796 / 8.074308 (11.815488) | 14.196100 / 10.191392 (4.004708) | 0.161963 / 0.680424 (-0.518461) | 0.018529 / 0.534201 (-0.515672) | 0.392325 / 0.579283 (-0.186958) | 0.404836 / 0.434364 (-0.029528) | 0.475898 / 0.540337 (-0.064439) | 0.633563 / 1.386936 (-0.753373) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4684fc1032321abf0d494b0c130ea7c82ebda80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006390 / 0.011353 (-0.004963) | 0.003683 / 0.011008 (-0.007325) | 0.081274 / 0.038508 (0.042766) | 0.062193 / 0.023109 (0.039083) | 0.355360 / 0.275898 (0.079462) | 0.396471 / 0.323480 (0.072992) | 0.003569 / 0.007986 (-0.004416) | 0.003928 / 0.004328 (-0.000400) | 0.062292 / 0.004250 (0.058041) | 0.049700 / 0.037052 (0.012648) | 0.354604 / 0.258489 (0.096115) | 0.419436 / 0.293841 (0.125595) | 0.027151 / 0.128546 (-0.101395) | 0.007954 / 0.075646 (-0.067692) | 0.262231 / 0.419271 (-0.157041) | 0.045483 / 0.043533 (0.001950) | 0.354285 / 0.255139 (0.099146) | 0.385178 / 0.283200 (0.101978) | 0.021183 / 0.141683 (-0.120500) | 1.420785 / 1.452155 (-0.031370) | 1.531545 / 1.492716 (0.038829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202298 / 0.018006 (0.184292) | 0.442172 / 0.000490 (0.441683) | 0.003565 / 0.000200 (0.003366) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024229 / 0.037411 (-0.013183) | 0.074352 / 0.014526 (0.059826) | 0.087530 / 0.176557 (-0.089026) | 0.146478 / 0.737135 (-0.590658) | 0.085145 / 0.296338 (-0.211194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388395 / 0.215209 
(0.173186) | 3.877623 / 2.077655 (1.799968) | 1.882444 / 1.504120 (0.378324) | 1.707871 / 1.541195 (0.166676) | 1.772132 / 1.468490 (0.303642) | 0.491937 / 4.584777 (-4.092840) | 3.057947 / 3.745712 (-0.687765) | 2.822390 / 5.269862 (-2.447471) | 1.879719 / 4.565676 (-2.685957) | 0.056830 / 0.424275 (-0.367445) | 0.006415 / 0.007607 (-0.001192) | 0.458945 / 0.226044 (0.232900) | 4.594502 / 2.268929 (2.325574) | 2.339677 / 55.444624 (-53.104948) | 1.983750 / 6.876477 (-4.892727) | 2.173792 / 2.142072 (0.031719) | 0.580390 / 4.805227 (-4.224838) | 0.124568 / 6.500664 (-6.376096) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265108 / 1.841788 (-0.576680) | 18.415254 / 8.074308 (10.340946) | 13.963829 / 10.191392 (3.772437) | 0.148926 / 0.680424 (-0.531498) | 0.016919 / 0.534201 (-0.517282) | 0.331082 / 0.579283 (-0.248201) | 0.345777 / 0.434364 (-0.088587) | 0.381123 / 0.540337 (-0.159214) | 0.543297 / 1.386936 (-0.843639) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003717 / 0.011008 (-0.007291) | 0.063653 / 0.038508 (0.025144) | 0.063723 / 0.023109 (0.040613) | 0.360233 / 0.275898 (0.084335) | 0.398353 / 0.323480 (0.074873) | 0.004696 / 0.007986 (-0.003290) | 0.002876 / 0.004328 (-0.001452) | 0.063057 / 0.004250 (0.058806) | 0.050258 / 0.037052 (0.013206) | 0.362946 / 0.258489 (0.104457) | 0.403260 / 0.293841 (0.109419) | 0.027738 / 0.128546 (-0.100809) | 0.008025 / 0.075646 (-0.067621) | 0.068781 / 0.419271 (-0.350491) | 0.042114 / 0.043533 (-0.001419) | 0.363546 / 0.255139 (0.108407) | 0.385640 / 0.283200 (0.102440) | 0.021757 / 0.141683 (-0.119926) | 1.482364 / 1.452155 (0.030209) | 1.571859 / 1.492716 (0.079143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235628 / 0.018006 (0.217622) | 0.439909 / 0.000490 (0.439419) | 0.003070 / 0.000200 (0.002870) | 0.000075 / 0.000054 
(0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027045 / 0.037411 (-0.010366) | 0.080413 / 0.014526 (0.065887) | 0.088953 / 0.176557 (-0.087603) | 0.141907 / 0.737135 (-0.595228) | 0.090604 / 0.296338 (-0.205735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423250 / 0.215209 (0.208041) | 4.216510 / 2.077655 (2.138855) | 2.162946 / 1.504120 (0.658826) | 2.014561 / 1.541195 (0.473366) | 2.086347 / 1.468490 (0.617857) | 0.496591 / 4.584777 (-4.088186) | 3.089594 / 3.745712 (-0.656118) | 2.853640 / 5.269862 (-2.416221) | 1.878149 / 4.565676 (-2.687527) | 0.056914 / 0.424275 (-0.367361) | 0.006762 / 0.007607 (-0.000845) | 0.493470 / 0.226044 (0.267426) | 4.929966 / 2.268929 (2.661037) | 2.640885 / 55.444624 (-52.803739) | 2.335950 / 6.876477 (-4.540527) | 2.565866 / 2.142072 (0.423793) | 0.585433 / 4.805227 (-4.219794) | 0.124969 / 6.500664 (-6.375695) | 0.062361 / 0.075469 (-0.013108) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369144 / 1.841788 (-0.472644) | 19.037582 / 8.074308 (10.963274) | 14.069141 / 10.191392 (3.877749) | 0.146469 / 0.680424 (-0.533954) | 0.016911 / 0.534201 (-0.517290) | 0.336802 / 0.579283 (-0.242482) | 0.336411 / 0.434364 (-0.097953) | 0.392360 / 0.540337 (-0.147977) | 0.536078 / 1.386936 (-0.850858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12cfc1196e62847e2e8239fbd727a02cbc86ddec \"CML watermark\")\n" ]
2023-08-07T15:41:25
2023-08-08T15:24:59
2023-08-08T15:16:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6127", "html_url": "https://github.com/huggingface/datasets/pull/6127", "diff_url": "https://github.com/huggingface/datasets/pull/6127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6127.patch", "merged_at": "2023-08-08T15:16:22" }
This PR fixes 3 authentication issues: - Fix authentication when passing `token`. - Fix authentication in `Audio.decode_example` and `Image.decode_example`. - Fix authentication to resolve `data_files` in repositories without a script. This PR also fixes our CI so that we properly test when passing `token` and we do not use the token stored in `HfFolder`. Fix #6126. ## Details ### Fix authentication when passing `token` See c0a77dc943de68a17f23f141517028c734c78623 The root issue was caused when the `token` was set in an already instantiated `DownloadConfig` and thus not propagated to `self._storage_options`: ```python download_config.token = token ``` As this usage pattern is very common, the fix consists in overriding `DownloadConfig.__setattr__`. This fixes authentication issues in the following functions: - `load_dataset` and `load_dataset_builder` - `Dataset.push_to_hub` and `DatasetDict.push_to_hub` - `inspect.get_dataset_config_info`, `inspect.get_dataset_infos` and `inspect.get_dataset_split_names` ### Fix authentication in `Audio.decode_example` and `Image.decode_example`. See: 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f The `token` was not set because the `repo_id` was mistakenly parsed from an HTTP URL (`"http://..."`) instead of from an `HfFileSystem` URL (`"hf://"`). ### Fix authentication to resolve `data_files` in repositories without a script See: e4684fc1032321abf0d494b0c130ea7c82ebda80 This is fixed by passing `download_config` to the function `create_builder_configs_from_metadata_configs`
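To make the `__setattr__` fix concrete, here is a minimal sketch of the idea, using an illustrative stand-in class rather than the real `datasets.DownloadConfig` (class and attribute names below are assumptions, not the actual implementation):

```python
class DownloadConfigSketch:
    """Illustrative stand-in for datasets.DownloadConfig (not the real class)."""

    def __init__(self, token=None):
        self.storage_options = {}
        self.token = token

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        # Propagate the token into the filesystem storage options whenever it is
        # (re)assigned, so `download_config.token = token` is not silently ignored.
        if name == "token" and value is not None:
            self.storage_options.setdefault("hf", {})["token"] = value


config = DownloadConfigSketch()
config.token = "<MY-TOKEN>"  # now also reflected in config.storage_options["hf"]["token"]
```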
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6127/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6126/comments
https://api.github.com/repos/huggingface/datasets/issues/6126/events
https://github.com/huggingface/datasets/issues/6126
1,839,675,320
I_kwDODunzps5tpze4
6,126
Private datasets do not load when passing token
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Our CI did not catch this issue because with current implementation, stored token in `HfFolder` (which always exists) is used by default.", "I can confirm this and have the same problem (and just went almost crazy because I couldn't figure out the source of this problem because on another computer everything worked well even with `DownloadMode.FORCE_REDOWNLOAD`).", "We are planning to do a patch release today, after the merge of the fix:\r\n- #6127\r\n\r\nIn the meantime, the problem can be circumvented by passing `download_config` instead:\r\n```python\r\nfrom datasets import DownloadConfig, load_dataset\r\n\r\nload_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n``` ", "> We are planning to do a patch release today, after the merge of the fix:\r\n> \r\n> * [Fix authentication issues #6127](https://github.com/huggingface/datasets/pull/6127)\r\n> \r\n> \r\n> In the meantime, the problem can be circumvented by passing `download_config` instead:\r\n> \r\n> ```python\r\n> from datasets import DownloadConfig, load_dataset\r\n> \r\n> load_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n> ```\r\n\r\nThis did not work for me (there was some other error with the split being an unexpected size 0). Downgrading to 2.13 fixed it...." ]
2023-08-07T15:06:47
2023-08-08T15:16:23
2023-08-08T15:16:23
MEMBER
null
null
null
### Describe the bug Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`. This is a non-planned backward incompatible breaking change. Note that private datasets do load if instead `download_config` is passed: ```python from datasets import DownloadConfig, load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>")) ds ``` gives ``` Dataset({ features: ['text'], num_rows: 4 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") ``` gives ``` --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) [<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1793 download_config = download_config.copy() if download_config else DownloadConfig() 1794 download_config.storage_options.update(storage_options) -> 1795 dataset_module = dataset_module_factory( 1796 path, 1797 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1485 if isinstance(e1, EmptyDatasetError): -> 1486 raise e1 from None 1487 if isinstance(e1, FileNotFoundError): 1488 raise FileNotFoundError( [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1474 download_config=download_config, 1475 download_mode=download_mode, -> 1476 ).get_module() 1477 except ( 1478 Exception [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self) 1030 sanitize_patterns(self.data_files) 1031 if self.data_files is not None -> 1032 else get_data_patterns(base_path, download_config=self.download_config) 1033 ) 1034 data_files = DataFilesDict.from_patterns( [/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config) 457 return _get_data_files_patterns(resolver) 458 except FileNotFoundError: --> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 460 461 EmptyDatasetError: The directory at 
hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6126/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6125/comments
https://api.github.com/repos/huggingface/datasets/issues/6125/events
https://github.com/huggingface/datasets/issues/6125
1,837,980,986
I_kwDODunzps5tjV06
6,125
Reinforcement Learning and Robotics are not task categories in HF datasets metadata
{ "login": "StoneT2000", "id": 35373228, "node_id": "MDQ6VXNlcjM1MzczMjI4", "avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StoneT2000", "html_url": "https://github.com/StoneT2000", "followers_url": "https://api.github.com/users/StoneT2000/followers", "following_url": "https://api.github.com/users/StoneT2000/following{/other_user}", "gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions", "organizations_url": "https://api.github.com/users/StoneT2000/orgs", "repos_url": "https://api.github.com/users/StoneT2000/repos", "events_url": "https://api.github.com/users/StoneT2000/events{/privacy}", "received_events_url": "https://api.github.com/users/StoneT2000/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-05T23:59:42
2023-08-05T23:59:42
null
NONE
null
null
null
### Describe the bug In https://huggingface.co/models there are task categories for RL and robotics, but there are none in https://huggingface.co/datasets. Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those 2 tags. Moreover, we see some older datasets that do have these tags, but we can't seem to add them ourselves. ### Steps to reproduce the bug 1. Create a new dataset on Hugging Face 2. Try to type reinforcement-learning or robotics into the task categories; it does not allow you to commit ### Expected behavior Expected to be able to add RL and robotics as task categories, as some previous datasets have these tags. ### Environment info N/A
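For reference, the metadata the lab would like to attach could be expressed programmatically roughly as below; `DatasetCardData` from `huggingface_hub` is used only as an illustration, and the two values shown are exactly the ones the Hub rejects for datasets at the time of this issue:

```python
from huggingface_hub import DatasetCardData

# Hypothetical card metadata the reporter would like to be able to commit.
card_data = DatasetCardData(task_categories=["reinforcement-learning", "robotics"])
print(card_data.to_yaml())  # YAML front matter that would go into the dataset README
```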
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6125/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6124/comments
https://api.github.com/repos/huggingface/datasets/issues/6124/events
https://github.com/huggingface/datasets/issues/6124
1,837,868,112
I_kwDODunzps5ti6RQ
6,124
Datasets crashing runs due to KeyError
{ "login": "conceptofmind", "id": 25208228, "node_id": "MDQ6VXNlcjI1MjA4MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conceptofmind", "html_url": "https://github.com/conceptofmind", "followers_url": "https://api.github.com/users/conceptofmind/followers", "following_url": "https://api.github.com/users/conceptofmind/following{/other_user}", "gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}", "starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions", "organizations_url": "https://api.github.com/users/conceptofmind/orgs", "repos_url": "https://api.github.com/users/conceptofmind/repos", "events_url": "https://api.github.com/users/conceptofmind/events{/privacy}", "received_events_url": "https://api.github.com/users/conceptofmind/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-05T17:48:56
2023-08-05T17:48:56
null
NONE
null
null
null
### Describe the bug Hi all, I have been running into a pretty persistent issue recently when trying to load datasets. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` I receive a KeyError which crashes the runs. ``` Traceback (most recent call last): main() train_dataset = load_dataset( ^^^^^^^^^^^^^ builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ dataset_module = dataset_module_factory( ^^^^^^^^^^^^^^^^^^^^^^^ raise e1 from None ).get_module() ^^^^^^^^^^^^ else get_data_patterns(base_path, download_config=self.download_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ return _get_data_files_patterns(resolver) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ data_files = pattern_resolver(pattern) ^^^^^^^^^^^^^^^^^^^^^^^^^ fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)] ^^^^^^^^^^^^^^ allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs): listing = self.ls(path, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ "last_modified": parse_datetime(tree_item["lastCommit"]["date"]), ~~~~~~~~~^^^^^^^^^^^^^^ KeyError: 'lastCommit' ``` Any help would be greatly appreciated. Thank you, Enrico ### Steps to reproduce the bug Load the dataset from the Huggingface hub. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` ### Expected behavior Loads the dataset. ### Environment info datasets-2.14.3 CUDA 11.8 Python 3.11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6124/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6123/comments
https://api.github.com/repos/huggingface/datasets/issues/6123/events
https://github.com/huggingface/datasets/issues/6123
1,837,789,294
I_kwDODunzps5tinBu
6,123
Inaccurate Bounding Boxes in "wildreceipt" Dataset
{ "login": "HamzaGbada", "id": 50714796, "node_id": "MDQ6VXNlcjUwNzE0Nzk2", "avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HamzaGbada", "html_url": "https://github.com/HamzaGbada", "followers_url": "https://api.github.com/users/HamzaGbada/followers", "following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}", "gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}", "starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions", "organizations_url": "https://api.github.com/users/HamzaGbada/orgs", "repos_url": "https://api.github.com/users/HamzaGbada/repos", "events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}", "received_events_url": "https://api.github.com/users/HamzaGbada/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-05T14:34:13
2023-08-06T13:27:25
null
NONE
null
null
null
### Describe the bug I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset. To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face: **Example 1:** ![image](https://github.com/huggingface/datasets/assets/50714796/7a6604d2-899d-4102-a008-1a28c90698f1) ![image](https://github.com/huggingface/datasets/assets/50714796/eba458c7-d3af-4868-a520-8b683aa96f66) ![image](https://github.com/huggingface/datasets/assets/50714796/9f394891-5f5b-46f7-8e52-071b724aedab) **Example 2:** ![image](https://github.com/huggingface/datasets/assets/50714796/a2b2a8d3-124e-4990-b64a-5133cf4be2fe) ![image](https://github.com/huggingface/datasets/assets/50714796/6ee25642-35aa-40ad-ac1e-899d33be90df) ![image](https://github.com/huggingface/datasets/assets/50714796/5e42ff91-9fc4-4520-8803-0e225656f96c) It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar). This inconsistency in bounding box accuracy warrants investigation and rectification for maintaining the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated. ### Steps to reproduce the bug ```python import matplotlib.pyplot as plt from datasets import load_dataset # Define functions to convert bounding box formats def convert_format1(box): x, y, w, h = box x2, y2 = x + w, y + h return [x, y, x2, y2] def convert_format2(box): x1, y1, x2, y2 = box return [x1, y1, x2, y2] def plot_cropped_image(image, box, title): cropped_image = image.crop(box) plt.imshow(cropped_image) plt.title(title) plt.axis('off') plt.savefig(title+'.png') plt.show() doc_index = 1 word_index = 3 dataset = load_dataset("Theivaprakasham/wildreceipt")['train'] bbox_hugging_face = dataset[doc_index]['bboxes'][word_index] text_unit_face = dataset[doc_index]['words'][word_index] common_box_hugface_1 = convert_format1(bbox_hugging_face) common_box_hugface_2 = convert_format2(bbox_hugging_face) plot_cropped_image(image_hugging, common_box_hugface_1, f'Hugging Face Bouding boxes (x,y,w,h format) \n its associated text unit: {text_unit_face}') plot_cropped_image(image_hugging, common_box_hugface_2, f'Hugging Face Bouding boxes (x1,y1,x2, y2 format) \n its associated text unit: {text_unit_face}') ``` ### Expected behavior The bounding boxes generated by the "wildreceipt" dataset in HuggingFace implementation loading commands should accurately match the actual labels and bounding boxes of the dataset. ### Environment info - Python version: 3.8 - Hugging Face datasets version: 2.14.2 - Dataset file taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6123/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6122/comments
https://api.github.com/repos/huggingface/datasets/issues/6122/events
https://github.com/huggingface/datasets/issues/6122
1,837,335,721
I_kwDODunzps5tg4Sp
6,122
Upload README via `push_to_hub`
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-08-04T21:00:27
2023-08-04T21:01:19
null
NONE
null
null
null
### Feature request `push_to_hub` now allows users to upload datasets programmatically. However, based on the latest doc, we still need to open the dataset page to add the README file manually. That said, I did discover the snippet that initializes a README for every `push_to_hub`: ``` dataset_card = ( DatasetCard( "---\n" + str(dataset_card_data) + "\n---\n" + f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)' ) if dataset_card is None else dataset_card ) HfApi(endpoint=config.HF_ENDPOINT).upload_file( path_or_fileobj=str(dataset_card).encode(), path_in_repo="README.md", repo_id=repo_id, token=token, repo_type="dataset", revision=branch, ) ``` So, if we can enable `push_to_hub` to upload a README file we provide ourselves instead of the auto-generated one, it can save a ton of time and will definitely alleviate the current "lack-of-dataset-card" situation. ### Motivation As elaborated above. ### Your contribution I might be able to make a PR.
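In the meantime, a user-side workaround sketch with `huggingface_hub` would be to push a hand-written card right after `push_to_hub`; the repo id and card text below are made up for illustration:

```python
from huggingface_hub import DatasetCard

# Hypothetical repo id and card text, shown only to illustrate the workaround.
repo_id = "user/my-dataset"
card_text = """---
license: mit
---
# My dataset

Hand-written description instead of the auto-generated stub.
"""

card = DatasetCard(card_text)
card.push_to_hub(repo_id, repo_type="dataset")  # overwrites the README.md created by push_to_hub
```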
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6122/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6121/comments
https://api.github.com/repos/huggingface/datasets/issues/6121/events
https://github.com/huggingface/datasets/pull/6121
1,836,761,712
PR_kwDODunzps5XMsWd
6,121
Small typo in the code example of create imagefolder dataset
{ "login": "WangXin93", "id": 19688994, "node_id": "MDQ6VXNlcjE5Njg4OTk0", "avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangXin93", "html_url": "https://github.com/WangXin93", "followers_url": "https://api.github.com/users/WangXin93/followers", "following_url": "https://api.github.com/users/WangXin93/following{/other_user}", "gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions", "organizations_url": "https://api.github.com/users/WangXin93/orgs", "repos_url": "https://api.github.com/users/WangXin93/repos", "events_url": "https://api.github.com/users/WangXin93/events{/privacy}", "received_events_url": "https://api.github.com/users/WangXin93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin" ]
2023-08-04T13:36:59
2023-08-04T13:45:32
2023-08-04T13:41:43
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6121", "html_url": "https://github.com/huggingface/datasets/pull/6121", "diff_url": "https://github.com/huggingface/datasets/pull/6121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6121.patch", "merged_at": null }
Fix typo in the code example of loading an imagefolder dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6121/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6120/comments
https://api.github.com/repos/huggingface/datasets/issues/6120/events
https://github.com/huggingface/datasets/issues/6120
1,836,026,938
I_kwDODunzps5tb4w6
6,120
Lookahead streaming support?
{ "login": "PicoCreator", "id": 17175484, "node_id": "MDQ6VXNlcjE3MTc1NDg0", "avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PicoCreator", "html_url": "https://github.com/PicoCreator", "followers_url": "https://api.github.com/users/PicoCreator/followers", "following_url": "https://api.github.com/users/PicoCreator/following{/other_user}", "gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}", "starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions", "organizations_url": "https://api.github.com/users/PicoCreator/orgs", "repos_url": "https://api.github.com/users/PicoCreator/repos", "events_url": "https://api.github.com/users/PicoCreator/events{/privacy}", "received_events_url": "https://api.github.com/users/PicoCreator/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-08-04T04:01:52
2023-08-04T04:02:04
null
NONE
null
null
null
### Feature request From what I understand, a streaming dataset currently pulls the data and processes it as it is requested. This can introduce significant latency when data is loaded into the training process, since we need to wait for each segment. While the delays might be dataset-specific (or even mapping-instruction/tokenizer-specific), would it be possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance what the next few data samples will be, we could fetch them while the current set is being trained on. With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the latency involved in waiting for the dataset to be ready between batches. ### Motivation Faster streaming performance while training over extra-large, TB-sized datasets. ### Your contribution I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported.
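Purely as an illustration of the requested behaviour (nothing like this exists in `datasets` today), a background-thread prefetcher over any iterable could look roughly like this; the function name and buffer size are arbitrary:

```python
import queue
import threading


def prefetch(iterable, lookahead=64):
    """Yield items from `iterable`, fetching up to `lookahead` items ahead in a background thread."""
    buffer = queue.Queue(maxsize=lookahead)
    sentinel = object()

    def producer():
        for item in iterable:
            buffer.put(item)
        buffer.put(sentinel)  # signal exhaustion to the consumer

    threading.Thread(target=producer, daemon=True).start()
    while (item := buffer.get()) is not sentinel:
        yield item


# e.g. for batch in prefetch(iter(streaming_dataset), lookahead=128): ...
```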
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6120/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6119/comments
https://api.github.com/repos/huggingface/datasets/issues/6119/events
https://github.com/huggingface/datasets/pull/6119
1,835,996,350
PR_kwDODunzps5XKI19
6,119
[Docs] Add description of `select_columns` to guide
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6119). All of your documentation changes will be reflected on that endpoint." ]
2023-08-04T03:13:30
2023-08-04T23:15:51
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6119", "html_url": "https://github.com/huggingface/datasets/pull/6119", "diff_url": "https://github.com/huggingface/datasets/pull/6119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6119.patch", "merged_at": null }
Closes #6116
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6119/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6118/comments
https://api.github.com/repos/huggingface/datasets/issues/6118/events
https://github.com/huggingface/datasets/issues/6118
1,835,940,417
I_kwDODunzps5tbjpB
6,118
IterableDataset.from_generator() fails with pickle error when provided a generator or iterator
{ "login": "finkga", "id": 1281051, "node_id": "MDQ6VXNlcjEyODEwNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finkga", "html_url": "https://github.com/finkga", "followers_url": "https://api.github.com/users/finkga/followers", "following_url": "https://api.github.com/users/finkga/following{/other_user}", "gists_url": "https://api.github.com/users/finkga/gists{/gist_id}", "starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finkga/subscriptions", "organizations_url": "https://api.github.com/users/finkga/orgs", "repos_url": "https://api.github.com/users/finkga/repos", "events_url": "https://api.github.com/users/finkga/events{/privacy}", "received_events_url": "https://api.github.com/users/finkga/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-04T01:45:04
2023-08-04T01:45:04
null
NONE
null
null
null
### Describe the bug **Description** Providing a generator in an instantiation of IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the generator argument is supplied with a generator. **Code example** ``` def line_generator(files: List[Path]): if isinstance(files, str): files = [Path(files)] for file in files: if isinstance(file, str): file = Path(file) yield from open(file,'r').readlines() ... model_training_files = ['file1.txt', 'file2.txt', 'file3.txt'] train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files)) ``` **Traceback** Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__ self.gen.throw(type, value, traceback) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields yield File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps dump(obj, file) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump Pickler(file, recurse=True).dump(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump StockPickler.dump(self, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump self.save(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems save(v) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save rv = reduce(self.proto) TypeError: cannot pickle 'generator' object ### Steps to reproduce the bug 1. Create a set of text files to iterate over. 2. Create a generator that returns the lines in each file until all files are exhausted. 3. Instantiate the dataset over the generator by instantiating an IterableDataset.from_generator(). 4. Wait for the explosion. ### Expected behavior I would expect that since the function claims to accept a generator that there would be no crash. 
Instead, I would expect the dataset to return all the lines in the files as queued up in the `line_generator()` function. ### Environment info datasets.__version__ == '2.13.1' Python 3.9.6 Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
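For context, `IterableDataset.from_generator` expects the generator *function* (a callable) rather than an already-created generator object, with its arguments passed via `gen_kwargs`; a hedged rewrite of the reporter's call, reusing their `line_generator` and file list, would be:

```python
from datasets import IterableDataset

# Pass the callable plus its arguments; only the function (not a live generator) gets pickled.
# Note: from_generator generally expects the generator to yield dict examples,
# e.g. {"text": line}, rather than bare strings.
train_dataset = IterableDataset.from_generator(
    line_generator,
    gen_kwargs={"files": model_training_files},
)
```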
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6118/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6117/comments
https://api.github.com/repos/huggingface/datasets/issues/6117/events
https://github.com/huggingface/datasets/pull/6117
1,835,213,848
PR_kwDODunzps5XHktw
6,117
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012516 / 0.011353 (0.001163) | 0.004725 / 0.011008 (-0.006283) | 0.112245 / 0.038508 (0.073736) | 0.079146 / 0.023109 (0.056037) | 0.386415 / 0.275898 (0.110517) | 0.420441 / 0.323480 (0.096961) | 0.005682 / 0.007986 (-0.002304) | 0.004169 / 0.004328 (-0.000160) | 0.077847 / 0.004250 (0.073597) | 0.055763 / 0.037052 (0.018711) | 0.385529 / 0.258489 (0.127040) | 0.422711 / 0.293841 (0.128870) | 0.047212 / 0.128546 (-0.081334) | 0.013711 / 0.075646 (-0.061935) | 0.342856 / 0.419271 (-0.076416) | 0.066788 / 0.043533 (0.023255) | 0.380728 / 0.255139 (0.125589) | 0.416241 / 0.283200 (0.133041) | 0.034676 / 0.141683 (-0.107007) | 1.679661 / 1.452155 (0.227506) | 1.838014 / 1.492716 (0.345297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219556 / 0.018006 (0.201550) | 0.524728 / 0.000490 (0.524238) | 0.005045 / 0.000200 (0.004845) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.085937 / 0.014526 (0.071412) | 0.099245 / 0.176557 (-0.077311) | 0.158995 / 0.737135 (-0.578141) | 0.101504 / 0.296338 (-0.194835) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.582200 / 0.215209 (0.366991) | 5.794340 / 2.077655 (3.716685) | 2.473635 / 1.504120 (0.969515) | 2.168135 / 1.541195 (0.626941) | 2.215886 / 1.468490 (0.747396) | 0.855599 / 4.584777 (-3.729178) | 5.003067 / 3.745712 (1.257354) | 4.503566 / 5.269862 (-0.766295) | 2.912248 / 4.565676 (-1.653428) | 0.103267 / 0.424275 (-0.321008) | 0.012114 / 0.007607 (0.004507) | 0.712240 / 0.226044 (0.486196) | 7.131946 / 2.268929 (4.863017) | 3.280052 / 55.444624 (-52.164573) | 2.583472 / 6.876477 (-4.293004) | 2.820758 / 2.142072 (0.678686) | 1.132097 / 4.805227 (-3.673131) | 0.232191 / 6.500664 (-6.268473) | 0.082966 / 0.075469 (0.007497) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581125 / 1.841788 (-0.260662) | 22.723878 / 8.074308 (14.649570) | 19.969347 / 10.191392 (9.777955) | 0.234365 / 0.680424 (-0.446059) | 0.030245 / 0.534201 (-0.503956) | 0.470843 / 0.579283 (-0.108440) | 0.558069 / 0.434364 (0.123705) | 0.534878 / 0.540337 (-0.005460) | 0.801025 / 1.386936 (-0.585911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008524 / 0.011353 (-0.002829) | 0.005083 / 0.011008 (-0.005925) | 0.078054 / 0.038508 (0.039546) | 0.082025 / 0.023109 (0.058915) | 0.458027 / 0.275898 (0.182129) | 0.498232 / 0.323480 (0.174752) | 0.005938 / 0.007986 (-0.002048) | 0.003776 / 0.004328 (-0.000553) | 0.080413 / 0.004250 (0.076163) | 0.060485 / 0.037052 (0.023433) | 0.462816 / 0.258489 (0.204327) | 0.513970 / 0.293841 (0.220129) | 0.047574 / 0.128546 (-0.080973) | 0.013424 / 0.075646 (-0.062222) | 0.087707 / 0.419271 (-0.331565) | 0.065007 / 0.043533 (0.021474) | 0.465844 / 0.255139 (0.210705) | 0.498474 / 0.283200 (0.215274) | 0.033518 / 0.141683 (-0.108164) | 1.737507 / 1.452155 (0.285352) | 1.848291 / 1.492716 (0.355574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316710 / 0.018006 (0.298703) | 0.504415 / 0.000490 (0.503925) | 0.042128 / 0.000200 
(0.041928) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032097 / 0.037411 (-0.005314) | 0.099371 / 0.014526 (0.084845) | 0.109311 / 0.176557 (-0.067246) | 0.177373 / 0.737135 (-0.559762) | 0.110753 / 0.296338 (-0.185585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688060 / 0.215209 (0.472851) | 6.255219 / 2.077655 (4.177564) | 2.696845 / 1.504120 (1.192725) | 2.395424 / 1.541195 (0.854230) | 2.414870 / 1.468490 (0.946380) | 0.865704 / 4.584777 (-3.719073) | 5.086828 / 3.745712 (1.341116) | 4.648107 / 5.269862 (-0.621754) | 3.091119 / 4.565676 (-1.474558) | 0.101787 / 0.424275 (-0.322489) | 0.008829 / 0.007607 (0.001222) | 0.772398 / 0.226044 (0.546354) | 7.700366 / 2.268929 (5.431438) | 3.608632 / 55.444624 (-51.835992) | 2.923309 / 6.876477 (-3.953168) | 2.952141 / 2.142072 (0.810069) | 1.093006 / 4.805227 (-3.712221) | 0.224363 / 6.500664 (-6.276301) | 0.074927 / 0.075469 (-0.000542) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638414 / 1.841788 (-0.203374) | 23.486781 / 8.074308 (15.412473) | 21.129104 / 10.191392 (10.937712) | 0.259955 / 0.680424 (-0.420469) | 0.027305 / 0.534201 (-0.506895) | 0.464448 / 0.579283 (-0.114835) | 0.553737 / 0.434364 (0.119373) | 0.571318 / 0.540337 (0.030981) | 0.772917 / 1.386936 (-0.614019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ec5ee9e78b464364796651d995823c7ecb0f951 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009093 / 0.011353 (-0.002260) | 0.005283 / 0.011008 (-0.005725) | 0.112299 / 0.038508 (0.073791) | 0.081341 / 0.023109 (0.058232) | 0.363799 / 0.275898 (0.087901) | 0.409261 / 0.323480 (0.085781) | 0.006400 / 0.007986 (-0.001586) | 0.003965 / 0.004328 (-0.000363) | 0.074389 / 0.004250 (0.070139) | 0.060654 / 0.037052 (0.023602) | 0.391046 / 0.258489 (0.132557) | 0.430514 / 0.293841 (0.136673) | 0.054900 / 0.128546 (-0.073646) | 0.017972 / 0.075646 (-0.057675) | 0.410875 / 0.419271 (-0.008396) | 0.067405 / 0.043533 (0.023873) | 0.371468 / 0.255139 (0.116329) | 0.435061 / 0.283200 (0.151861) | 0.038063 / 0.141683 (-0.103620) | 1.733509 / 1.452155 (0.281354) | 1.833899 / 1.492716 (0.341182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243230 / 0.018006 (0.225224) | 0.605636 / 0.000490 (0.605146) | 0.004890 / 0.000200 (0.004690) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027624 / 0.037411 (-0.009787) | 0.084799 / 0.014526 (0.070273) | 0.104405 / 0.176557 (-0.072152) | 0.165383 / 0.737135 (-0.571752) | 0.102083 / 0.296338 (-0.194255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578334 / 0.215209 (0.363125) | 5.369520 / 2.077655 (3.291866) | 2.294174 / 1.504120 (0.790055) | 2.054195 / 1.541195 (0.513000) | 2.007304 / 1.468490 (0.538814) | 0.839283 / 4.584777 (-3.745494) | 5.262288 / 3.745712 (1.516576) | 4.363346 / 5.269862 (-0.906516) | 2.854903 / 4.565676 (-1.710773) | 0.096975 / 0.424275 (-0.327300) | 0.008237 / 0.007607 (0.000630) | 0.646746 / 0.226044 (0.420702) | 6.250621 / 2.268929 (3.981693) | 2.900377 / 55.444624 (-52.544247) | 2.283238 / 6.876477 (-4.593239) | 2.443785 / 2.142072 (0.301713) | 0.991719 / 4.805227 (-3.813508) | 0.189755 / 6.500664 (-6.310909) | 0.067906 / 0.075469 (-0.007563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.515563 / 1.841788 (-0.326225) | 21.956499 / 8.074308 (13.882191) | 19.161750 / 10.191392 (8.970358) | 0.238199 / 0.680424 (-0.442225) | 0.026771 / 0.534201 (-0.507430) | 0.450195 / 0.579283 (-0.129088) | 0.585168 / 
0.434364 (0.150804) | 0.522945 / 0.540337 (-0.017393) | 0.776244 / 1.386936 (-0.610693) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.005021 / 0.011008 (-0.005988) | 0.087308 / 0.038508 (0.048800) | 0.077760 / 0.023109 (0.054650) | 0.425313 / 0.275898 (0.149415) | 0.451470 / 0.323480 (0.127990) | 0.006848 / 0.007986 (-0.001137) | 0.004812 / 0.004328 (0.000484) | 0.071198 / 0.004250 (0.066947) | 0.058325 / 0.037052 (0.021273) | 0.427411 / 0.258489 (0.168922) | 0.466069 / 0.293841 (0.172228) | 0.048686 / 0.128546 (-0.079861) | 0.011841 / 0.075646 (-0.063806) | 0.086225 / 0.419271 (-0.333047) | 0.060500 / 0.043533 (0.016967) | 0.435580 / 0.255139 (0.180441) | 0.456919 / 0.283200 (0.173719) | 0.035094 / 0.141683 (-0.106588) | 1.582805 / 1.452155 (0.130650) | 1.717838 / 1.492716 (0.225122) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283967 / 0.018006 (0.265960) | 0.517496 / 0.000490 (0.517006) | 0.014747 / 0.000200 (0.014547) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083835 / 0.014526 (0.069309) | 0.099157 / 0.176557 (-0.077400) | 0.173210 / 0.737135 (-0.563925) | 0.094212 / 0.296338 (-0.202127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535720 / 0.215209 (0.320511) | 5.273730 / 2.077655 (3.196075) | 2.422560 / 1.504120 (0.918440) | 
2.131416 / 1.541195 (0.590222) | 2.192000 / 1.468490 (0.723510) | 0.708469 / 4.584777 (-3.876308) | 4.758092 / 3.745712 (1.012380) | 3.940729 / 5.269862 (-1.329133) | 2.553093 / 4.565676 (-2.012583) | 0.084895 / 0.424275 (-0.339380) | 0.008730 / 0.007607 (0.001123) | 0.646975 / 0.226044 (0.420930) | 6.294811 / 2.268929 (4.025883) | 3.293964 / 55.444624 (-52.150660) | 2.568985 / 6.876477 (-4.307492) | 2.743786 / 2.142072 (0.601713) | 0.899733 / 4.805227 (-3.905494) | 0.193484 / 6.500664 (-6.307181) | 0.070012 / 0.075469 (-0.005457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502255 / 1.841788 (-0.339532) | 20.690234 / 8.074308 (12.615926) | 18.375791 / 10.191392 (8.184399) | 0.200135 / 0.680424 (-0.480289) | 0.029434 / 0.534201 (-0.504767) | 0.477267 / 0.579283 (-0.102016) | 0.566869 / 0.434364 (0.132505) | 0.543756 / 0.540337 (0.003418) | 0.700476 / 1.386936 (-0.686460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef17d9fd6c648bb41d43ba301c3de4d7b6f833d8 \"CML watermark\")\n" ]
2023-08-03T14:46:04
2023-08-03T14:56:59
2023-08-03T14:46:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6117", "html_url": "https://github.com/huggingface/datasets/pull/6117", "diff_url": "https://github.com/huggingface/datasets/pull/6117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6117.patch", "merged_at": "2023-08-03T14:46:18" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6117/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6116/comments
https://api.github.com/repos/huggingface/datasets/issues/6116/events
https://github.com/huggingface/datasets/issues/6116
1,835,098,484
I_kwDODunzps5tYWF0
6,116
[Docs] The "Process" how-to guide lacks description of `select_columns` function
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Great idea, feel free to open a PR! :)" ]
2023-08-03T13:45:10
2023-08-03T17:40:58
null
NONE
null
null
null
### Feature request The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide. ### Motivation This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468, #5474), yet it has not been added to the guide since it was implemented in PR #5480. Mentioning it there would help future users discover the feature. ### Your contribution I could submit a PR adding a brief description of the function to the guide.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6116/timeline
null
null
false
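The issue above asks the "Process" guide to cover `select_columns`. As a rough illustration of what such a guide entry might show, here is a minimal sketch; the column names are made up for the example, and it assumes a `datasets` release that already ships `Dataset.select_columns` (introduced by PR #5480).

```python
from datasets import Dataset

# Minimal sketch: keep only a subset of columns with `select_columns`
# (assumes a datasets version with this method; column names are illustrative only).
ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1], "idx": [10, 11]})

# `select_columns` keeps only the listed columns and returns a new Dataset.
subset = ds.select_columns(["text", "label"])
print(subset.column_names)  # ['text', 'label']
```

A guide entry along these lines could also contrast it with `remove_columns`, which drops the listed columns instead of keeping them.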
https://api.github.com/repos/huggingface/datasets/issues/6115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6115/comments
https://api.github.com/repos/huggingface/datasets/issues/6115/events
https://github.com/huggingface/datasets/pull/6115
1,834,765,485
PR_kwDODunzps5XGChP
6,115
Release: 2.14.3
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007578 / 0.011353 (-0.003775) | 0.004271 / 0.011008 (-0.006738) | 0.086607 / 0.038508 (0.048098) | 0.063209 / 0.023109 (0.040099) | 0.351724 / 0.275898 (0.075826) | 0.399261 / 0.323480 (0.075781) | 0.004767 / 0.007986 (-0.003219) | 0.003487 / 0.004328 (-0.000842) | 0.071483 / 0.004250 (0.067233) | 0.051281 / 0.037052 (0.014229) | 0.387726 / 0.258489 (0.129237) | 0.408446 / 0.293841 (0.114605) | 0.041189 / 0.128546 (-0.087357) | 0.012446 / 0.075646 (-0.063200) | 0.331147 / 0.419271 (-0.088124) | 0.056721 / 0.043533 (0.013188) | 0.361306 / 0.255139 (0.106167) | 0.409651 / 0.283200 (0.126451) | 0.035485 / 0.141683 (-0.106198) | 1.461391 / 1.452155 (0.009236) | 1.554820 / 1.492716 (0.062104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237119 / 0.018006 (0.219113) | 0.518731 / 0.000490 (0.518241) | 0.004192 / 0.000200 (0.003992) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024912 / 0.037411 (-0.012499) | 0.089420 / 0.014526 (0.074894) | 0.091209 / 0.176557 (-0.085347) | 0.152580 / 0.737135 (-0.584555) | 0.089660 / 0.296338 (-0.206678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515223 / 0.215209 (0.300014) | 5.328359 / 2.077655 (3.250705) | 1.974326 / 1.504120 (0.470206) | 1.665216 / 1.541195 (0.124021) | 1.736040 / 1.468490 
(0.267550) | 0.734746 / 4.584777 (-3.850031) | 4.186613 / 3.745712 (0.440901) | 3.535760 / 5.269862 (-1.734102) | 2.333247 / 4.565676 (-2.232429) | 0.071845 / 0.424275 (-0.352430) | 0.006147 / 0.007607 (-0.001460) | 0.546649 / 0.226044 (0.320605) | 5.452281 / 2.268929 (3.183353) | 2.512984 / 55.444624 (-52.931640) | 2.104210 / 6.876477 (-4.772267) | 2.409251 / 2.142072 (0.267178) | 0.822797 / 4.805227 (-3.982430) | 0.166648 / 6.500664 (-6.334016) | 0.056350 / 0.075469 (-0.019119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397798 / 1.841788 (-0.443989) | 20.549399 / 8.074308 (12.475091) | 19.118168 / 10.191392 (8.926776) | 0.216361 / 0.680424 (-0.464063) | 0.027064 / 0.534201 (-0.507136) | 0.410762 / 0.579283 (-0.168521) | 0.559225 / 0.434364 (0.124861) | 0.468028 / 0.540337 (-0.072309) | 0.691520 / 1.386936 (-0.695416) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003879 / 0.011008 (-0.007130) | 0.058723 / 0.038508 (0.020215) | 0.057202 / 0.023109 (0.034092) | 0.344397 / 0.275898 (0.068499) | 0.360388 / 0.323480 (0.036908) | 0.005502 / 0.007986 (-0.002483) | 0.004101 / 0.004328 (-0.000227) | 0.058168 / 0.004250 (0.053917) | 0.059112 / 0.037052 (0.022060) | 0.362206 / 0.258489 (0.103717) | 0.386444 / 0.293841 (0.092603) | 0.036613 / 0.128546 (-0.091934) | 0.010482 / 0.075646 (-0.065165) | 0.065850 / 0.419271 (-0.353421) | 0.046528 / 0.043533 (0.002995) | 0.349568 / 0.255139 (0.094429) | 0.360181 / 0.283200 (0.076981) | 0.029030 / 0.141683 (-0.112653) | 1.314569 / 1.452155 (-0.137586) | 1.422393 / 1.492716 (-0.070324) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281554 / 0.018006 (0.263548) | 0.608018 / 0.000490 (0.607528) | 0.004568 / 0.000200 (0.004368) | 0.000182 / 0.000054 (0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023515 / 0.037411 (-0.013896) | 0.072994 / 0.014526 (0.058468) | 0.080688 / 0.176557 (-0.095868) | 0.125904 / 0.737135 (-0.611232) | 0.085457 / 0.296338 (-0.210882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471530 / 0.215209 (0.256321) | 4.796197 / 2.077655 (2.718542) | 2.189181 / 1.504120 (0.685061) | 1.886649 / 1.541195 (0.345454) | 1.871067 / 1.468490 (0.402577) | 0.661043 / 4.584777 (-3.923734) | 4.344027 / 3.745712 (0.598315) | 3.656967 / 5.269862 (-1.612895) | 2.286033 / 4.565676 (-2.279644) | 0.079146 / 0.424275 (-0.345129) | 0.006840 / 0.007607 (-0.000767) | 0.588750 / 0.226044 (0.362706) | 6.301286 / 2.268929 (4.032357) | 3.074702 / 55.444624 (-52.369923) | 2.398739 / 6.876477 (-4.477738) | 2.555057 / 2.142072 (0.412985) | 0.874189 / 4.805227 (-3.931038) | 0.191423 / 6.500664 (-6.309241) | 0.061227 / 0.075469 (-0.014242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472763 / 1.841788 (-0.369024) | 19.441304 / 8.074308 (11.366996) | 15.974276 / 10.191392 (5.782884) | 0.172503 / 0.680424 (-0.507921) | 0.027016 / 0.534201 (-0.507185) | 0.356085 / 0.579283 (-0.223198) | 0.473251 / 0.434364 (0.038887) | 0.427949 / 0.540337 (-0.112388) | 0.588924 / 1.386936 (-0.798013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0973da6e60ac7c1d24229ba6aa6881747b21858a \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003558 / 0.011008 (-0.007450) | 0.080576 / 0.038508 (0.042068) | 0.066542 / 0.023109 (0.043432) | 0.323997 / 0.275898 (0.048099) | 0.369828 / 0.323480 (0.046348) | 0.004896 / 0.007986 (-0.003090) | 0.002909 / 0.004328 (-0.001419) | 0.062553 / 0.004250 (0.058302) | 0.049795 / 0.037052 (0.012742) | 0.321369 / 0.258489 (0.062880) | 0.422860 / 0.293841 (0.129019) | 0.027394 / 0.128546 (-0.101152) | 0.007954 / 0.075646 (-0.067693) | 0.264122 / 0.419271 (-0.155149) | 0.044881 / 0.043533 (0.001349) | 0.316702 / 0.255139 (0.061563) | 0.374718 / 0.283200 (0.091518) | 0.021728 / 0.141683 (-0.119955) | 1.394456 / 1.452155 (-0.057699) | 1.474936 / 1.492716 (-0.017780) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191902 / 0.018006 (0.173896) | 0.430468 / 0.000490 (0.429979) | 0.003790 / 0.000200 (0.003590) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024974 / 0.037411 (-0.012438) | 0.073053 / 0.014526 (0.058527) | 0.083801 / 0.176557 (-0.092756) | 0.143457 / 0.737135 (-0.593678) | 0.085099 / 0.296338 (-0.211240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428411 / 0.215209 (0.213202) | 4.278077 / 2.077655 (2.200422) | 2.230039 / 1.504120 (0.725919) | 2.057191 / 1.541195 (0.515996) | 2.120109 / 1.468490 (0.651619) | 0.495242 / 4.584777 (-4.089535) | 3.031299 / 3.745712 (-0.714413) | 2.802685 / 5.269862 (-2.467176) | 1.839828 / 4.565676 (-2.725849) | 0.056875 / 0.424275 (-0.367401) | 0.006446 / 0.007607 (-0.001161) | 0.498958 / 0.226044 (0.272913) | 4.980440 / 2.268929 (2.711511) | 2.659659 / 55.444624 (-52.784965) | 2.315174 / 6.876477 (-4.561303) | 2.475920 / 2.142072 (0.333848) | 0.586946 / 4.805227 (-4.218282) | 0.124291 / 6.500664 (-6.376373) | 0.060701 / 0.075469 (-0.014768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245062 / 1.841788 (-0.596725) | 18.201444 / 8.074308 (10.127136) | 13.723271 / 10.191392 (3.531879) | 0.130203 / 0.680424 (-0.550221) | 0.016773 / 0.534201 (-0.517428) | 0.332909 / 0.579283 (-0.246374) | 0.347469 / 0.434364 (-0.086895) | 0.381364 / 0.540337 (-0.158973) | 0.541723 / 
1.386936 (-0.845213) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005934 / 0.011353 (-0.005419) | 0.003573 / 0.011008 (-0.007435) | 0.062195 / 0.038508 (0.023687) | 0.059026 / 0.023109 (0.035917) | 0.413993 / 0.275898 (0.138095) | 0.459552 / 0.323480 (0.136072) | 0.004610 / 0.007986 (-0.003376) | 0.002907 / 0.004328 (-0.001421) | 0.062983 / 0.004250 (0.058733) | 0.047797 / 0.037052 (0.010745) | 0.415461 / 0.258489 (0.156972) | 0.417424 / 0.293841 (0.123583) | 0.027098 / 0.128546 (-0.101449) | 0.008106 / 0.075646 (-0.067540) | 0.067600 / 0.419271 (-0.351672) | 0.041432 / 0.043533 (-0.002101) | 0.407861 / 0.255139 (0.152722) | 0.430774 / 0.283200 (0.147575) | 0.020738 / 0.141683 (-0.120945) | 1.435127 / 1.452155 (-0.017028) | 1.486961 / 1.492716 (-0.005755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231174 / 0.018006 (0.213168) | 0.421208 / 0.000490 (0.420718) | 0.005411 / 0.000200 (0.005211) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025362 / 0.037411 (-0.012049) | 0.078534 / 0.014526 (0.064008) | 0.085304 / 0.176557 (-0.091252) | 0.139048 / 0.737135 (-0.598087) | 0.087015 / 0.296338 (-0.209323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448506 / 0.215209 (0.233297) | 4.486694 / 2.077655 (2.409039) | 2.488022 / 1.504120 (0.983902) | 2.325321 / 1.541195 (0.784126) | 2.381311 / 1.468490 (0.912821) 
| 0.502102 / 4.584777 (-4.082675) | 3.018326 / 3.745712 (-0.727386) | 2.824922 / 5.269862 (-2.444940) | 1.857414 / 4.565676 (-2.708263) | 0.057514 / 0.424275 (-0.366761) | 0.006829 / 0.007607 (-0.000779) | 0.521939 / 0.226044 (0.295895) | 5.224393 / 2.268929 (2.955465) | 2.933132 / 55.444624 (-52.511492) | 2.661187 / 6.876477 (-4.215290) | 2.781950 / 2.142072 (0.639878) | 0.592927 / 4.805227 (-4.212300) | 0.126685 / 6.500664 (-6.373979) | 0.064188 / 0.075469 (-0.011281) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351107 / 1.841788 (-0.490681) | 18.344453 / 8.074308 (10.270145) | 13.838788 / 10.191392 (3.647396) | 0.157881 / 0.680424 (-0.522543) | 0.016636 / 0.534201 (-0.517565) | 0.331597 / 0.579283 (-0.247686) | 0.345573 / 0.434364 (-0.088791) | 0.397361 / 0.540337 (-0.142976) | 0.534289 / 1.386936 (-0.852647) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#582e722a76534904c0f3038d32ebb8db88ce9128 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006399 / 0.011353 (-0.004954) | 0.003872 / 0.011008 (-0.007136) | 0.083722 / 0.038508 (0.045214) | 0.068845 / 0.023109 (0.045736) | 0.329112 / 0.275898 (0.053214) | 0.343295 / 0.323480 (0.019815) | 0.005137 / 0.007986 (-0.002849) | 0.003303 / 0.004328 (-0.001026) | 0.064495 / 0.004250 (0.060245) | 0.051448 / 0.037052 (0.014395) | 0.322554 / 0.258489 (0.064065) | 0.361934 / 0.293841 (0.068093) | 0.030821 / 0.128546 (-0.097726) | 0.008482 / 0.075646 (-0.067164) | 0.288136 / 0.419271 (-0.131135) | 0.051935 / 0.043533 (0.008402) | 0.308283 / 0.255139 (0.053144) | 0.343421 / 0.283200 (0.060221) | 0.023639 / 0.141683 (-0.118044) | 1.485442 / 1.452155 (0.033288) | 1.533282 / 1.492716 (0.040565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218163 / 0.018006 (0.200157) | 0.464473 / 0.000490 (0.463983) | 0.003097 / 0.000200 (0.002897) | 
0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028650 / 0.037411 (-0.008761) | 0.083295 / 0.014526 (0.068769) | 0.096468 / 0.176557 (-0.080088) | 0.152086 / 0.737135 (-0.585050) | 0.102586 / 0.296338 (-0.193752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393038 / 0.215209 (0.177829) | 3.925514 / 2.077655 (1.847859) | 1.938419 / 1.504120 (0.434300) | 1.760265 / 1.541195 (0.219071) | 1.810024 / 1.468490 (0.341534) | 0.486232 / 4.584777 (-4.098545) | 3.618747 / 3.745712 (-0.126965) | 3.206950 / 5.269862 (-2.062912) | 1.999240 / 4.565676 (-2.566436) | 0.056986 / 0.424275 (-0.367289) | 0.007193 / 0.007607 (-0.000415) | 0.469313 / 0.226044 (0.243269) | 4.688670 / 2.268929 (2.419741) | 2.400332 / 55.444624 (-53.044292) | 2.074197 / 6.876477 (-4.802279) | 2.290823 / 2.142072 (0.148751) | 0.582339 / 4.805227 (-4.222888) | 0.134127 / 6.500664 (-6.366537) | 0.061061 / 0.075469 (-0.014408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272782 / 1.841788 (-0.569006) | 19.463375 / 8.074308 (11.389067) | 14.306819 / 10.191392 (4.115427) | 0.164608 / 0.680424 (-0.515816) | 0.018626 / 0.534201 (-0.515575) | 0.395225 / 0.579283 (-0.184058) | 0.408984 / 0.434364 (-0.025380) | 0.463364 / 0.540337 (-0.076974) | 0.630425 / 1.386936 (-0.756511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006465 / 0.011353 (-0.004888) | 0.003975 / 0.011008 (-0.007033) | 0.063643 / 0.038508 (0.025134) | 0.075214 / 0.023109 (0.052105) | 0.361734 / 0.275898 (0.085836) | 0.396664 / 0.323480 (0.073184) | 0.005251 / 0.007986 (-0.002735) | 0.003249 / 0.004328 (-0.001080) | 0.063841 / 0.004250 (0.059591) | 0.054504 / 0.037052 (0.017451) | 0.374791 / 0.258489 (0.116302) | 0.399205 / 0.293841 (0.105364) | 0.031355 / 0.128546 (-0.097192) | 0.008483 / 0.075646 (-0.067163) | 0.070234 / 0.419271 (-0.349037) | 0.048336 / 0.043533 (0.004803) | 0.373484 / 0.255139 (0.118345) | 0.382174 / 0.283200 (0.098974) | 0.022560 / 0.141683 (-0.119123) | 1.449799 / 1.452155 (-0.002355) | 1.525255 / 1.492716 (0.032539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228350 / 0.018006 (0.210343) | 0.444344 / 0.000490 (0.443855) | 0.003699 / 0.000200 (0.003499) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030681 / 0.037411 (-0.006731) | 0.087340 / 0.014526 (0.072814) | 0.098636 / 0.176557 (-0.077920) | 0.151665 / 0.737135 (-0.585471) | 0.100840 / 0.296338 (-0.195498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417857 / 0.215209 (0.202648) | 4.168407 / 2.077655 (2.090752) | 2.201758 / 1.504120 (0.697638) | 1.997834 / 1.541195 (0.456639) | 2.127693 / 1.468490 (0.659202) | 0.486429 / 4.584777 (-4.098348) | 3.676335 / 3.745712 (-0.069378) | 3.226268 / 5.269862 (-2.043594) | 2.027255 / 4.565676 (-2.538422) | 0.056759 / 0.424275 (-0.367516) | 0.007628 / 0.007607 (0.000021) | 0.500482 / 0.226044 (0.274438) | 4.996236 / 2.268929 (2.727307) | 2.628884 / 55.444624 (-52.815740) | 2.347611 / 6.876477 (-4.528866) | 2.551328 / 2.142072 (0.409255) | 0.582449 / 4.805227 (-4.222778) | 0.132844 / 6.500664 (-6.367821) | 0.061791 / 0.075469 (-0.013678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373718 / 1.841788 (-0.468070) | 19.921217 / 8.074308 (11.846909) | 14.209642 / 10.191392 (4.018250) | 0.185334 / 0.680424 (-0.495090) | 0.018228 / 0.534201 (-0.515973) | 0.395549 / 0.579283 (-0.183734) | 0.404446 / 0.434364 (-0.029918) | 0.472456 / 0.540337 (-0.067882) | 0.622739 / 1.386936 (-0.764197) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006007 / 0.011353 (-0.005346) | 0.003588 / 0.011008 (-0.007420) | 0.080334 / 0.038508 (0.041826) | 0.058932 / 0.023109 (0.035823) | 0.404613 / 0.275898 (0.128715) | 0.438377 / 0.323480 (0.114897) | 0.003468 / 0.007986 (-0.004518) | 0.003702 / 0.004328 (-0.000627) | 0.062936 / 0.004250 (0.058686) | 0.047987 / 0.037052 (0.010934) | 0.411409 / 0.258489 (0.152920) | 0.450244 / 0.293841 (0.156403) | 0.027007 / 0.128546 (-0.101539) | 0.007932 / 0.075646 (-0.067714) | 0.261390 / 0.419271 (-0.157882) | 0.044992 / 0.043533 (0.001459) | 0.409730 / 0.255139 (0.154591) | 0.433331 / 0.283200 (0.150131) | 0.020446 / 0.141683 (-0.121237) | 1.425418 / 1.452155 (-0.026736) | 1.479242 / 1.492716 (-0.013475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187375 / 0.018006 (0.169368) | 0.428532 / 0.000490 (0.428043) | 0.003406 / 0.000200 (0.003206) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024390 / 0.037411 (-0.013022) | 0.072571 / 0.014526 (0.058045) | 0.083513 / 0.176557 (-0.093044) | 0.144395 / 0.737135 (-0.592741) | 0.084813 / 0.296338 (-0.211526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409176 / 0.215209 
(0.193967) | 4.078082 / 2.077655 (2.000428) | 1.913596 / 1.504120 (0.409476) | 1.718470 / 1.541195 (0.177275) | 1.753106 / 1.468490 (0.284616) | 0.494167 / 4.584777 (-4.090610) | 3.029531 / 3.745712 (-0.716181) | 2.807331 / 5.269862 (-2.462531) | 1.839471 / 4.565676 (-2.726206) | 0.057169 / 0.424275 (-0.367106) | 0.006433 / 0.007607 (-0.001175) | 0.482666 / 0.226044 (0.256621) | 4.817601 / 2.268929 (2.548673) | 2.449967 / 55.444624 (-52.994658) | 2.113891 / 6.876477 (-4.762586) | 2.399293 / 2.142072 (0.257221) | 0.578903 / 4.805227 (-4.226324) | 0.124306 / 6.500664 (-6.376358) | 0.061572 / 0.075469 (-0.013897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254692 / 1.841788 (-0.587096) | 18.414049 / 8.074308 (10.339741) | 13.992059 / 10.191392 (3.800667) | 0.146671 / 0.680424 (-0.533753) | 0.016925 / 0.534201 (-0.517275) | 0.333124 / 0.579283 (-0.246159) | 0.348007 / 0.434364 (-0.086357) | 0.378519 / 0.540337 (-0.161819) | 0.532540 / 1.386936 (-0.854396) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003614 / 0.011008 (-0.007394) | 0.061707 / 0.038508 (0.023199) | 0.062874 / 0.023109 (0.039765) | 0.364760 / 0.275898 (0.088862) | 0.398136 / 0.323480 (0.074656) | 0.005598 / 0.007986 (-0.002388) | 0.002836 / 0.004328 (-0.001493) | 0.061880 / 0.004250 (0.057630) | 0.048165 / 0.037052 (0.011113) | 0.372656 / 0.258489 (0.114167) | 0.403967 / 0.293841 (0.110126) | 0.027046 / 0.128546 (-0.101501) | 0.008091 / 0.075646 (-0.067555) | 0.066783 / 0.419271 (-0.352489) | 0.041186 / 0.043533 (-0.002347) | 0.376009 / 0.255139 (0.120870) | 0.391769 / 0.283200 (0.108569) | 0.021020 / 0.141683 (-0.120663) | 1.514593 / 1.452155 (0.062438) | 1.548506 / 1.492716 (0.055790) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237610 / 0.018006 (0.219604) | 0.434274 / 0.000490 (0.433784) | 0.009720 / 0.000200 (0.009520) | 0.000098 / 0.000054 
(0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025605 / 0.037411 (-0.011807) | 0.078971 / 0.014526 (0.064445) | 0.088154 / 0.176557 (-0.088403) | 0.139112 / 0.737135 (-0.598023) | 0.088890 / 0.296338 (-0.207449) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420027 / 0.215209 (0.204818) | 4.189493 / 2.077655 (2.111838) | 2.143907 / 1.504120 (0.639787) | 1.967032 / 1.541195 (0.425837) | 2.011845 / 1.468490 (0.543355) | 0.496692 / 4.584777 (-4.088085) | 3.025456 / 3.745712 (-0.720256) | 2.828436 / 5.269862 (-2.441426) | 1.860673 / 4.565676 (-2.705003) | 0.057199 / 0.424275 (-0.367076) | 0.006770 / 0.007607 (-0.000838) | 0.491281 / 0.226044 (0.265236) | 4.918065 / 2.268929 (2.649136) | 2.593172 / 55.444624 (-52.851452) | 2.250750 / 6.876477 (-4.625727) | 2.406235 / 2.142072 (0.264162) | 0.588648 / 4.805227 (-4.216579) | 0.125635 / 6.500664 (-6.375029) | 0.061697 / 0.075469 (-0.013773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374065 / 1.841788 (-0.467722) | 18.439315 / 8.074308 (10.365007) | 14.031660 / 10.191392 (3.840268) | 0.153665 / 0.680424 (-0.526759) | 0.016980 / 0.534201 (-0.517221) | 0.331799 / 0.579283 (-0.247484) | 0.343201 / 0.434364 (-0.091163) | 0.392445 / 0.540337 (-0.147892) | 0.530387 / 1.386936 (-0.856549) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008189 / 0.011353 (-0.003164) | 0.004598 / 0.011008 (-0.006410) | 0.102199 / 0.038508 (0.063691) | 0.077961 / 0.023109 (0.054852) | 0.364936 / 0.275898 (0.089038) | 0.402606 / 0.323480 (0.079126) | 0.005522 / 0.007986 (-0.002464) | 0.004007 / 0.004328 (-0.000322) | 0.071560 / 0.004250 (0.067310) | 0.055818 / 0.037052 (0.018765) | 0.378394 / 0.258489 (0.119905) | 0.428990 / 0.293841 (0.135149) | 0.043142 / 0.128546 (-0.085404) | 0.013254 / 0.075646 (-0.062392) | 0.331102 / 0.419271 (-0.088170) | 0.061407 / 0.043533 (0.017875) | 0.387397 / 0.255139 (0.132258) | 0.416062 / 0.283200 (0.132862) | 0.036330 / 0.141683 (-0.105353) | 1.735352 / 1.452155 (0.283198) | 1.773329 / 1.492716 (0.280613) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188587 / 0.018006 (0.170581) | 0.519506 / 0.000490 (0.519016) | 0.004702 / 0.000200 (0.004502) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027152 / 0.037411 (-0.010260) | 0.094296 / 0.014526 (0.079770) | 0.098155 / 0.176557 (-0.078402) | 0.162541 / 0.737135 (-0.574595) | 0.112092 / 0.296338 (-0.184246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.537555 / 0.215209 (0.322346) | 5.486821 / 2.077655 (3.409166) | 2.377127 / 1.504120 (0.873008) | 2.073205 / 1.541195 (0.532011) | 2.075130 / 1.468490 (0.606640) | 0.783779 / 4.584777 (-3.800998) | 5.029524 / 3.745712 (1.283812) | 4.382724 / 5.269862 (-0.887138) | 2.836180 / 4.565676 (-1.729496) | 0.108840 / 0.424275 (-0.315435) | 0.008123 / 0.007607 (0.000516) | 0.673460 / 0.226044 (0.447416) | 6.674030 / 2.268929 (4.405102) | 3.208922 / 55.444624 (-52.235702) | 2.464908 / 6.876477 (-4.411568) | 2.661929 / 2.142072 (0.519856) | 0.962529 / 4.805227 (-3.842698) | 0.197974 / 6.500664 (-6.302690) | 0.066656 / 0.075469 (-0.008813) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430373 / 1.841788 (-0.411415) | 21.180540 / 8.074308 (13.106232) | 19.027491 / 10.191392 (8.836099) | 0.217520 / 0.680424 (-0.462904) | 0.028038 / 0.534201 (-0.506163) | 0.435266 / 0.579283 (-0.144017) | 0.529510 / 0.434364 (0.095147) | 
0.511011 / 0.540337 (-0.029327) | 0.728940 / 1.386936 (-0.657996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007883 / 0.011353 (-0.003470) | 0.004448 / 0.011008 (-0.006560) | 0.071350 / 0.038508 (0.032842) | 0.075269 / 0.023109 (0.052160) | 0.396705 / 0.275898 (0.120807) | 0.457809 / 0.323480 (0.134329) | 0.005193 / 0.007986 (-0.002792) | 0.003695 / 0.004328 (-0.000633) | 0.078087 / 0.004250 (0.073836) | 0.054276 / 0.037052 (0.017224) | 0.412184 / 0.258489 (0.153695) | 0.452400 / 0.293841 (0.158559) | 0.049762 / 0.128546 (-0.078784) | 0.013206 / 0.075646 (-0.062440) | 0.085985 / 0.419271 (-0.333287) | 0.058837 / 0.043533 (0.015304) | 0.432481 / 0.255139 (0.177342) | 0.433260 / 0.283200 (0.150060) | 0.031190 / 0.141683 (-0.110493) | 1.582707 / 1.452155 (0.130552) | 1.664457 / 1.492716 (0.171741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223639 / 0.018006 (0.205633) | 0.524388 / 0.000490 (0.523899) | 0.005489 / 0.000200 (0.005289) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030182 / 0.037411 (-0.007230) | 0.089309 / 0.014526 (0.074783) | 0.103306 / 0.176557 (-0.073250) | 0.162624 / 0.737135 (-0.574511) | 0.108957 / 0.296338 (-0.187381) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577423 / 0.215209 (0.362214) | 5.900154 / 2.077655 (3.822500) | 2.687369 / 1.504120 (1.183249) | 2.513061 / 1.541195 
(0.971866) | 2.506453 / 1.468490 (1.037963) | 0.830838 / 4.584777 (-3.753939) | 5.032195 / 3.745712 (1.286483) | 4.396827 / 5.269862 (-0.873035) | 2.884230 / 4.565676 (-1.681447) | 0.102239 / 0.424275 (-0.322036) | 0.008178 / 0.007607 (0.000571) | 0.710027 / 0.226044 (0.483983) | 7.149626 / 2.268929 (4.880698) | 3.403605 / 55.444624 (-52.041019) | 2.661970 / 6.876477 (-4.214506) | 2.760227 / 2.142072 (0.618154) | 1.043981 / 4.805227 (-3.761246) | 0.195028 / 6.500664 (-6.305636) | 0.065211 / 0.075469 (-0.010258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581265 / 1.841788 (-0.260522) | 21.640230 / 8.074308 (13.565922) | 19.031860 / 10.191392 (8.840468) | 0.196903 / 0.680424 (-0.483520) | 0.027061 / 0.534201 (-0.507140) | 0.444995 / 0.579283 (-0.134288) | 0.528195 / 0.434364 (0.093831) | 0.521540 / 0.540337 (-0.018797) | 0.730204 / 1.386936 (-0.656732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n" ]
2023-08-03T10:18:32
2023-08-03T15:08:02
2023-08-03T10:24:57
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6115", "html_url": "https://github.com/huggingface/datasets/pull/6115", "diff_url": "https://github.com/huggingface/datasets/pull/6115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6115.patch", "merged_at": "2023-08-03T10:24:57" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6115/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6114/comments
https://api.github.com/repos/huggingface/datasets/issues/6114/events
https://github.com/huggingface/datasets/issues/6114
1,834,015,584
I_kwDODunzps5tUNtg
6,114
Cache not being used when loading commonvoice 8.0.0
{ "login": "clabornd", "id": 31082141, "node_id": "MDQ6VXNlcjMxMDgyMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clabornd", "html_url": "https://github.com/clabornd", "followers_url": "https://api.github.com/users/clabornd/followers", "following_url": "https://api.github.com/users/clabornd/following{/other_user}", "gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}", "starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clabornd/subscriptions", "organizations_url": "https://api.github.com/users/clabornd/orgs", "repos_url": "https://api.github.com/users/clabornd/repos", "events_url": "https://api.github.com/users/clabornd/events{/privacy}", "received_events_url": "https://api.github.com/users/clabornd/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T23:18:11
2023-08-04T17:33:11
null
NONE
null
null
null
### Describe the bug I have commonvoice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files etc., and was used as the cached version last time I touched the ec2 instance I'm working on. Now, with the same command that downloaded it initially: ``` dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>") ``` it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32` ### Steps to reproduce the bug Steps to reproduce the behavior: 1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` 2. dataset is updated by maintainers 3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` ### Expected behavior I expect that it uses the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. Not sure what's happening in step 2, but if, say, it's an issue with the dataset referenced by "mozilla-foundation/common_voice_8_0" being modified by the maintainers, how would I force datasets to point to the original version I downloaded? EDIT: It was indeed that the maintainers had updated the dataset (v 8.0.0). However, I still can't load the dataset from disk instead of redownloading, with for example: ``` load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en") > ... > File [~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938](.../ python3.10/site-packages/datasets/table.py:1938), in cast_array_to_feature(array, feature, allow_number_to_str) 1937 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ... 1794 e = e.__context__ -> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info datasets==2.7.0 python==3.10.8 OS: AWS Linux
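One possible way to keep using the originally cached version is to pin the dataset repository to a specific git revision, since `load_dataset` accepts a `revision` argument for Hub datasets. A minimal sketch, assuming the commit hash of the revision that was first downloaded is known (the hash below is a placeholder, not a real commit):

```python
from datasets import load_dataset

# Pinning the revision keeps the builder pointed at the old version of the
# dataset repo, so a later update by the maintainers should not change the
# cache fingerprint and trigger a re-download.
dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="<commit-sha-of-the-originally-cached-version>",  # placeholder
    use_auth_token="<mytoken>",
)
```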
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6114/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6113/comments
https://api.github.com/repos/huggingface/datasets/issues/6113/events
https://github.com/huggingface/datasets/issues/6113
1,833,854,030
I_kwDODunzps5tTmRO
6,113
load_dataset() fails with streamlit caching inside docker
{ "login": "fierval", "id": 987574, "node_id": "MDQ6VXNlcjk4NzU3NA==", "avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fierval", "html_url": "https://github.com/fierval", "followers_url": "https://api.github.com/users/fierval/followers", "following_url": "https://api.github.com/users/fierval/following{/other_user}", "gists_url": "https://api.github.com/users/fierval/gists{/gist_id}", "starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fierval/subscriptions", "organizations_url": "https://api.github.com/users/fierval/orgs", "repos_url": "https://api.github.com/users/fierval/repos", "events_url": "https://api.github.com/users/fierval/events{/privacy}", "received_events_url": "https://api.github.com/users/fierval/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T20:20:26
2023-08-02T20:20:26
null
NONE
null
null
null
### Describe the bug When calling `load_dataset` in a streamlit application running within a docker container, get a failure with the error message: EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files Traceback: File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__) File "/home/user/app/app.py", line 62, in <module> dashboard() File "/home/user/app/app.py", line 47, in dashboard feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper return cached_func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__ return self._get_or_create_cached_value(args, kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value return self._handle_cache_miss(cache, value_key, func_args, func_kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss computed_value = self._info.func(*func_args, **func_kwargs) File "/home/user/app/hf_interface.py", line 16, in load_data hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module else get_data_patterns(base_path, download_config=self.download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None ### Steps to reproduce the bug ```python @st.cache_resource def load_data(repo_id: str, hf_token=None): """Load data from HuggingFace Hub """ hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"]) return hf_dataset ``` ### Expected behavior Expect to load. Note: works fine with datasets==2.13.1 ### Environment info datasets==2.14.2, Ubuntu bionic-based Docker container.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6113/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6112/comments
https://api.github.com/repos/huggingface/datasets/issues/6112/events
https://github.com/huggingface/datasets/issues/6112
1,833,693,299
I_kwDODunzps5tS_Bz
6,112
yaml error using push_to_hub with generated README.md
{ "login": "kevintee", "id": 1643887, "node_id": "MDQ6VXNlcjE2NDM4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevintee", "html_url": "https://github.com/kevintee", "followers_url": "https://api.github.com/users/kevintee/followers", "following_url": "https://api.github.com/users/kevintee/following{/other_user}", "gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevintee/subscriptions", "organizations_url": "https://api.github.com/users/kevintee/orgs", "repos_url": "https://api.github.com/users/kevintee/repos", "events_url": "https://api.github.com/users/kevintee/events{/privacy}", "received_events_url": "https://api.github.com/users/kevintee/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T18:21:21
2023-08-02T18:21:21
null
NONE
null
null
null
### Describe the bug When I construct a dataset with the following features: ``` features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) ``` and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error: ``` Traceback (most recent call last): File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module> build_dataset() File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset push_to_hub(dataset, "multitask_document_classification_dataset") File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub dataset.push_to_hub(f"looppayments/{dataset_name}", private=True) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub HfApi(endpoint=config.HF_ENDPOINT).upload_file( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file commit_info = self.create_commit( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit hf_raise_for_status(commit_resp, endpoint_name="commit") File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e) Bad request for commit endpoint: Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9) 7 | - 3 8 | - 224 9 | - 224 10 | dtype: float64 --------------^ 11 | - name: input_ids 12 | sequence: int64 ``` My guess is that the auto-generated yaml is unable to be parsed for some reason. 
### Steps to reproduce the bug The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet: ``` from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value from PIL import Image from transformers import AutoProcessor features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) def preprocess_dataset(rows): # Get images images = [ Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"] ] encoding = processor( images, rows["tokens"], boxes=rows["bbox"], truncation=True, padding="max_length", ) encoding["tokens"] = rows["tokens"] return encoding dataset = dataset.map( preprocess_dataset, batched=True, batch_size=5, features=features, ) ``` ### Expected behavior Using datasets==2.11.0, I'm able to successfully push_to_hub with no issues, but with datasets==2.14.2, I run into the above error. ### Environment info - `datasets` version: 2.14.2 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
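A small illustration of why the Hub rejects the generated README, assuming the metadata block really does contain a Python-specific tuple tag as the error message suggests: safe YAML loaders refuse `python/tuple` tags, so any consumer that parses the README front matter with a safe parser will fail on the shape entry.

```python
import yaml

# Reduced stand-in for the shape entry that the error message points at.
snippet = """
shape: !!python/tuple
- 3
- 224
- 224
"""

try:
    yaml.safe_load(snippet)
except yaml.YAMLError as err:
    # safe_load refuses Python-specific tags such as python/tuple
    print(err)
```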
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6112/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6111/comments
https://api.github.com/repos/huggingface/datasets/issues/6111/events
https://github.com/huggingface/datasets/issues/6111
1,832,781,654
I_kwDODunzps5tPgdW
6,111
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
{ "login": "2catycm", "id": 41530341, "node_id": "MDQ6VXNlcjQxNTMwMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2catycm", "html_url": "https://github.com/2catycm", "followers_url": "https://api.github.com/users/2catycm/followers", "following_url": "https://api.github.com/users/2catycm/following{/other_user}", "gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}", "starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/2catycm/subscriptions", "organizations_url": "https://api.github.com/users/2catycm/orgs", "repos_url": "https://api.github.com/users/2catycm/repos", "events_url": "https://api.github.com/users/2catycm/events{/privacy}", "received_events_url": "https://api.github.com/users/2catycm/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-02T09:17:29
2023-08-02T09:17:29
null
NONE
null
null
null
### Describe the bug For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, and [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object. However, when one finally has the local files on the disk, it is still buggy when trying to load the files into objects. ### Steps to reproduce the bug Steps to reproduce the bug: 1. Find the CIFAR dataset on Hugging Face: https://huggingface.co/datasets/cifar100/tree/main 2. Click the ":" button to show the "Clone repository" option, and then follow the prompts on the box: ```bash cd my_directory_absolute git lfs install git clone https://huggingface.co/datasets/cifar100 ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK. ``` 3. Write a Python file to try to load the dataset ```python from datasets import load_dataset, load_from_disk dataset = load_from_disk("my_directory_absolute/cifar100") ``` Notice that according to issue #3700, it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead. 4. Then you will see the error reported: ```log --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[5], line 9 1 from datasets import load_dataset, load_from_disk ----> 9 dataset = load_from_disk("my_directory_absolute/cifar100") File [~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232), in load_from_disk(dataset_path, fs, keep_in_memory, storage_options) 2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) 2231 else: -> 2232 raise FileNotFoundError( 2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." 2234 ) FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory. ``` ### Expected behavior The dataset should load successfully. ### Environment info ```bash datasets-cli env ``` -> results: ```txt Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.14.2 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 ```
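For context on the error itself, a minimal sketch of what `load_from_disk` actually expects: it only reads directories produced by `save_to_disk` (Arrow files plus the accompanying metadata files), not a plain `git clone` of a Hub dataset repository. The paths below are placeholders.

```python
from datasets import load_dataset, load_from_disk

# Run once on a machine that can reach the Hub; this materializes the dataset
# and then writes it in the on-disk layout that load_from_disk understands.
ds = load_dataset("cifar100")
ds.save_to_disk("cifar100_on_disk")  # placeholder path

# On the offline machine (after copying the directory over), this works,
# whereas pointing load_from_disk at a git clone of the repo does not.
reloaded = load_from_disk("cifar100_on_disk")
```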
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6111/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6110/comments
https://api.github.com/repos/huggingface/datasets/issues/6110/events
https://github.com/huggingface/datasets/issues/6110
1,831,110,633
I_kwDODunzps5tJIfp
6,110
[BUG] Dataset initialized from in-memory data does not create cache.
{ "login": "MattYoon", "id": 57797966, "node_id": "MDQ6VXNlcjU3Nzk3OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MattYoon", "html_url": "https://github.com/MattYoon", "followers_url": "https://api.github.com/users/MattYoon/followers", "following_url": "https://api.github.com/users/MattYoon/following{/other_user}", "gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions", "organizations_url": "https://api.github.com/users/MattYoon/orgs", "repos_url": "https://api.github.com/users/MattYoon/repos", "events_url": "https://api.github.com/users/MattYoon/events{/privacy}", "received_events_url": "https://api.github.com/users/MattYoon/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-01T11:58:58
2023-08-01T12:04:57
null
NONE
null
null
null
### Describe the bug `Dataset` initialized from in-memory data (a dictionary in my case, haven't tested with other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`. ### Steps to reproduce the bug ```python # the code below was run a second time so the map result can be loaded from the cache if it exists from datasets import load_dataset, Dataset dataset = load_dataset("tatsu-lab/alpaca")['train'] dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map print(len(dataset.cache_files)) # 1 # copy the exact same data but initialize from a dictionary memory_dataset = Dataset.from_dict({ 'instruction': dataset['instruction'], 'input': dataset['input'], 'output': dataset['output'], 'text': dataset['text']}) memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map print(len(memory_dataset.cache_files)) # Map: 100%|██████████| 52002[/52002] # 0 ``` ### Expected behavior The `map` function should create a cache regardless of how the `Dataset` was created. ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
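If on-disk caching is needed for an in-memory `Dataset`, two workarounds sketched below may help; they rely on `map` accepting an explicit `cache_file_name` and on `save_to_disk`/`load_from_disk` turning the in-memory data into a file-backed dataset (file paths are placeholders, and whether either fits depends on the use case):

```python
from datasets import Dataset, load_from_disk

memory_dataset = Dataset.from_dict({"input": ["a", "b", "c"]})

# Option 1: tell map() exactly where to write its cache file.
mapped = memory_dataset.map(
    lambda x: {"input": x["input"] + "hi"},
    cache_file_name="mapped_cache.arrow",  # placeholder path
)

# Option 2: persist the in-memory dataset first; the reloaded copy is backed
# by files on disk and participates in the usual fingerprint-based caching.
memory_dataset.save_to_disk("memory_dataset_on_disk")  # placeholder path
on_disk = load_from_disk("memory_dataset_on_disk")
on_disk = on_disk.map(lambda x: {"input": x["input"] + "hi"})
```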
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6110/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6109/comments
https://api.github.com/repos/huggingface/datasets/issues/6109/events
https://github.com/huggingface/datasets/issues/6109
1,830,753,793
I_kwDODunzps5tHxYB
6,109
Problems in downloading Amazon reviews from HF
{ "login": "610v4nn1", "id": 52964960, "node_id": "MDQ6VXNlcjUyOTY0OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/610v4nn1", "html_url": "https://github.com/610v4nn1", "followers_url": "https://api.github.com/users/610v4nn1/followers", "following_url": "https://api.github.com/users/610v4nn1/following{/other_user}", "gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}", "starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions", "organizations_url": "https://api.github.com/users/610v4nn1/orgs", "repos_url": "https://api.github.com/users/610v4nn1/repos", "events_url": "https://api.github.com/users/610v4nn1/events{/privacy}", "received_events_url": "https://api.github.com/users/610v4nn1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\nSee: https://huggingface.co/datasets/amazon_reviews_multi/discussions/4#64c3898db63057f1fd3ce1a0 " ]
2023-08-01T08:38:29
2023-08-02T07:12:07
2023-08-02T07:12:07
NONE
null
null
null
### Describe the bug I have a script downloading `amazon_reviews_multi`. When the download starts, I get ``` Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.43MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 928kB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.81MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s] Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s] ``` The file is clearly too small to contain the requested dataset; in fact, it contains an error message: ``` <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error> ``` Obviously, the script fails: ``` > raise DatasetGenerationError("An error occurred while generating the dataset") from e E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE") ### Expected behavior I would expect the dataset to be downloaded and processed. ### Environment info * The problem is present with both datasets 2.12.0 and 2.14.2 * python version 3.10.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6109/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6108/comments
https://api.github.com/repos/huggingface/datasets/issues/6108/events
https://github.com/huggingface/datasets/issues/6108
1,830,347,187
I_kwDODunzps5tGOGz
6,108
Loading local datasets got strangely stuck
{ "login": "LoveCatc", "id": 48412571, "node_id": "MDQ6VXNlcjQ4NDEyNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LoveCatc", "html_url": "https://github.com/LoveCatc", "followers_url": "https://api.github.com/users/LoveCatc/followers", "following_url": "https://api.github.com/users/LoveCatc/following{/other_user}", "gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}", "starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions", "organizations_url": "https://api.github.com/users/LoveCatc/orgs", "repos_url": "https://api.github.com/users/LoveCatc/repos", "events_url": "https://api.github.com/users/LoveCatc/events{/privacy}", "received_events_url": "https://api.github.com/users/LoveCatc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of proceeding too slow.", "I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G." ]
2023-08-01T02:28:06
2023-08-03T12:03:30
null
NONE
null
null
null
### Describe the bug I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (yeah, it is a dataset for an NLP model). The code snippet is: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train'] ``` However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. When I was trying to find the cause and solution, I found a really strange behavior. If I load the dataset in this way: ```python dlist = list() for _ in LIST_OF_FILE_PATHS: dlist.append(load_dataset("json", data_files=_)['train']) ds = concatenate_datasets(dlist) ``` I can actually successfully load all the files despite the slow speed. But if I load them in a batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated in this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to hang it up and then kill it. If I use more than 2 CPUs, a Control-C would simply cause the following error: ```bash ^C Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker task = get() File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get res = self._reader.recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt Generating train split: 92431 examples [01:23, 1104.25 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module> a = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get raise TimeoutError multiprocess.context.TimeoutError ``` I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON entries may contain very long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system resources. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1 TB of RAM. Thanks for your efforts and patience! Any suggestion or help would be appreciated. ### Steps to reproduce the bug 1. use load_dataset() with `data_files = LIST_OF_FILES` ### Expected behavior All the files should be smoothly loaded. ### Environment info - Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each JSON structure only contains one key: `text`. Format checked. - `datasets` version: 2.14.2 - Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609 - Pandas version: 1.5.2
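As a possible way to sidestep the stuck `Generating train split` phase entirely (at the cost of not materializing an Arrow cache), the JSON builder also supports streaming. A minimal sketch, with the file list as a placeholder:

```python
from datasets import load_dataset

# Placeholder paths standing in for the ~2500 real .jsonl files.
LIST_OF_FILE_PATHS = ["part-000.jsonl", "part-001.jsonl"]

# streaming=True skips the up-front Arrow conversion that appears to hang
# and yields examples lazily instead.
streamed = load_dataset("json", data_files=LIST_OF_FILE_PATHS, streaming=True)["train"]

for example in streamed:
    print(example["text"][:100])
    break
```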
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6108/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6107/comments
https://api.github.com/repos/huggingface/datasets/issues/6107/events
https://github.com/huggingface/datasets/pull/6107
1,829,625,320
PR_kwDODunzps5W0rLR
6,107
Fix deprecation of use_auth_token in file_utils
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007678 / 0.011353 (-0.003675) | 0.004233 / 0.011008 (-0.006776) | 0.095934 / 0.038508 (0.057426) | 0.064201 / 0.023109 (0.041092) | 0.345765 / 0.275898 (0.069867) | 0.383089 / 0.323480 (0.059609) | 0.004084 / 0.007986 (-0.003902) | 0.003311 / 0.004328 (-0.001017) | 0.072367 / 0.004250 (0.068117) | 0.048252 / 0.037052 (0.011200) | 0.338340 / 0.258489 (0.079851) | 0.391627 / 0.293841 (0.097786) | 0.045203 / 0.128546 (-0.083343) | 0.013494 / 0.075646 (-0.062153) | 0.314097 / 0.419271 (-0.105174) | 0.058183 / 0.043533 (0.014650) | 0.353946 / 0.255139 (0.098807) | 0.385181 / 0.283200 (0.101981) | 0.033111 / 0.141683 (-0.108572) | 1.578489 / 1.452155 (0.126335) | 1.631660 / 1.492716 (0.138944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202592 / 0.018006 (0.184586) | 0.506450 / 0.000490 (0.505961) | 0.004630 / 0.000200 (0.004430) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024761 / 0.037411 (-0.012651) | 0.086295 / 0.014526 (0.071769) | 0.094063 / 0.176557 (-0.082494) | 0.154189 / 0.737135 (-0.582947) | 0.096273 / 0.296338 (-0.200065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581731 / 0.215209 (0.366522) | 5.552020 / 2.077655 (3.474365) | 2.430800 / 1.504120 (0.926680) | 2.130864 / 1.541195 (0.589669) | 2.092802 / 1.468490 
(0.624312) | 0.833956 / 4.584777 (-3.750821) | 4.840859 / 3.745712 (1.095147) | 4.267812 / 5.269862 (-1.002050) | 2.663245 / 4.565676 (-1.902432) | 0.093195 / 0.424275 (-0.331080) | 0.007942 / 0.007607 (0.000335) | 0.651457 / 0.226044 (0.425413) | 6.782986 / 2.268929 (4.514058) | 3.103307 / 55.444624 (-52.341318) | 2.373933 / 6.876477 (-4.502544) | 2.571613 / 2.142072 (0.429540) | 0.981389 / 4.805227 (-3.823839) | 0.199019 / 6.500664 (-6.301645) | 0.065828 / 0.075469 (-0.009641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429778 / 1.841788 (-0.412009) | 20.967563 / 8.074308 (12.893255) | 19.329723 / 10.191392 (9.138331) | 0.222048 / 0.680424 (-0.458376) | 0.033507 / 0.534201 (-0.500694) | 0.436801 / 0.579283 (-0.142482) | 0.530197 / 0.434364 (0.095833) | 0.491532 / 0.540337 (-0.048805) | 0.718216 / 1.386936 (-0.668720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007798 / 0.011353 (-0.003555) | 0.004748 / 0.011008 (-0.006260) | 0.070847 / 0.038508 (0.032339) | 0.069338 / 0.023109 (0.046229) | 0.400890 / 0.275898 (0.124992) | 0.429482 / 0.323480 (0.106002) | 0.006469 / 0.007986 (-0.001517) | 0.003514 / 0.004328 (-0.000814) | 0.069049 / 0.004250 (0.064798) | 0.059800 / 0.037052 (0.022748) | 0.415644 / 0.258489 (0.157155) | 0.432562 / 0.293841 (0.138721) | 0.043778 / 0.128546 (-0.084768) | 0.015141 / 0.075646 (-0.060506) | 0.081521 / 0.419271 (-0.337750) | 0.054692 / 0.043533 (0.011160) | 0.404497 / 0.255139 (0.149358) | 0.419783 / 0.283200 (0.136583) | 0.029588 / 0.141683 (-0.112094) | 1.593506 / 1.452155 (0.141351) | 1.615977 / 1.492716 (0.123261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270981 / 0.018006 (0.252975) | 0.522074 / 0.000490 (0.521584) | 0.026568 / 0.000200 (0.026368) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031551 / 0.037411 (-0.005861) | 0.086723 / 0.014526 (0.072197) | 0.103315 / 0.176557 (-0.073242) | 0.154692 / 0.737135 (-0.582443) | 0.099472 / 0.296338 (-0.196866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570238 / 0.215209 (0.355029) | 5.655963 / 2.077655 (3.578308) | 2.662670 / 1.504120 (1.158550) | 2.380903 / 1.541195 (0.839709) | 2.409467 / 1.468490 (0.940977) | 0.828055 / 4.584777 (-3.756722) | 4.964698 / 3.745712 (1.218986) | 4.299995 / 5.269862 (-0.969867) | 2.824162 / 4.565676 (-1.741514) | 0.095872 / 0.424275 (-0.328403) | 0.007907 / 0.007607 (0.000300) | 0.701595 / 0.226044 (0.475551) | 7.131965 / 2.268929 (4.863036) | 3.250554 / 55.444624 (-52.194070) | 2.531916 / 6.876477 (-4.344561) | 2.717908 / 2.142072 (0.575835) | 1.014479 / 4.805227 (-3.790748) | 0.223804 / 6.500664 (-6.276861) | 0.071893 / 0.075469 (-0.003576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541702 / 1.841788 (-0.300086) | 21.668219 / 8.074308 (13.593911) | 18.916032 / 10.191392 (8.724640) | 0.205915 / 0.680424 (-0.474508) | 0.026356 / 0.534201 (-0.507845) | 0.429122 / 0.579283 (-0.150161) | 0.506110 / 0.434364 (0.071746) | 0.510148 / 0.540337 (-0.030190) | 0.724699 / 1.386936 (-0.662237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4ca93ff86551b398c979862e7be7305725a240b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006884 / 0.011353 (-0.004469) | 0.004492 / 0.011008 (-0.006516) | 0.085439 / 0.038508 (0.046931) | 0.083905 / 0.023109 (0.060796) | 0.313604 / 0.275898 (0.037706) | 0.354683 / 0.323480 (0.031203) | 0.006535 / 0.007986 (-0.001451) | 0.004318 / 0.004328 (-0.000011) | 0.066129 / 0.004250 (0.061879) | 0.057568 / 0.037052 (0.020516) | 0.317162 / 0.258489 (0.058672) | 0.372501 / 0.293841 (0.078660) | 0.031059 / 0.128546 (-0.097488) | 0.009013 / 0.075646 (-0.066634) | 0.288794 / 0.419271 (-0.130478) | 0.053326 / 0.043533 (0.009793) | 0.314318 / 0.255139 (0.059179) | 0.357505 / 0.283200 (0.074305) | 0.027020 / 0.141683 (-0.114663) | 1.530653 / 1.452155 (0.078498) | 1.599782 / 1.492716 (0.107066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278788 / 0.018006 (0.260782) | 0.626822 / 0.000490 (0.626333) | 0.003780 / 0.000200 (0.003580) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031703 / 0.037411 (-0.005708) | 0.085654 / 0.014526 (0.071128) | 0.754858 / 0.176557 (0.578301) | 0.212251 / 0.737135 (-0.524885) | 0.171344 / 0.296338 (-0.124994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382291 / 0.215209 (0.167082) | 3.825612 / 2.077655 (1.747958) | 1.874553 / 1.504120 (0.370433) | 1.712574 / 1.541195 (0.171379) | 1.791479 / 1.468490 (0.322989) | 0.481005 / 4.584777 (-4.103772) | 3.530559 / 3.745712 (-0.215153) | 3.395305 / 5.269862 (-1.874557) | 2.133747 / 4.565676 (-2.431930) | 0.056139 / 0.424275 (-0.368136) | 0.007424 / 0.007607 (-0.000183) | 0.458321 / 0.226044 (0.232277) | 4.577665 / 2.268929 (2.308736) | 2.380233 / 55.444624 (-53.064392) | 2.004060 / 6.876477 (-4.872417) | 2.290712 / 2.142072 (0.148639) | 0.570157 / 4.805227 (-4.235070) | 0.131670 / 6.500664 (-6.368994) | 0.060684 / 0.075469 (-0.014785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294929 / 1.841788 (-0.546858) | 21.386663 / 8.074308 (13.312355) | 14.389440 / 10.191392 (4.198048) | 0.171177 / 0.680424 (-0.509247) | 0.018660 / 0.534201 (-0.515541) | 0.394385 / 0.579283 (-0.184898) | 0.424942 / 0.434364 (-0.009422) | 0.463618 / 0.540337 (-0.076719) | 0.651499 / 
1.386936 (-0.735437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007079 / 0.011353 (-0.004274) | 0.004615 / 0.011008 (-0.006393) | 0.066300 / 0.038508 (0.027792) | 0.092636 / 0.023109 (0.069527) | 0.399080 / 0.275898 (0.123182) | 0.429873 / 0.323480 (0.106393) | 0.006689 / 0.007986 (-0.001297) | 0.004358 / 0.004328 (0.000029) | 0.067155 / 0.004250 (0.062905) | 0.064040 / 0.037052 (0.026988) | 0.399905 / 0.258489 (0.141416) | 0.448237 / 0.293841 (0.154397) | 0.031985 / 0.128546 (-0.096561) | 0.009053 / 0.075646 (-0.066593) | 0.071904 / 0.419271 (-0.347368) | 0.048759 / 0.043533 (0.005227) | 0.386797 / 0.255139 (0.131658) | 0.411240 / 0.283200 (0.128040) | 0.028568 / 0.141683 (-0.113115) | 1.501037 / 1.452155 (0.048882) | 1.594560 / 1.492716 (0.101844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300756 / 0.018006 (0.282750) | 0.631220 / 0.000490 (0.630730) | 0.010163 / 0.000200 (0.009963) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033716 / 0.037411 (-0.003695) | 0.093562 / 0.014526 (0.079037) | 0.106975 / 0.176557 (-0.069582) | 0.161919 / 0.737135 (-0.575216) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410392 / 0.215209 (0.195183) | 4.094411 / 2.077655 (2.016756) | 2.085868 / 1.504120 (0.581748) | 1.959589 / 1.541195 (0.418394) | 2.096683 / 1.468490 (0.628193) | 
0.494593 / 4.584777 (-4.090184) | 3.854302 / 3.745712 (0.108590) | 3.742303 / 5.269862 (-1.527558) | 2.379983 / 4.565676 (-2.185693) | 0.058640 / 0.424275 (-0.365635) | 0.008092 / 0.007607 (0.000484) | 0.486957 / 0.226044 (0.260912) | 4.855784 / 2.268929 (2.586855) | 2.654029 / 55.444624 (-52.790595) | 2.237627 / 6.876477 (-4.638850) | 2.536955 / 2.142072 (0.394882) | 0.622398 / 4.805227 (-4.182829) | 0.139212 / 6.500664 (-6.361452) | 0.062805 / 0.075469 (-0.012664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374862 / 1.841788 (-0.466926) | 22.797015 / 8.074308 (14.722707) | 14.393995 / 10.191392 (4.202603) | 0.196603 / 0.680424 (-0.483821) | 0.018602 / 0.534201 (-0.515599) | 0.394568 / 0.579283 (-0.184715) | 0.408792 / 0.434364 (-0.025572) | 0.486706 / 0.540337 (-0.053631) | 0.652365 / 1.386936 (-0.734571) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5713299a88f527ea162a099c2bf2cbceada8fb86 \"CML watermark\")\n" ]
2023-07-31T16:32:01
2023-08-03T10:13:32
2023-08-03T10:04:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6107", "html_url": "https://github.com/huggingface/datasets/pull/6107", "diff_url": "https://github.com/huggingface/datasets/pull/6107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6107.patch", "merged_at": "2023-08-03T10:04:18" }
Fix issues with the deprecation of `use_auth_token` introduced by: - #5996 in functions: - `get_authentication_headers_for_url` - `request_etag` - `get_from_cache` Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588 ``` FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' ``` Related to: - #6094
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6107/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6106/comments
https://api.github.com/repos/huggingface/datasets/issues/6106/events
https://github.com/huggingface/datasets/issues/6106
1,829,131,223
I_kwDODunzps5tBlPX
6,106
load local json_file as dataset
{ "login": "CiaoHe", "id": 39040787, "node_id": "MDQ6VXNlcjM5MDQwNzg3", "avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CiaoHe", "html_url": "https://github.com/CiaoHe", "followers_url": "https://api.github.com/users/CiaoHe/followers", "following_url": "https://api.github.com/users/CiaoHe/following{/other_user}", "gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}", "starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions", "organizations_url": "https://api.github.com/users/CiaoHe/orgs", "repos_url": "https://api.github.com/users/CiaoHe/repos", "events_url": "https://api.github.com/users/CiaoHe/events{/privacy}", "received_events_url": "https://api.github.com/users/CiaoHe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-31T12:53:49
2023-07-31T12:53:49
null
NONE
null
null
null
### Describe the bug I tried to load a local json file as a dataset but failed to parse the json file because some columns are 'float' type. ### Steps to reproduce the bug 1. Load a json file in which certain columns are 'float' type. For example `data = load_dataset("json", data_files=JSON_PATH)` 2. Then, an error like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` is triggered. ### Expected behavior Columns of 'float' type should be allowed, or at least those columns should be converted to str type. I tried to avoid the error by naively converting the float items to str: ```python # if col type is not str, we need to convert it to str mapping = {} for col in keys: if isinstance(dataset[0][col], str): mapping[col] = [row.get(col) for row in dataset] else: mapping[col] = [str(row.get(col)) for row in dataset] ``` ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6106/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6105/comments
https://api.github.com/repos/huggingface/datasets/issues/6105/events
https://github.com/huggingface/datasets/pull/6105
1,829,008,430
PR_kwDODunzps5WyiJD
6,105
Fix error when loading from GCP bucket
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006706 / 0.011353 (-0.004647) | 0.004016 / 0.011008 (-0.006992) | 0.083696 / 0.038508 (0.045188) | 0.074340 / 0.023109 (0.051230) | 0.327338 / 0.275898 (0.051440) | 0.366663 / 0.323480 (0.043183) | 0.004052 / 0.007986 (-0.003934) | 0.003423 / 0.004328 (-0.000906) | 0.064576 / 0.004250 (0.060326) | 0.055037 / 0.037052 (0.017985) | 0.325089 / 0.258489 (0.066600) | 0.379986 / 0.293841 (0.086145) | 0.031614 / 0.128546 (-0.096932) | 0.008553 / 0.075646 (-0.067094) | 0.287430 / 0.419271 (-0.131841) | 0.053032 / 0.043533 (0.009499) | 0.318990 / 0.255139 (0.063851) | 0.364426 / 0.283200 (0.081226) | 0.024926 / 0.141683 (-0.116757) | 1.461835 / 1.452155 (0.009680) | 1.557172 / 1.492716 (0.064456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212430 / 0.018006 (0.194424) | 0.512891 / 0.000490 (0.512402) | 0.004772 / 0.000200 (0.004572) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027873 / 0.037411 (-0.009538) | 0.085598 / 0.014526 (0.071072) | 0.097330 / 0.176557 (-0.079226) | 0.152235 / 0.737135 (-0.584900) | 0.097787 / 0.296338 (-0.198552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384645 / 0.215209 (0.169436) | 3.841161 / 2.077655 (1.763506) | 
1.863696 / 1.504120 (0.359577) | 1.685082 / 1.541195 (0.143887) | 1.772904 / 1.468490 (0.304414) | 0.480177 / 4.584777 (-4.104599) | 3.601537 / 3.745712 (-0.144175) | 3.273647 / 5.269862 (-1.996214) | 2.014415 / 4.565676 (-2.551261) | 0.056668 / 0.424275 (-0.367607) | 0.007257 / 0.007607 (-0.000350) | 0.458194 / 0.226044 (0.232150) | 4.577311 / 2.268929 (2.308382) | 2.333983 / 55.444624 (-53.110641) | 1.964508 / 6.876477 (-4.911969) | 2.193379 / 2.142072 (0.051307) | 0.577557 / 4.805227 (-4.227670) | 0.133899 / 6.500664 (-6.366765) | 0.060804 / 0.075469 (-0.014665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249490 / 1.841788 (-0.592298) | 19.791875 / 8.074308 (11.717567) | 14.418728 / 10.191392 (4.227336) | 0.167788 / 0.680424 (-0.512636) | 0.018993 / 0.534201 (-0.515208) | 0.396141 / 0.579283 (-0.183142) | 0.412427 / 0.434364 (-0.021937) | 0.456718 / 0.540337 (-0.083619) | 0.641383 / 1.386936 (-0.745553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006546 / 0.011353 (-0.004807) | 0.004059 / 0.011008 (-0.006949) | 0.064523 / 0.038508 (0.026015) | 0.074988 / 0.023109 (0.051878) | 0.388932 / 0.275898 (0.113034) | 0.424496 / 0.323480 (0.101016) | 0.005226 / 0.007986 (-0.002760) | 0.003409 / 0.004328 (-0.000920) | 0.064284 / 0.004250 (0.060034) | 0.056829 / 0.037052 (0.019777) | 0.386457 / 0.258489 (0.127968) | 0.428063 / 0.293841 (0.134222) | 0.031411 / 0.128546 (-0.097136) | 0.008577 / 0.075646 (-0.067070) | 0.070357 / 0.419271 (-0.348915) | 0.048920 / 0.043533 (0.005388) | 0.385197 / 0.255139 (0.130058) | 0.407167 / 0.283200 (0.123967) | 0.024469 / 0.141683 (-0.117214) | 1.482733 / 1.452155 (0.030578) | 1.539027 / 1.492716 (0.046311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227532 / 0.018006 (0.209526) | 0.448792 / 0.000490 (0.448302) | 0.004139 / 0.000200 (0.003939) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031004 / 0.037411 (-0.006408) | 0.088163 / 0.014526 (0.073637) | 0.101452 / 0.176557 (-0.075105) | 0.152907 / 0.737135 (-0.584229) | 0.102325 / 0.296338 (-0.194014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418092 / 0.215209 (0.202883) | 4.162277 / 2.077655 (2.084623) | 2.232987 / 1.504120 (0.728867) | 2.143583 / 1.541195 (0.602388) | 2.246142 / 1.468490 (0.777652) | 0.490181 / 4.584777 (-4.094596) | 3.631514 / 3.745712 (-0.114198) | 3.315025 / 5.269862 (-1.954837) | 2.101853 / 4.565676 (-2.463823) | 0.057905 / 0.424275 (-0.366370) | 0.007686 / 0.007607 (0.000079) | 0.489965 / 0.226044 (0.263921) | 4.894375 / 2.268929 (2.625447) | 2.655459 / 55.444624 (-52.789165) | 2.262211 / 6.876477 (-4.614266) | 2.505335 / 2.142072 (0.363263) | 0.591329 / 4.805227 (-4.213898) | 0.133554 / 6.500664 (-6.367110) | 0.061922 / 0.075469 (-0.013547) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347483 / 1.841788 (-0.494304) | 20.027011 / 8.074308 (11.952703) | 14.430737 / 10.191392 (4.239345) | 0.165767 / 0.680424 (-0.514657) | 0.018460 / 0.534201 (-0.515741) | 0.393790 / 0.579283 (-0.185494) | 0.407213 / 0.434364 (-0.027151) | 0.474459 / 0.540337 (-0.065879) | 0.635054 / 1.386936 (-0.751882) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7f575111481e2e2f4d4fc9180771797f69ebcc44 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007652 / 0.011353 (-0.003701) | 0.004581 / 0.011008 (-0.006427) | 0.101629 / 0.038508 (0.063121) | 0.090233 / 0.023109 (0.067124) | 0.392789 / 0.275898 (0.116891) | 0.432163 / 0.323480 (0.108683) | 0.004694 / 0.007986 (-0.003292) | 0.003927 / 0.004328 (-0.000401) | 0.076533 / 0.004250 (0.072282) | 0.064442 / 0.037052 (0.027390) | 0.397539 / 0.258489 (0.139050) | 0.441323 / 0.293841 (0.147482) | 0.036278 / 0.128546 (-0.092268) | 0.009810 / 0.075646 (-0.065836) | 0.343537 / 0.419271 (-0.075734) | 0.060273 / 0.043533 (0.016740) | 0.395023 / 0.255139 (0.139884) | 0.427210 / 0.283200 (0.144011) | 0.031717 / 0.141683 (-0.109966) | 1.771221 / 1.452155 (0.319066) | 1.896336 / 1.492716 (0.403620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235081 / 0.018006 (0.217075) | 0.512781 / 0.000490 (0.512292) | 0.004920 / 0.000200 (0.004721) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033525 / 0.037411 (-0.003887) | 0.104416 / 0.014526 (0.089890) | 0.115695 / 0.176557 (-0.060861) | 0.182216 / 0.737135 (-0.554919) | 0.116259 / 0.296338 (-0.180079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454817 / 0.215209 (0.239608) | 4.527753 / 2.077655 (2.450098) | 2.222273 / 1.504120 (0.718153) | 2.038448 / 1.541195 (0.497253) | 2.179444 / 1.468490 (0.710953) | 0.573665 / 4.584777 (-4.011112) | 4.504943 / 3.745712 (0.759231) | 3.848435 / 5.269862 (-1.421427) | 2.455185 / 4.565676 (-2.110491) | 0.067985 / 0.424275 (-0.356290) | 0.008719 / 0.007607 (0.001112) | 0.552405 / 0.226044 (0.326360) | 5.515251 / 2.268929 (3.246322) | 2.851557 / 55.444624 (-52.593067) | 2.463070 / 6.876477 (-4.413407) | 2.761596 / 2.142072 (0.619524) | 0.688561 / 4.805227 (-4.116667) | 0.159946 / 6.500664 (-6.340718) | 0.075435 / 0.075469 (-0.000034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505178 / 1.841788 (-0.336610) | 23.555236 / 8.074308 (15.480928) | 17.272759 / 10.191392 (7.081367) | 0.206495 / 0.680424 (-0.473928) | 0.021869 / 0.534201 (-0.512332) | 0.469271 / 0.579283 (-0.110012) | 0.469200 / 0.434364 (0.034837) | 0.542437 / 0.540337 
(0.002100) | 0.792864 / 1.386936 (-0.594072) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008151 / 0.011353 (-0.003202) | 0.004992 / 0.011008 (-0.006016) | 0.079545 / 0.038508 (0.041037) | 0.100234 / 0.023109 (0.077125) | 0.492791 / 0.275898 (0.216893) | 0.511315 / 0.323480 (0.187835) | 0.006878 / 0.007986 (-0.001108) | 0.003807 / 0.004328 (-0.000522) | 0.080876 / 0.004250 (0.076625) | 0.076734 / 0.037052 (0.039681) | 0.518247 / 0.258489 (0.259758) | 0.524202 / 0.293841 (0.230361) | 0.039896 / 0.128546 (-0.088650) | 0.016581 / 0.075646 (-0.059065) | 0.101228 / 0.419271 (-0.318043) | 0.061990 / 0.043533 (0.018457) | 0.490611 / 0.255139 (0.235472) | 0.514930 / 0.283200 (0.231730) | 0.028680 / 0.141683 (-0.113002) | 1.966215 / 1.452155 (0.514061) | 2.047757 / 1.492716 (0.555040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286807 / 0.018006 (0.268801) | 0.506448 / 0.000490 (0.505959) | 0.005867 / 0.000200 (0.005667) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037141 / 0.037411 (-0.000270) | 0.113232 / 0.014526 (0.098706) | 0.121201 / 0.176557 (-0.055356) | 0.185472 / 0.737135 (-0.551663) | 0.122896 / 0.296338 (-0.173442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514491 / 0.215209 (0.299282) | 4.942457 / 2.077655 (2.864802) | 2.533519 / 1.504120 (1.029399) | 2.371011 / 1.541195 (0.829817) | 2.495604 / 
1.468490 (1.027114) | 0.576224 / 4.584777 (-4.008553) | 4.368584 / 3.745712 (0.622872) | 3.885598 / 5.269862 (-1.384263) | 2.443596 / 4.565676 (-2.122080) | 0.068905 / 0.424275 (-0.355371) | 0.009171 / 0.007607 (0.001564) | 0.584977 / 0.226044 (0.358932) | 5.835220 / 2.268929 (3.566291) | 3.189037 / 55.444624 (-52.255588) | 2.753228 / 6.876477 (-4.123249) | 3.009062 / 2.142072 (0.866990) | 0.690179 / 4.805227 (-4.115048) | 0.157981 / 6.500664 (-6.342683) | 0.074518 / 0.075469 (-0.000951) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599907 / 1.841788 (-0.241880) | 23.853903 / 8.074308 (15.779595) | 17.419796 / 10.191392 (7.228404) | 0.204974 / 0.680424 (-0.475450) | 0.022014 / 0.534201 (-0.512187) | 0.473379 / 0.579283 (-0.105905) | 0.461346 / 0.434364 (0.026982) | 0.564881 / 0.540337 (0.024543) | 0.752933 / 1.386936 (-0.634003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f49c9ca993fa600fae0e327636d52657328e7ffb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004805) | 0.004020 / 0.011008 (-0.006988) | 0.086828 / 0.038508 (0.048320) | 0.072924 / 0.023109 (0.049815) | 0.312847 / 0.275898 (0.036949) | 0.344605 / 0.323480 (0.021125) | 0.004117 / 0.007986 (-0.003868) | 0.004365 / 0.004328 (0.000037) | 0.066755 / 0.004250 (0.062505) | 0.053248 / 0.037052 (0.016195) | 0.315744 / 0.258489 (0.057255) | 0.362426 / 0.293841 (0.068585) | 0.030732 / 0.128546 (-0.097814) | 0.008516 / 0.075646 (-0.067130) | 0.289927 / 0.419271 (-0.129345) | 0.052115 / 0.043533 (0.008582) | 0.308026 / 0.255139 (0.052887) | 0.343115 / 0.283200 (0.059915) | 0.024131 / 0.141683 (-0.117551) | 1.464290 / 1.452155 (0.012135) | 1.559359 / 1.492716 (0.066642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216744 / 0.018006 (0.198738) | 0.473156 / 0.000490 (0.472666) | 0.004176 / 0.000200 
(0.003977) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028500 / 0.037411 (-0.008911) | 0.083892 / 0.014526 (0.069366) | 0.131851 / 0.176557 (-0.044705) | 0.162202 / 0.737135 (-0.574933) | 0.127989 / 0.296338 (-0.168349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404555 / 0.215209 (0.189346) | 4.035989 / 2.077655 (1.958334) | 2.025174 / 1.504120 (0.521054) | 1.835785 / 1.541195 (0.294590) | 1.909819 / 1.468490 (0.441329) | 0.475352 / 4.584777 (-4.109425) | 3.548055 / 3.745712 (-0.197657) | 3.234782 / 5.269862 (-2.035080) | 2.010305 / 4.565676 (-2.555371) | 0.056507 / 0.424275 (-0.367768) | 0.007259 / 0.007607 (-0.000348) | 0.482021 / 0.226044 (0.255977) | 4.818559 / 2.268929 (2.549631) | 2.528765 / 55.444624 (-52.915860) | 2.159804 / 6.876477 (-4.716673) | 2.380640 / 2.142072 (0.238567) | 0.585005 / 4.805227 (-4.220222) | 0.133811 / 6.500664 (-6.366853) | 0.060686 / 0.075469 (-0.014783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260902 / 1.841788 (-0.580886) | 19.500215 / 8.074308 (11.425907) | 14.164698 / 10.191392 (3.973306) | 0.172492 / 0.680424 (-0.507932) | 0.018221 / 0.534201 (-0.515980) | 0.392609 / 0.579283 (-0.186674) | 0.423265 / 0.434364 (-0.011099) | 0.454705 / 0.540337 (-0.085633) | 0.639856 / 1.386936 (-0.747080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006656 / 0.011353 (-0.004697) | 0.003903 / 0.011008 (-0.007106) | 0.063780 / 0.038508 (0.025272) | 0.076848 / 0.023109 (0.053739) | 0.379429 / 0.275898 (0.103531) | 0.442554 / 0.323480 (0.119074) | 0.005327 / 0.007986 (-0.002658) | 0.003318 / 0.004328 (-0.001010) | 0.064307 / 0.004250 (0.060056) | 0.057183 / 0.037052 (0.020131) | 0.398163 / 0.258489 (0.139674) | 0.448532 / 0.293841 (0.154691) | 0.031322 / 0.128546 (-0.097224) | 0.008462 / 0.075646 (-0.067184) | 0.070354 / 0.419271 (-0.348917) | 0.048420 / 0.043533 (0.004887) | 0.368304 / 0.255139 (0.113165) | 0.428786 / 0.283200 (0.145587) | 0.023921 / 0.141683 (-0.117762) | 1.499281 / 1.452155 (0.047126) | 1.554448 / 1.492716 (0.061731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238830 / 0.018006 (0.220824) | 0.464196 / 0.000490 (0.463706) | 0.004812 / 0.000200 (0.004613) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031642 / 0.037411 (-0.005770) | 0.089205 / 0.014526 (0.074679) | 0.101577 / 0.176557 (-0.074980) | 0.154993 / 0.737135 (-0.582142) | 0.102935 / 0.296338 (-0.193403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415218 / 0.215209 (0.200009) | 4.137711 / 2.077655 (2.060056) | 2.128757 / 1.504120 (0.624637) | 1.961086 / 1.541195 (0.419891) | 2.047552 / 1.468490 (0.579061) | 0.486953 / 4.584777 (-4.097824) | 3.587851 / 3.745712 (-0.157861) | 3.280771 / 5.269862 (-1.989090) | 2.016980 / 4.565676 (-2.548697) | 0.057284 / 0.424275 (-0.366991) | 0.007705 / 0.007607 (0.000097) | 0.492242 / 0.226044 (0.266197) | 4.923213 / 2.268929 (2.654285) | 2.672528 / 55.444624 (-52.772097) | 2.292862 / 6.876477 (-4.583614) | 2.517410 / 2.142072 (0.375337) | 0.614798 / 4.805227 (-4.190429) | 0.149642 / 6.500664 (-6.351023) | 0.062898 / 0.075469 (-0.012571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323266 / 1.841788 (-0.518522) | 19.891504 / 8.074308 (11.817196) | 14.115069 / 10.191392 (3.923677) | 0.169859 / 0.680424 (-0.510564) | 0.018538 / 0.534201 (-0.515663) | 0.398456 / 0.579283 (-0.180827) | 0.410111 / 0.434364 (-0.024253) | 0.483198 / 0.540337 (-0.057139) | 0.639283 / 1.386936 (-0.747653) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#01e2194f2aab6aa98686a2069ee5201b69a53c14 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007731 / 0.011353 (-0.003622) | 0.004064 / 0.011008 (-0.006944) | 0.095261 / 0.038508 (0.056753) | 0.081594 / 0.023109 (0.058485) | 0.390413 / 0.275898 (0.114515) | 0.415542 / 0.323480 (0.092063) | 0.006031 / 0.007986 (-0.001954) | 0.003817 / 0.004328 (-0.000512) | 0.066381 / 0.004250 (0.062131) | 0.058262 / 0.037052 (0.021210) | 0.383626 / 0.258489 (0.125137) | 0.443237 / 0.293841 (0.149396) | 0.034358 / 0.128546 (-0.094188) | 0.010002 / 0.075646 (-0.065644) | 0.317472 / 0.419271 (-0.101800) | 0.057428 / 0.043533 (0.013895) | 0.393929 / 0.255139 (0.138790) | 0.444572 / 0.283200 (0.161373) | 0.026295 / 0.141683 (-0.115388) | 1.603639 / 1.452155 (0.151484) | 1.707750 / 1.492716 (0.215034) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222171 / 0.018006 (0.204165) | 0.491762 / 0.000490 (0.491272) | 0.003389 / 0.000200 (0.003189) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029420 / 0.037411 (-0.007991) | 0.086201 / 0.014526 (0.071676) | 0.100150 / 0.176557 (-0.076406) | 0.162338 / 0.737135 (-0.574797) | 0.099349 / 0.296338 (-0.196989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445976 / 0.215209 
(0.230767) | 4.460197 / 2.077655 (2.382542) | 2.211767 / 1.504120 (0.707647) | 1.988740 / 1.541195 (0.447545) | 2.052289 / 1.468490 (0.583799) | 0.570321 / 4.584777 (-4.014456) | 4.148777 / 3.745712 (0.403065) | 3.750977 / 5.269862 (-1.518885) | 2.309443 / 4.565676 (-2.256234) | 0.064552 / 0.424275 (-0.359724) | 0.008167 / 0.007607 (0.000560) | 0.523283 / 0.226044 (0.297238) | 5.349347 / 2.268929 (3.080419) | 2.710292 / 55.444624 (-52.734332) | 2.344252 / 6.876477 (-4.532225) | 2.549903 / 2.142072 (0.407831) | 0.665942 / 4.805227 (-4.139285) | 0.154108 / 6.500664 (-6.346556) | 0.070181 / 0.075469 (-0.005289) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455733 / 1.841788 (-0.386054) | 21.846958 / 8.074308 (13.772650) | 15.133865 / 10.191392 (4.942473) | 0.199009 / 0.680424 (-0.481415) | 0.021299 / 0.534201 (-0.512902) | 0.421555 / 0.579283 (-0.157729) | 0.437639 / 0.434364 (0.003275) | 0.498568 / 0.540337 (-0.041769) | 0.719649 / 1.386936 (-0.667287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007858 / 0.011353 (-0.003495) | 0.004629 / 0.011008 (-0.006380) | 0.075701 / 0.038508 (0.037193) | 0.084425 / 0.023109 (0.061316) | 0.436650 / 0.275898 (0.160752) | 0.466046 / 0.323480 (0.142566) | 0.006042 / 0.007986 (-0.001944) | 0.003834 / 0.004328 (-0.000495) | 0.074729 / 0.004250 (0.070478) | 0.065983 / 0.037052 (0.028931) | 0.447239 / 0.258489 (0.188750) | 0.466728 / 0.293841 (0.172887) | 0.035814 / 0.128546 (-0.092733) | 0.009919 / 0.075646 (-0.065727) | 0.081151 / 0.419271 (-0.338120) | 0.057256 / 0.043533 (0.013723) | 0.435609 / 0.255139 (0.180470) | 0.448901 / 0.283200 (0.165701) | 0.026325 / 0.141683 (-0.115357) | 1.745658 / 1.452155 (0.293503) | 1.804137 / 1.492716 (0.311421) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302551 / 0.018006 (0.284544) | 0.498438 / 0.000490 (0.497948) | 0.038562 / 0.000200 (0.038362) | 0.000411 / 0.000054 
(0.000356) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035573 / 0.037411 (-0.001839) | 0.104957 / 0.014526 (0.090431) | 0.117208 / 0.176557 (-0.059349) | 0.178935 / 0.737135 (-0.558200) | 0.124577 / 0.296338 (-0.171761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467076 / 0.215209 (0.251867) | 4.698852 / 2.077655 (2.621197) | 2.453389 / 1.504120 (0.949269) | 2.257378 / 1.541195 (0.716183) | 2.338615 / 1.468490 (0.870125) | 0.542379 / 4.584777 (-4.042398) | 4.066895 / 3.745712 (0.321183) | 3.689540 / 5.269862 (-1.580321) | 2.268997 / 4.565676 (-2.296679) | 0.064754 / 0.424275 (-0.359521) | 0.008866 / 0.007607 (0.001259) | 0.546732 / 0.226044 (0.320687) | 5.487765 / 2.268929 (3.218836) | 2.974126 / 55.444624 (-52.470498) | 2.585492 / 6.876477 (-4.290985) | 2.754417 / 2.142072 (0.612345) | 0.652045 / 4.805227 (-4.153183) | 0.145597 / 6.500664 (-6.355067) | 0.065415 / 0.075469 (-0.010054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553970 / 1.841788 (-0.287818) | 22.300954 / 8.074308 (14.226646) | 15.640990 / 10.191392 (5.449598) | 0.170903 / 0.680424 (-0.509521) | 0.021750 / 0.534201 (-0.512451) | 0.455316 / 0.579283 (-0.123967) | 0.455051 / 0.434364 (0.020687) | 0.536174 / 0.540337 (-0.004164) | 0.735930 / 1.386936 (-0.651006) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f68139846c26b43631bd235114854f4bf6cb9954 \"CML watermark\")\n" ]
2023-07-31T11:44:46
2023-08-01T10:48:52
2023-08-01T10:38:54
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6105", "html_url": "https://github.com/huggingface/datasets/pull/6105", "diff_url": "https://github.com/huggingface/datasets/pull/6105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6105.patch", "merged_at": "2023-08-01T10:38:54" }
Fix `resolve_pattern` for filesystems with tuple protocol. Fix #6100. The buggy code lines were introduced by: - #6028
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6105/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6104/comments
https://api.github.com/repos/huggingface/datasets/issues/6104/events
https://github.com/huggingface/datasets/issues/6104
1,828,959,107
I_kwDODunzps5tA7OD
6,104
HF Datasets data access is extremely slow even when in memory
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462" ]
2023-07-31T11:12:19
2023-08-01T11:22:43
null
CONTRIBUTOR
null
null
null
### Describe the bug Doing a simple `some_dataset[:10]` can take more than a minute. Profiling it: <img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab"> `some_dataset` is completely in memory with no disk cache. This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long? It's faster to produce the dataset from scratch than to access it from HF Datasets! ### Steps to reproduce the bug I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1). ```python #!/usr/bin/env python3 import sys import time import torch from datasets import load_dataset def main(dataset_name): # Start the timer start_time = time.time() # Load the dataset from Hugging Face Hub dataset = load_dataset(dataset_name) # Set the dataset format as torch dataset.set_format(type="torch") # Perform an identity map dataset = dataset.map(lambda example: example, batched=True, batch_size=20) # End the timer end_time = time.time() # Print the time taken print(f"Time taken: {end_time - start_time:.2f} seconds") if __name__ == "__main__": dataset_name = "NightMachinery/hf_datasets_bug1" print(f"dataset_name: {dataset_name}") main(dataset_name) ``` ### Expected behavior _ ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6104/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6103/comments
https://api.github.com/repos/huggingface/datasets/issues/6103/events
https://github.com/huggingface/datasets/pull/6103
1,828,515,165
PR_kwDODunzps5Ww2gV
6,103
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6103). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006528 / 0.011353 (-0.004825) | 0.003909 / 0.011008 (-0.007099) | 0.083954 / 0.038508 (0.045446) | 0.070513 / 0.023109 (0.047404) | 0.344362 / 0.275898 (0.068464) | 0.370278 / 0.323480 (0.046798) | 0.005395 / 0.007986 (-0.002591) | 0.003323 / 0.004328 (-0.001005) | 0.064538 / 0.004250 (0.060288) | 0.055616 / 0.037052 (0.018564) | 0.353590 / 0.258489 (0.095101) | 0.382159 / 0.293841 (0.088318) | 0.031133 / 0.128546 (-0.097414) | 0.008429 / 0.075646 (-0.067217) | 0.288665 / 0.419271 (-0.130606) | 0.052626 / 0.043533 (0.009093) | 0.347676 / 0.255139 (0.092537) | 0.363726 / 0.283200 (0.080526) | 0.021956 / 0.141683 (-0.119727) | 1.506091 / 1.452155 (0.053936) | 1.563940 / 1.492716 (0.071223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207658 / 0.018006 (0.189652) | 0.473411 / 0.000490 (0.472922) | 0.005437 / 0.000200 (0.005237) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027769 / 0.037411 (-0.009643) | 0.082566 / 0.014526 (0.068040) | 0.092700 / 0.176557 (-0.083857) | 0.152589 / 0.737135 (-0.584546) | 0.093772 / 0.296338 (-0.202566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.401072 / 0.215209 (0.185863) | 3.997922 / 2.077655 (1.920267) | 2.028223 / 1.504120 (0.524103) | 1.845229 / 1.541195 (0.304035) | 1.883980 / 1.468490 (0.415489) | 0.485112 / 4.584777 (-4.099665) | 3.657048 / 3.745712 (-0.088664) | 4.998475 / 5.269862 (-0.271386) | 3.007417 / 4.565676 (-1.558259) | 0.057003 / 0.424275 (-0.367272) | 0.007270 / 0.007607 (-0.000338) | 0.482220 / 0.226044 (0.256176) | 4.817560 / 2.268929 (2.548631) | 2.484285 / 55.444624 (-52.960340) | 2.163327 / 6.876477 (-4.713149) | 2.326412 / 2.142072 (0.184339) | 0.600349 / 4.805227 (-4.204878) | 0.134245 / 6.500664 (-6.366419) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281440 / 1.841788 (-0.560347) | 19.165591 / 8.074308 (11.091283) | 14.007728 / 10.191392 (3.816336) | 0.168367 / 0.680424 (-0.512057) | 0.018149 / 0.534201 (-0.516052) | 0.391688 / 0.579283 (-0.187595) | 0.414528 / 0.434364 (-0.019836) | 0.456964 / 0.540337 (-0.083373) | 0.613807 / 1.386936 (-0.773129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004851) | 0.003956 / 0.011008 (-0.007052) | 0.064297 / 0.038508 (0.025789) | 0.073430 / 0.023109 (0.050321) | 0.364113 / 0.275898 (0.088215) | 0.389021 / 0.323480 (0.065541) | 0.005375 / 0.007986 (-0.002611) | 0.003363 / 0.004328 (-0.000966) | 0.064404 / 0.004250 (0.060153) | 0.056664 / 0.037052 (0.019612) | 0.365504 / 0.258489 (0.107015) | 0.398477 / 0.293841 (0.104636) | 0.031739 / 0.128546 (-0.096807) | 0.008663 / 0.075646 (-0.066984) | 0.070757 / 0.419271 (-0.348515) | 0.051014 / 0.043533 (0.007481) | 0.368287 / 0.255139 (0.113148) | 0.382941 / 0.283200 (0.099742) | 0.024642 / 0.141683 (-0.117041) | 1.516721 / 1.452155 (0.064567) | 1.557625 / 1.492716 (0.064908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208248 / 0.018006 (0.190242) | 0.443560 / 0.000490 (0.443070) | 0.004004 / 0.000200 
(0.003805) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006295) | 0.086814 / 0.014526 (0.072288) | 0.099111 / 0.176557 (-0.077445) | 0.155032 / 0.737135 (-0.582104) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413080 / 0.215209 (0.197871) | 4.115546 / 2.077655 (2.037891) | 2.162073 / 1.504120 (0.657953) | 2.008107 / 1.541195 (0.466912) | 2.052317 / 1.468490 (0.583827) | 0.485158 / 4.584777 (-4.099619) | 3.617478 / 3.745712 (-0.128234) | 5.030564 / 5.269862 (-0.239298) | 2.787812 / 4.565676 (-1.777865) | 0.057466 / 0.424275 (-0.366809) | 0.007656 / 0.007607 (0.000049) | 0.490037 / 0.226044 (0.263993) | 4.887896 / 2.268929 (2.618968) | 2.639644 / 55.444624 (-52.804981) | 2.258051 / 6.876477 (-4.618426) | 2.417573 / 2.142072 (0.275500) | 0.604473 / 4.805227 (-4.200754) | 0.134770 / 6.500664 (-6.365894) | 0.061709 / 0.075469 (-0.013760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342500 / 1.841788 (-0.499288) | 19.354990 / 8.074308 (11.280682) | 14.161975 / 10.191392 (3.970583) | 0.157084 / 0.680424 (-0.523339) | 0.018227 / 0.534201 (-0.515974) | 0.391819 / 0.579283 (-0.187464) | 0.399157 / 0.434364 (-0.035207) | 0.460582 / 0.540337 (-0.079756) | 0.612183 / 1.386936 (-0.774753) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b20f6a82410dd47e89585bb932616a22e0eaf2e6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009318 / 0.011353 (-0.002035) | 0.005515 / 0.011008 (-0.005493) | 0.108532 / 0.038508 (0.070024) | 0.103583 / 0.023109 (0.080473) | 0.419249 / 0.275898 (0.143351) | 0.453573 / 0.323480 (0.130093) | 0.006601 / 0.007986 (-0.001384) | 0.005297 / 0.004328 (0.000968) | 0.082737 / 0.004250 (0.078487) | 0.064708 / 0.037052 (0.027656) | 0.425679 / 0.258489 (0.167190) | 0.462028 / 0.293841 (0.168187) | 0.048104 / 0.128546 (-0.080442) | 0.014069 / 0.075646 (-0.061577) | 0.377780 / 0.419271 (-0.041491) | 0.067510 / 0.043533 (0.023977) | 0.422421 / 0.255139 (0.167282) | 0.447127 / 0.283200 (0.163927) | 0.037745 / 0.141683 (-0.103938) | 1.855306 / 1.452155 (0.403152) | 1.943876 / 1.492716 (0.451160) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280161 / 0.018006 (0.262155) | 0.598001 / 0.000490 (0.597512) | 0.001130 / 0.000200 (0.000930) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036064 / 0.037411 (-0.001347) | 0.113256 / 0.014526 (0.098730) | 0.120598 / 0.176557 (-0.055959) | 0.191386 / 0.737135 (-0.545750) | 0.118125 / 0.296338 (-0.178214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616887 / 0.215209 (0.401678) | 6.085498 / 2.077655 (4.007844) | 2.639428 / 1.504120 (1.135308) | 2.215444 / 1.541195 (0.674249) | 2.311990 / 1.468490 (0.843500) | 0.820539 / 4.584777 (-3.764238) | 5.306010 / 3.745712 (1.560298) | 4.731726 / 5.269862 (-0.538136) | 3.053933 / 4.565676 (-1.511744) | 0.098862 / 0.424275 (-0.325413) | 0.009456 / 0.007607 (0.001849) | 0.725455 / 0.226044 (0.499411) | 7.367385 / 2.268929 (5.098457) | 3.464921 / 55.444624 (-51.979703) | 2.833868 / 6.876477 (-4.042608) | 3.033008 / 2.142072 (0.890935) | 1.036751 / 4.805227 (-3.768476) | 0.243646 / 6.500664 (-6.257018) | 0.081079 / 0.075469 (0.005610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584695 / 1.841788 (-0.257093) | 25.150355 / 8.074308 (17.076047) | 21.826622 / 10.191392 (11.635230) | 0.212502 / 0.680424 (-0.467921) | 0.029865 / 0.534201 (-0.504335) | 0.496814 / 0.579283 (-0.082470) | 
0.611959 / 0.434364 (0.177595) | 0.550434 / 0.540337 (0.010097) | 0.800897 / 1.386936 (-0.586039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005236 / 0.011008 (-0.005772) | 0.082402 / 0.038508 (0.043894) | 0.090578 / 0.023109 (0.067468) | 0.487302 / 0.275898 (0.211404) | 0.523639 / 0.323480 (0.200159) | 0.006684 / 0.007986 (-0.001302) | 0.004306 / 0.004328 (-0.000023) | 0.083273 / 0.004250 (0.079023) | 0.068585 / 0.037052 (0.031532) | 0.487751 / 0.258489 (0.229262) | 0.538972 / 0.293841 (0.245131) | 0.048915 / 0.128546 (-0.079632) | 0.014312 / 0.075646 (-0.061335) | 0.091863 / 0.419271 (-0.327409) | 0.066114 / 0.043533 (0.022581) | 0.483552 / 0.255139 (0.228413) | 0.522250 / 0.283200 (0.239050) | 0.038533 / 0.141683 (-0.103150) | 1.803834 / 1.452155 (0.351680) | 1.891927 / 1.492716 (0.399211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336662 / 0.018006 (0.318656) | 0.611408 / 0.000490 (0.610918) | 0.014310 / 0.000200 (0.014110) | 0.000152 / 0.000054 (0.000097) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034755 / 0.037411 (-0.002656) | 0.101008 / 0.014526 (0.086483) | 0.124530 / 0.176557 (-0.052026) | 0.179844 / 0.737135 (-0.557292) | 0.125027 / 0.296338 (-0.171312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618341 / 0.215209 (0.403132) | 6.146848 / 2.077655 (4.069193) | 2.893305 / 1.504120 
(1.389185) | 2.608722 / 1.541195 (1.067528) | 2.671276 / 1.468490 (1.202786) | 0.860096 / 4.584777 (-3.724681) | 5.440671 / 3.745712 (1.694959) | 4.776958 / 5.269862 (-0.492903) | 3.098300 / 4.565676 (-1.467376) | 0.098664 / 0.424275 (-0.325611) | 0.009270 / 0.007607 (0.001663) | 0.712780 / 0.226044 (0.486735) | 7.199721 / 2.268929 (4.930793) | 3.620723 / 55.444624 (-51.823902) | 3.052218 / 6.876477 (-3.824259) | 3.321093 / 2.142072 (1.179021) | 1.070992 / 4.805227 (-3.734235) | 0.224091 / 6.500664 (-6.276573) | 0.083395 / 0.075469 (0.007926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716867 / 1.841788 (-0.124921) | 25.534617 / 8.074308 (17.460309) | 25.221014 / 10.191392 (15.029621) | 0.248098 / 0.680424 (-0.432326) | 0.029659 / 0.534201 (-0.504542) | 0.492929 / 0.579283 (-0.086355) | 0.618253 / 0.434364 (0.183889) | 0.577108 / 0.540337 (0.036771) | 0.803188 / 1.386936 (-0.583748) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#584db360eed9155e173b199ba5fc037562b7b862 \"CML watermark\")\n" ]
2023-07-31T06:44:05
2023-07-31T06:55:58
2023-07-31T06:45:41
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6103", "html_url": "https://github.com/huggingface/datasets/pull/6103", "diff_url": "https://github.com/huggingface/datasets/pull/6103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6103.patch", "merged_at": "2023-07-31T06:45:41" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6103/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6102/comments
https://api.github.com/repos/huggingface/datasets/issues/6102/events
https://github.com/huggingface/datasets/pull/6102
1,828,494,896
PR_kwDODunzps5WwyGy
6,102
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006517 / 0.011353 (-0.004836) | 0.004217 / 0.011008 (-0.006792) | 0.083162 / 0.038508 (0.044654) | 0.074476 / 0.023109 (0.051367) | 0.321193 / 0.275898 (0.045295) | 0.358348 / 0.323480 (0.034868) | 0.005531 / 0.007986 (-0.002455) | 0.003621 / 0.004328 (-0.000707) | 0.063819 / 0.004250 (0.059568) | 0.056524 / 0.037052 (0.019471) | 0.322145 / 0.258489 (0.063656) | 0.371415 / 0.293841 (0.077574) | 0.030612 / 0.128546 (-0.097934) | 0.008907 / 0.075646 (-0.066739) | 0.289451 / 0.419271 (-0.129821) | 0.051959 / 0.043533 (0.008426) | 0.317729 / 0.255139 (0.062590) | 0.339750 / 0.283200 (0.056550) | 0.022430 / 0.141683 (-0.119253) | 1.487661 / 1.452155 (0.035506) | 1.554916 / 1.492716 (0.062199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296673 / 0.018006 (0.278667) | 0.599183 / 0.000490 (0.598694) | 0.002524 / 0.000200 (0.002324) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027898 / 0.037411 (-0.009514) | 0.080870 / 0.014526 (0.066344) | 0.094894 / 0.176557 (-0.081662) | 0.152350 / 0.737135 (-0.584785) | 0.095765 / 0.296338 (-0.200573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415442 / 0.215209 (0.200233) | 4.161155 / 2.077655 (2.083500) | 2.117061 / 1.504120 (0.612941) | 1.937846 / 1.541195 (0.396651) | 1.979635 / 1.468490 
(0.511145) | 0.488381 / 4.584777 (-4.096396) | 3.509836 / 3.745712 (-0.235876) | 3.833074 / 5.269862 (-1.436788) | 2.307536 / 4.565676 (-2.258141) | 0.057059 / 0.424275 (-0.367216) | 0.007366 / 0.007607 (-0.000241) | 0.487752 / 0.226044 (0.261708) | 4.869406 / 2.268929 (2.600478) | 2.594775 / 55.444624 (-52.849849) | 2.191712 / 6.876477 (-4.684765) | 2.413220 / 2.142072 (0.271147) | 0.584513 / 4.805227 (-4.220714) | 0.132162 / 6.500664 (-6.368502) | 0.061059 / 0.075469 (-0.014410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245178 / 1.841788 (-0.596610) | 20.624563 / 8.074308 (12.550255) | 14.675545 / 10.191392 (4.484153) | 0.165838 / 0.680424 (-0.514586) | 0.018700 / 0.534201 (-0.515501) | 0.392475 / 0.579283 (-0.186808) | 0.399884 / 0.434364 (-0.034480) | 0.457478 / 0.540337 (-0.082859) | 0.624553 / 1.386936 (-0.762383) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006716 / 0.011353 (-0.004637) | 0.004308 / 0.011008 (-0.006700) | 0.064495 / 0.038508 (0.025987) | 0.083194 / 0.023109 (0.060085) | 0.371994 / 0.275898 (0.096096) | 0.433045 / 0.323480 (0.109566) | 0.005535 / 0.007986 (-0.002450) | 0.003469 / 0.004328 (-0.000859) | 0.064342 / 0.004250 (0.060092) | 0.059362 / 0.037052 (0.022309) | 0.393819 / 0.258489 (0.135330) | 0.442591 / 0.293841 (0.148750) | 0.031594 / 0.128546 (-0.096952) | 0.008943 / 0.075646 (-0.066703) | 0.070689 / 0.419271 (-0.348582) | 0.049219 / 0.043533 (0.005686) | 0.361568 / 0.255139 (0.106429) | 0.417085 / 0.283200 (0.133886) | 0.025112 / 0.141683 (-0.116571) | 1.497204 / 1.452155 (0.045049) | 1.552781 / 1.492716 (0.060064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325254 / 0.018006 (0.307248) | 0.528399 / 0.000490 (0.527909) | 0.007429 / 0.000200 (0.007229) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029908 / 0.037411 (-0.007504) | 0.087114 / 0.014526 (0.072588) | 0.103366 / 0.176557 (-0.073191) | 0.155145 / 0.737135 (-0.581990) | 0.103458 / 0.296338 (-0.192880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.093327 / 2.077655 (2.015673) | 2.154115 / 1.504120 (0.649995) | 1.953492 / 1.541195 (0.412297) | 2.021532 / 1.468490 (0.553042) | 0.478928 / 4.584777 (-4.105849) | 3.515287 / 3.745712 (-0.230426) | 4.976239 / 5.269862 (-0.293623) | 2.832803 / 4.565676 (-1.732873) | 0.057239 / 0.424275 (-0.367036) | 0.007718 / 0.007607 (0.000111) | 0.484102 / 0.226044 (0.258057) | 4.833020 / 2.268929 (2.564092) | 2.564550 / 55.444624 (-52.880074) | 2.268969 / 6.876477 (-4.607508) | 2.513308 / 2.142072 (0.371235) | 0.582822 / 4.805227 (-4.222406) | 0.133989 / 6.500664 (-6.366675) | 0.062078 / 0.075469 (-0.013391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.393766 / 1.841788 (-0.448021) | 20.224546 / 8.074308 (12.150238) | 14.359438 / 10.191392 (4.168046) | 0.166358 / 0.680424 (-0.514066) | 0.018840 / 0.534201 (-0.515361) | 0.393206 / 0.579283 (-0.186077) | 0.404220 / 0.434364 (-0.030144) | 0.462346 / 0.540337 (-0.077992) | 0.603078 / 1.386936 (-0.783858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006835 / 0.011353 (-0.004518) | 0.004530 / 0.011008 (-0.006478) | 0.087506 / 0.038508 (0.048997) | 0.088289 / 0.023109 (0.065180) | 0.351575 / 0.275898 (0.075677) | 0.391873 / 0.323480 (0.068393) | 0.005627 / 0.007986 (-0.002359) | 0.003735 / 0.004328 (-0.000594) | 0.065747 / 0.004250 (0.061497) | 0.058779 / 0.037052 (0.021726) | 0.358076 / 0.258489 (0.099587) | 0.408466 / 0.293841 (0.114626) | 0.031369 / 0.128546 (-0.097178) | 0.008807 / 0.075646 (-0.066839) | 0.293253 / 0.419271 (-0.126019) | 0.052950 / 0.043533 (0.009417) | 0.350411 / 0.255139 (0.095272) | 0.384827 / 0.283200 (0.101627) | 0.026219 / 0.141683 (-0.115464) | 1.464290 / 1.452155 (0.012136) | 1.549688 / 1.492716 (0.056972) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270354 / 0.018006 (0.252348) | 0.593436 / 0.000490 (0.592946) | 0.003872 / 0.000200 (0.003673) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031625 / 0.037411 (-0.005787) | 0.092599 / 0.014526 (0.078073) | 0.104619 / 0.176557 (-0.071938) | 0.163183 / 0.737135 (-0.573952) | 0.103245 / 0.296338 (-0.193094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390213 / 0.215209 (0.175004) | 3.894519 / 2.077655 (1.816864) | 1.905739 / 1.504120 (0.401619) | 1.728873 / 1.541195 (0.187678) | 1.838692 / 1.468490 (0.370202) | 0.484730 / 4.584777 (-4.100047) | 3.706749 / 3.745712 (-0.038963) | 5.572311 / 5.269862 (0.302449) | 3.389949 / 4.565676 (-1.175727) | 0.057315 / 0.424275 (-0.366960) | 0.007475 / 0.007607 (-0.000132) | 0.464690 / 0.226044 (0.238645) | 4.622242 / 2.268929 (2.353314) | 2.380957 / 55.444624 (-53.063667) | 2.038225 / 6.876477 (-4.838251) | 2.358881 / 2.142072 (0.216809) | 0.606358 / 4.805227 (-4.198869) | 0.133584 / 6.500664 (-6.367080) | 0.061894 / 0.075469 (-0.013575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259575 / 1.841788 (-0.582213) | 20.915216 / 8.074308 (12.840908) | 14.971952 / 10.191392 (4.780560) | 0.160206 / 0.680424 (-0.520218) | 0.018675 / 0.534201 (-0.515526) | 0.396821 / 0.579283 (-0.182462) | 0.430982 / 0.434364 (-0.003382) | 0.452895 / 0.540337 (-0.087443) | 0.647869 / 
1.386936 (-0.739067) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007194 / 0.011353 (-0.004158) | 0.004340 / 0.011008 (-0.006669) | 0.065125 / 0.038508 (0.026617) | 0.096243 / 0.023109 (0.073134) | 0.374361 / 0.275898 (0.098463) | 0.411863 / 0.323480 (0.088383) | 0.005813 / 0.007986 (-0.002172) | 0.003615 / 0.004328 (-0.000713) | 0.064953 / 0.004250 (0.060703) | 0.063171 / 0.037052 (0.026119) | 0.376238 / 0.258489 (0.117749) | 0.415826 / 0.293841 (0.121985) | 0.031926 / 0.128546 (-0.096620) | 0.008821 / 0.075646 (-0.066825) | 0.072150 / 0.419271 (-0.347122) | 0.049484 / 0.043533 (0.005951) | 0.369691 / 0.255139 (0.114552) | 0.390669 / 0.283200 (0.107470) | 0.025732 / 0.141683 (-0.115950) | 1.493833 / 1.452155 (0.041679) | 1.601786 / 1.492716 (0.109070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284279 / 0.018006 (0.266272) | 0.585909 / 0.000490 (0.585419) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033642 / 0.037411 (-0.003769) | 0.095328 / 0.014526 (0.080802) | 0.105810 / 0.176557 (-0.070746) | 0.159779 / 0.737135 (-0.577357) | 0.108938 / 0.296338 (-0.187400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408112 / 0.215209 (0.192902) | 4.067035 / 2.077655 (1.989380) | 2.114504 / 1.504120 (0.610384) | 1.944027 / 1.541195 (0.402832) | 2.066117 / 1.468490 (0.597627) | 
0.486441 / 4.584777 (-4.098336) | 3.622659 / 3.745712 (-0.123053) | 3.399310 / 5.269862 (-1.870552) | 2.183151 / 4.565676 (-2.382525) | 0.057490 / 0.424275 (-0.366785) | 0.007955 / 0.007607 (0.000347) | 0.490221 / 0.226044 (0.264177) | 4.887301 / 2.268929 (2.618373) | 2.679806 / 55.444624 (-52.764819) | 2.258992 / 6.876477 (-4.617484) | 2.592493 / 2.142072 (0.450420) | 0.606515 / 4.805227 (-4.198712) | 0.135645 / 6.500664 (-6.365019) | 0.063956 / 0.075469 (-0.011513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331304 / 1.841788 (-0.510483) | 21.458611 / 8.074308 (13.384303) | 14.898964 / 10.191392 (4.707572) | 0.172110 / 0.680424 (-0.508314) | 0.018791 / 0.534201 (-0.515409) | 0.395944 / 0.579283 (-0.183339) | 0.424526 / 0.434364 (-0.009838) | 0.462517 / 0.540337 (-0.077821) | 0.610139 / 1.386936 (-0.776797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005957 / 0.011353 (-0.005396) | 0.003581 / 0.011008 (-0.007427) | 0.079624 / 0.038508 (0.041116) | 0.058004 / 0.023109 (0.034895) | 0.309345 / 0.275898 (0.033447) | 0.346653 / 0.323480 (0.023173) | 0.005420 / 0.007986 (-0.002566) | 0.002906 / 0.004328 (-0.001423) | 0.061970 / 0.004250 (0.057720) | 0.047627 / 0.037052 (0.010575) | 0.314096 / 0.258489 (0.055607) | 0.361368 / 0.293841 (0.067527) | 0.027211 / 0.128546 (-0.101335) | 0.007853 / 0.075646 (-0.067793) | 0.260202 / 0.419271 (-0.159070) | 0.045308 / 0.043533 (0.001775) | 0.312150 / 0.255139 (0.057011) | 0.341085 / 0.283200 (0.057886) | 0.021302 / 0.141683 (-0.120381) | 1.430315 / 1.452155 (-0.021840) | 1.608989 / 1.492716 (0.116273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185289 / 0.018006 (0.167283) | 0.423318 / 0.000490 (0.422828) | 0.005741 / 0.000200 (0.005541) | 
0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023777 / 0.037411 (-0.013634) | 0.071937 / 0.014526 (0.057412) | 0.079406 / 0.176557 (-0.097151) | 0.143815 / 0.737135 (-0.593320) | 0.081648 / 0.296338 (-0.214690) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431514 / 0.215209 (0.216305) | 4.314471 / 2.077655 (2.236817) | 2.305167 / 1.504120 (0.801047) | 2.137894 / 1.541195 (0.596699) | 2.161034 / 1.468490 (0.692544) | 0.511701 / 4.584777 (-4.073076) | 3.098213 / 3.745712 (-0.647499) | 4.086837 / 5.269862 (-1.183024) | 2.517184 / 4.565676 (-2.048492) | 0.058272 / 0.424275 (-0.366003) | 0.006415 / 0.007607 (-0.001192) | 0.504792 / 0.226044 (0.278747) | 5.046758 / 2.268929 (2.777829) | 2.752049 / 55.444624 (-52.692576) | 2.407707 / 6.876477 (-4.468770) | 2.532162 / 2.142072 (0.390090) | 0.597562 / 4.805227 (-4.207666) | 0.125935 / 6.500664 (-6.374729) | 0.060837 / 0.075469 (-0.014632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257048 / 1.841788 (-0.584740) | 17.877849 / 8.074308 (9.803541) | 13.904805 / 10.191392 (3.713413) | 0.131647 / 0.680424 (-0.548776) | 0.016975 / 0.534201 (-0.517226) | 0.329651 / 0.579283 (-0.249633) | 0.354358 / 0.434364 (-0.080006) | 0.377545 / 0.540337 (-0.162792) | 0.545593 / 1.386936 (-0.841343) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005839 / 0.011353 (-0.005514) | 0.003580 / 0.011008 (-0.007428) | 0.062204 / 0.038508 (0.023696) | 0.057943 / 0.023109 (0.034834) | 0.400165 / 0.275898 (0.124267) | 0.427911 / 0.323480 (0.104431) | 0.004412 / 0.007986 (-0.003574) | 0.002794 / 0.004328 (-0.001534) | 0.062933 / 0.004250 (0.058683) | 0.046243 / 0.037052 (0.009191) | 0.413640 / 0.258489 (0.155151) | 0.418592 / 0.293841 (0.124751) | 0.027020 / 0.128546 (-0.101526) | 0.007927 / 0.075646 (-0.067720) | 0.067581 / 0.419271 (-0.351691) | 0.041927 / 0.043533 (-0.001606) | 0.381863 / 0.255139 (0.126724) | 0.415711 / 0.283200 (0.132511) | 0.019827 / 0.141683 (-0.121856) | 1.464049 / 1.452155 (0.011894) | 1.528387 / 1.492716 (0.035671) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224999 / 0.018006 (0.206993) | 0.419167 / 0.000490 (0.418678) | 0.000363 / 0.000200 (0.000163) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024827 / 0.037411 (-0.012585) | 0.077134 / 0.014526 (0.062608) | 0.085142 / 0.176557 (-0.091414) | 0.137400 / 0.737135 (-0.599735) | 0.086434 / 0.296338 (-0.209905) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452716 / 0.215209 (0.237507) | 4.530610 / 2.077655 (2.452955) | 2.467309 / 1.504120 (0.963189) | 2.300441 / 1.541195 (0.759246) | 2.323475 / 1.468490 (0.854985) | 0.501847 / 4.584777 (-4.082930) | 3.079432 / 3.745712 (-0.666280) | 2.793107 / 5.269862 (-2.476755) | 1.835010 / 4.565676 (-2.730666) | 0.057698 / 0.424275 (-0.366577) | 0.006756 / 0.007607 (-0.000851) | 0.529062 / 0.226044 (0.303017) | 5.287822 / 2.268929 (3.018894) | 2.908411 / 55.444624 (-52.536214) | 2.571627 / 6.876477 (-4.304850) | 2.691188 / 2.142072 (0.549116) | 0.592289 / 4.805227 (-4.212938) | 0.126091 / 6.500664 (-6.374573) | 0.062312 / 0.075469 (-0.013157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328854 / 1.841788 (-0.512933) | 18.185628 / 8.074308 (10.111320) | 13.858781 / 10.191392 (3.667389) | 0.142421 / 0.680424 (-0.538003) | 0.016535 / 0.534201 (-0.517666) | 0.330839 / 0.579283 (-0.248444) | 0.346559 / 0.434364 (-0.087805) | 0.389153 / 0.540337 (-0.151185) | 0.516897 / 1.386936 (-0.870039) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n" ]
2023-07-31T06:27:47
2023-07-31T06:48:09
2023-07-31T06:32:58
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6102", "html_url": "https://github.com/huggingface/datasets/pull/6102", "diff_url": "https://github.com/huggingface/datasets/pull/6102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6102.patch", "merged_at": "2023-07-31T06:32:58" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6102/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6101/comments
https://api.github.com/repos/huggingface/datasets/issues/6101/events
https://github.com/huggingface/datasets/pull/6101
1,828,469,648
PR_kwDODunzps5WwspW
6,101
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006543 / 0.011353 (-0.004810) | 0.003894 / 0.011008 (-0.007115) | 0.084742 / 0.038508 (0.046234) | 0.072942 / 0.023109 (0.049833) | 0.310722 / 0.275898 (0.034824) | 0.346806 / 0.323480 (0.023326) | 0.005373 / 0.007986 (-0.002613) | 0.003270 / 0.004328 (-0.001059) | 0.064379 / 0.004250 (0.060128) | 0.054876 / 0.037052 (0.017824) | 0.316794 / 0.258489 (0.058305) | 0.350353 / 0.293841 (0.056512) | 0.030683 / 0.128546 (-0.097863) | 0.008275 / 0.075646 (-0.067371) | 0.288747 / 0.419271 (-0.130525) | 0.051892 / 0.043533 (0.008359) | 0.315060 / 0.255139 (0.059921) | 0.331664 / 0.283200 (0.048464) | 0.023334 / 0.141683 (-0.118349) | 1.499734 / 1.452155 (0.047579) | 1.542006 / 1.492716 (0.049290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210488 / 0.018006 (0.192482) | 0.462187 / 0.000490 (0.461697) | 0.001280 / 0.000200 (0.001080) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027812 / 0.037411 (-0.009599) | 0.082492 / 0.014526 (0.067966) | 0.096504 / 0.176557 (-0.080053) | 0.158164 / 0.737135 (-0.578972) | 0.096678 / 0.296338 (-0.199661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403317 / 0.215209 (0.188108) | 4.008367 / 2.077655 (1.930713) | 2.033067 / 1.504120 (0.528947) | 1.869484 / 1.541195 (0.328290) | 1.947450 / 1.468490 
(0.478960) | 0.494048 / 4.584777 (-4.090729) | 3.631673 / 3.745712 (-0.114039) | 5.322167 / 5.269862 (0.052306) | 3.125570 / 4.565676 (-1.440107) | 0.057341 / 0.424275 (-0.366934) | 0.007318 / 0.007607 (-0.000289) | 0.483990 / 0.226044 (0.257945) | 4.830573 / 2.268929 (2.561645) | 2.543267 / 55.444624 (-52.901358) | 2.217890 / 6.876477 (-4.658587) | 2.435111 / 2.142072 (0.293038) | 0.597920 / 4.805227 (-4.207307) | 0.132690 / 6.500664 (-6.367974) | 0.060160 / 0.075469 (-0.015309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247656 / 1.841788 (-0.594131) | 19.436984 / 8.074308 (11.362675) | 14.504249 / 10.191392 (4.312857) | 0.167444 / 0.680424 (-0.512980) | 0.018214 / 0.534201 (-0.515987) | 0.394790 / 0.579283 (-0.184493) | 0.413770 / 0.434364 (-0.020594) | 0.474290 / 0.540337 (-0.066048) | 0.646782 / 1.386936 (-0.740154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006575 / 0.011353 (-0.004778) | 0.003924 / 0.011008 (-0.007084) | 0.064402 / 0.038508 (0.025893) | 0.072569 / 0.023109 (0.049460) | 0.361981 / 0.275898 (0.086083) | 0.398660 / 0.323480 (0.075180) | 0.005380 / 0.007986 (-0.002605) | 0.003355 / 0.004328 (-0.000974) | 0.065173 / 0.004250 (0.060923) | 0.057120 / 0.037052 (0.020067) | 0.366347 / 0.258489 (0.107858) | 0.402723 / 0.293841 (0.108882) | 0.031258 / 0.128546 (-0.097288) | 0.008499 / 0.075646 (-0.067147) | 0.070558 / 0.419271 (-0.348714) | 0.050089 / 0.043533 (0.006556) | 0.361280 / 0.255139 (0.106141) | 0.384497 / 0.283200 (0.101297) | 0.024789 / 0.141683 (-0.116893) | 1.492577 / 1.452155 (0.040422) | 1.572242 / 1.492716 (0.079525) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228054 / 0.018006 (0.210048) | 0.448317 / 0.000490 (0.447828) | 0.000368 / 0.000200 (0.000168) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.088604 / 0.014526 (0.074078) | 0.099317 / 0.176557 (-0.077239) | 0.152455 / 0.737135 (-0.584680) | 0.100444 / 0.296338 (-0.195894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411876 / 0.215209 (0.196667) | 4.108187 / 2.077655 (2.030532) | 2.096371 / 1.504120 (0.592251) | 1.923532 / 1.541195 (0.382337) | 1.998345 / 1.468490 (0.529855) | 0.483853 / 4.584777 (-4.100924) | 3.622433 / 3.745712 (-0.123279) | 3.254430 / 5.269862 (-2.015431) | 2.044342 / 4.565676 (-2.521334) | 0.056756 / 0.424275 (-0.367519) | 0.007720 / 0.007607 (0.000113) | 0.487656 / 0.226044 (0.261612) | 4.882024 / 2.268929 (2.613096) | 2.585008 / 55.444624 (-52.859616) | 2.229251 / 6.876477 (-4.647225) | 2.408318 / 2.142072 (0.266246) | 0.617537 / 4.805227 (-4.187691) | 0.132102 / 6.500664 (-6.368562) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362077 / 1.841788 (-0.479711) | 19.750714 / 8.074308 (11.676406) | 14.545299 / 10.191392 (4.353907) | 0.168666 / 0.680424 (-0.511758) | 0.018606 / 0.534201 (-0.515595) | 0.394760 / 0.579283 (-0.184523) | 0.410030 / 0.434364 (-0.024334) | 0.464742 / 0.540337 (-0.075596) | 0.610881 / 1.386936 (-0.776055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005836 / 0.011353 (-0.005517) | 0.003493 / 0.011008 (-0.007515) | 0.079877 / 0.038508 (0.041369) | 0.057299 / 0.023109 (0.034190) | 0.332945 / 0.275898 (0.057047) | 0.386615 / 0.323480 (0.063135) | 0.004437 / 0.007986 (-0.003548) | 0.002758 / 0.004328 (-0.001571) | 0.062668 / 0.004250 (0.058418) | 0.046135 / 0.037052 (0.009083) | 0.346160 / 0.258489 (0.087671) | 0.416720 / 0.293841 (0.122879) | 0.026678 / 0.128546 (-0.101868) | 0.007893 / 0.075646 (-0.067753) | 0.260427 / 0.419271 (-0.158845) | 0.044240 / 0.043533 (0.000707) | 0.328101 / 0.255139 (0.072963) | 0.380072 / 0.283200 (0.096872) | 0.020813 / 0.141683 (-0.120870) | 1.400202 / 1.452155 (-0.051952) | 1.475627 / 1.492716 (-0.017089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174479 / 0.018006 (0.156473) | 0.413810 / 0.000490 (0.413320) | 0.003059 / 0.000200 (0.002860) | 0.000212 / 0.000054 (0.000157) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023422 / 0.037411 (-0.013990) | 0.071519 / 0.014526 (0.056993) | 0.080555 / 0.176557 (-0.096001) | 0.143825 / 0.737135 (-0.593311) | 0.081182 / 0.296338 (-0.215157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406858 / 0.215209 (0.191648) | 4.161475 / 2.077655 (2.083820) | 1.991800 / 1.504120 (0.487680) | 1.811224 / 1.541195 (0.270030) | 1.828809 / 1.468490 (0.360318) | 0.504882 / 4.584777 (-4.079895) | 2.985010 / 3.745712 (-0.760703) | 3.984856 / 5.269862 (-1.285006) | 2.477936 / 4.565676 (-2.087740) | 0.057553 / 0.424275 (-0.366722) | 0.006436 / 0.007607 (-0.001172) | 0.488061 / 0.226044 (0.262016) | 4.805501 / 2.268929 (2.536573) | 2.446508 / 55.444624 (-52.998116) | 2.051406 / 6.876477 (-4.825071) | 2.177696 / 2.142072 (0.035623) | 0.588021 / 4.805227 (-4.217207) | 0.125118 / 6.500664 (-6.375546) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197130 / 1.841788 (-0.644658) | 17.867450 / 8.074308 (9.793142) | 13.536895 / 10.191392 (3.345503) | 0.137603 / 0.680424 (-0.542821) | 0.016706 / 0.534201 (-0.517495) | 0.327642 / 0.579283 (-0.251641) | 0.347201 / 0.434364 (-0.087163) | 0.379570 / 0.540337 (-0.160768) | 0.517825 / 1.386936 (-0.869111) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005769 / 0.011353 (-0.005584) | 0.003414 / 0.011008 (-0.007594) | 0.063198 / 0.038508 (0.024690) | 0.056020 / 0.023109 (0.032911) | 0.393333 / 0.275898 (0.117435) | 0.421166 / 0.323480 (0.097686) | 0.004360 / 0.007986 (-0.003626) | 0.002860 / 0.004328 (-0.001469) | 0.062712 / 0.004250 (0.058461) | 0.045363 / 0.037052 (0.008311) | 0.413156 / 0.258489 (0.154667) | 0.422897 / 0.293841 (0.129056) | 0.027092 / 0.128546 (-0.101455) | 0.007960 / 0.075646 (-0.067687) | 0.068531 / 0.419271 (-0.350740) | 0.041402 / 0.043533 (-0.002131) | 0.377008 / 0.255139 (0.121869) | 0.409142 / 0.283200 (0.125942) | 0.019707 / 0.141683 (-0.121976) | 1.440556 / 1.452155 (-0.011599) | 1.487403 / 1.492716 (-0.005314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224355 / 0.018006 (0.206349) | 0.397855 / 0.000490 (0.397365) | 0.000363 / 0.000200 (0.000163) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025107 / 0.037411 (-0.012305) | 0.076404 / 0.014526 (0.061878) | 0.083194 / 0.176557 (-0.093362) | 0.135347 / 0.737135 (-0.601789) | 0.084786 / 0.296338 (-0.211553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433024 / 0.215209 (0.217815) | 4.323879 / 2.077655 (2.246224) | 2.263004 / 1.504120 (0.758884) | 2.072053 / 1.541195 (0.530858) | 2.113916 / 1.468490 (0.645426) | 0.502742 / 4.584777 
(-4.082035) | 3.001716 / 3.745712 (-0.743996) | 2.777960 / 5.269862 (-2.491901) | 1.826514 / 4.565676 (-2.739162) | 0.057735 / 0.424275 (-0.366540) | 0.006671 / 0.007607 (-0.000937) | 0.503347 / 0.226044 (0.277303) | 5.037308 / 2.268929 (2.768380) | 2.679146 / 55.444624 (-52.765478) | 2.410899 / 6.876477 (-4.465577) | 2.467341 / 2.142072 (0.325268) | 0.589824 / 4.805227 (-4.215403) | 0.125529 / 6.500664 (-6.375135) | 0.061950 / 0.075469 (-0.013520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304128 / 1.841788 (-0.537659) | 17.950215 / 8.074308 (9.875907) | 13.673768 / 10.191392 (3.482376) | 0.129863 / 0.680424 (-0.550561) | 0.016720 / 0.534201 (-0.517481) | 0.329795 / 0.579283 (-0.249488) | 0.339057 / 0.434364 (-0.095307) | 0.382279 / 0.540337 (-0.158059) | 0.507337 / 1.386936 (-0.879599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef05b6f99a2b19990c6f5e4e28d95d28781570db \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006199 / 0.011353 (-0.005154) | 0.003749 / 0.011008 (-0.007259) | 0.080600 / 0.038508 (0.042092) | 0.061017 / 0.023109 (0.037908) | 0.319966 / 0.275898 (0.044067) | 0.354937 / 0.323480 (0.031457) | 0.004854 / 0.007986 (-0.003131) | 0.002996 / 0.004328 (-0.001333) | 0.063100 / 0.004250 (0.058849) | 0.050063 / 0.037052 (0.013011) | 0.316744 / 0.258489 (0.058255) | 0.358001 / 0.293841 (0.064160) | 0.027503 / 0.128546 (-0.101043) | 0.007876 / 0.075646 (-0.067771) | 0.262211 / 0.419271 (-0.157060) | 0.045717 / 0.043533 (0.002184) | 0.317188 / 0.255139 (0.062049) | 0.342404 / 0.283200 (0.059205) | 0.020194 / 0.141683 (-0.121489) | 1.498672 / 1.452155 (0.046517) | 1.545479 / 1.492716 (0.052762) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210985 / 0.018006 (0.192979) | 0.433592 / 0.000490 (0.433102) | 0.002864 / 0.000200 (0.002664) | 0.000079 / 0.000054 
(0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023463 / 0.037411 (-0.013948) | 0.073375 / 0.014526 (0.058850) | 0.083082 / 0.176557 (-0.093475) | 0.142583 / 0.737135 (-0.594552) | 0.084267 / 0.296338 (-0.212071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412890 / 0.215209 (0.197681) | 4.131421 / 2.077655 (2.053766) | 1.969164 / 1.504120 (0.465044) | 1.772379 / 1.541195 (0.231185) | 1.834154 / 1.468490 (0.365664) | 0.496290 / 4.584777 (-4.088487) | 3.056504 / 3.745712 (-0.689208) | 3.400962 / 5.269862 (-1.868900) | 2.120575 / 4.565676 (-2.445101) | 0.056932 / 0.424275 (-0.367343) | 0.006412 / 0.007607 (-0.001195) | 0.484521 / 0.226044 (0.258477) | 4.817474 / 2.268929 (2.548545) | 2.464075 / 55.444624 (-52.980549) | 2.085056 / 6.876477 (-4.791421) | 2.324516 / 2.142072 (0.182444) | 0.592013 / 4.805227 (-4.213214) | 0.132232 / 6.500664 (-6.368432) | 0.062825 / 0.075469 (-0.012645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228080 / 1.841788 (-0.613708) | 18.555385 / 8.074308 (10.481077) | 13.939565 / 10.191392 (3.748173) | 0.145979 / 0.680424 (-0.534445) | 0.016823 / 0.534201 (-0.517377) | 0.330569 / 0.579283 (-0.248714) | 0.358094 / 0.434364 (-0.076270) | 0.384642 / 0.540337 (-0.155696) | 0.518347 / 1.386936 (-0.868589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006198 / 0.011353 (-0.005155) | 0.003670 / 0.011008 (-0.007338) | 0.062502 / 0.038508 (0.023994) | 0.064339 / 0.023109 (0.041229) | 0.428414 / 0.275898 (0.152516) | 0.463899 / 0.323480 (0.140420) | 0.005524 / 0.007986 (-0.002462) | 0.002915 / 0.004328 (-0.001413) | 0.062521 / 0.004250 (0.058270) | 0.051182 / 0.037052 (0.014130) | 0.431144 / 0.258489 (0.172655) | 0.469465 / 0.293841 (0.175624) | 0.027463 / 0.128546 (-0.101083) | 0.007974 / 0.075646 (-0.067673) | 0.068029 / 0.419271 (-0.351242) | 0.042123 / 0.043533 (-0.001409) | 0.428667 / 0.255139 (0.173528) | 0.455917 / 0.283200 (0.172717) | 0.023264 / 0.141683 (-0.118419) | 1.426986 / 1.452155 (-0.025168) | 1.500049 / 1.492716 (0.007332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207264 / 0.018006 (0.189258) | 0.440738 / 0.000490 (0.440248) | 0.000802 / 0.000200 (0.000602) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026245 / 0.037411 (-0.011166) | 0.078749 / 0.014526 (0.064223) | 0.087873 / 0.176557 (-0.088684) | 0.141518 / 0.737135 (-0.595617) | 0.089811 / 0.296338 (-0.206527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418955 / 0.215209 (0.203746) | 4.177881 / 2.077655 (2.100226) | 2.162678 / 1.504120 (0.658558) | 1.998969 / 1.541195 (0.457775) | 2.066720 / 1.468490 (0.598230) | 0.496850 / 4.584777 (-4.087927) | 3.041179 / 3.745712 (-0.704534) | 4.126039 / 5.269862 (-1.143823) | 2.740507 / 4.565676 (-1.825169) | 0.058025 / 0.424275 (-0.366250) | 0.006846 / 0.007607 (-0.000761) | 0.493281 / 0.226044 (0.267237) | 4.930196 / 2.268929 (2.661268) | 2.685152 / 55.444624 (-52.759472) | 2.378247 / 6.876477 (-4.498230) | 2.469103 / 2.142072 (0.327031) | 0.585346 / 4.805227 (-4.219882) | 0.126099 / 6.500664 (-6.374565) | 0.062946 / 0.075469 (-0.012523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313892 / 1.841788 (-0.527896) | 19.177117 / 8.074308 (11.102809) | 14.081321 / 10.191392 (3.889929) | 0.133948 / 0.680424 (-0.546476) | 0.017128 / 0.534201 (-0.517073) | 0.332241 / 0.579283 (-0.247042) | 0.373218 / 0.434364 (-0.061145) | 0.395308 / 0.540337 (-0.145030) | 0.529883 / 1.386936 (-0.857053) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16f7c7677942083436062b904b74643accb9bcac \"CML watermark\")\n" ]
2023-07-31T06:05:36
2023-07-31T06:33:00
2023-07-31T06:18:17
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6101", "html_url": "https://github.com/huggingface/datasets/pull/6101", "diff_url": "https://github.com/huggingface/datasets/pull/6101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6101.patch", "merged_at": "2023-07-31T06:18:17" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6101/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6100/comments
https://api.github.com/repos/huggingface/datasets/issues/6100/events
https://github.com/huggingface/datasets/issues/6100
1,828,118,930
I_kwDODunzps5s9uGS
6,100
TypeError when loading from GCP bucket
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ", "We have fixed it. We are planning to do a patch release today." ]
2023-07-30T23:03:00
2023-08-03T10:00:48
2023-08-01T10:38:55
NONE
null
null
null
### Describe the bug Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1. ### Steps to reproduce the bug Load any file from a GCP bucket: ```python import datasets datasets.load_dataset("json", data_files=["gs://..."]) ``` The following exception is raised: ```python Traceback (most recent call last): ... packages/datasets/data_files.py", line 335, in resolve_pattern protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else "" TypeError: can only concatenate tuple (not "str") to tuple ``` With a `GoogleFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string. ### Expected behavior The file should be loaded without exception. ### Environment info - `datasets` version: 2.14.1 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
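A minimal reproduction of the type clash described in this report, assuming `gcsfs` is installed; the normalization shown at the end is one possible workaround, not necessarily the fix applied in `datasets`:

```python
import fsspec

# For Google Cloud Storage, fsspec reports a tuple of protocols, e.g. ('gs', 'gcs'),
# rather than a single string (requires the gcsfs package).
fs = fsspec.filesystem("gs")
print(fs.protocol)  # ('gs', 'gcs'), as described in the issue

# Naively concatenating reproduces the reported error:
#   fs.protocol + "://"  ->  TypeError: can only concatenate tuple (not "str") to tuple

# One possible normalization (illustrative only):
protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
protocol_prefix = protocol + "://" if protocol != "file" else ""
print(protocol_prefix)  # "gs://"
```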
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6100/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6099/comments
https://api.github.com/repos/huggingface/datasets/issues/6099/events
https://github.com/huggingface/datasets/issues/6099
1,827,893,576
I_kwDODunzps5s83FI
6,099
How do I get "amazon_us_reviews"
{ "login": "IqraBaluch", "id": 57810189, "node_id": "MDQ6VXNlcjU3ODEwMTg5", "avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IqraBaluch", "html_url": "https://github.com/IqraBaluch", "followers_url": "https://api.github.com/users/IqraBaluch/followers", "following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}", "gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}", "starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions", "organizations_url": "https://api.github.com/users/IqraBaluch/orgs", "repos_url": "https://api.github.com/users/IqraBaluch/repos", "events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}", "received_events_url": "https://api.github.com/users/IqraBaluch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\r\ntry:\r\n example = self.info.features.encode_example(record) if self.info.features is not None else record\r\nexcept Exception as e:\r\n print(record)\r\n```\r\n\r\n⬇️\r\n\r\n```\r\n{'<?xml version=\"1.0\" encoding=\"UTF-8\"?>': '<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>N2HFJ82ZV8SZW9BV</RequestId><HostId>Zw2DQ0V2GdRmvH5qWEpumK4uj5+W8YPcilQbN9fLBr3VqQOcKPHOhUZLG3LcM9X5fkOetxp48Os=</HostId></Error>'}\r\n```", "I'm getting same errors when loading this dataset", "I have figured it out. there was an option of **parquet formated files** i downloaded some from there. ", "this dataset is unfortunately no longer public", "Thanks for reporting, @IqraBaluch.\r\n\r\nWe contacted the authors and unfortunately they reported that Amazon has decided to stop distributing this dataset.", "If anyone still needs this dataset, you could find it on kaggle here : https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset", "Thanks @Maryam-Mostafa ", "@albertvillanova don't tell 'em, we have figured it out. XD" ]
2023-07-30T11:02:17
2023-08-10T05:02:36
2023-08-10T05:02:35
NONE
null
null
null
### Feature request I have been trying to load 'amazon_us_dataset" but unable to do so. `amazon_us_reviews = load_dataset('amazon_us_reviews')` `print(amazon_us_reviews)` > [ValueError: Config name is missing. Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02'] Example of usage: `load_dataset('amazon_us_reviews', 'Wireless_v1_00')`] __________________________________________________________________________ `amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00') print(amazon_us_reviews)` **ERROR** `Generating` train split: 0% 0/960872 [00:00<?, ? examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1692 ) -> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record 1694 writer.write(example, key) 11 frames KeyError: 'marketplace' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1711 e = e.__context__ -> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1713 1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ### Motivation The dataset I'm using https://huggingface.co/datasets/amazon_us_reviews ### Your contribution What is the best way to load this data
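The error message above already shows the required pattern; as a minimal sketch, the available config names can also be listed programmatically (note that, per the comments on this issue, the dataset has since been withdrawn, so these calls no longer succeed):

```python
from datasets import get_dataset_config_names, load_dataset

# List the configs of a multi-config dataset, then load one explicitly.
configs = get_dataset_config_names("amazon_us_reviews")
print(configs[:5])

ds = load_dataset("amazon_us_reviews", "Watches_v1_00")
```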
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6099/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6098/comments
https://api.github.com/repos/huggingface/datasets/issues/6098/events
https://github.com/huggingface/datasets/pull/6098
1,827,655,071
PR_kwDODunzps5WuCn1
6,098
Expanduser in save_to_disk()
{ "login": "Unknown3141592", "id": 51715864, "node_id": "MDQ6VXNlcjUxNzE1ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/51715864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Unknown3141592", "html_url": "https://github.com/Unknown3141592", "followers_url": "https://api.github.com/users/Unknown3141592/followers", "following_url": "https://api.github.com/users/Unknown3141592/following{/other_user}", "gists_url": "https://api.github.com/users/Unknown3141592/gists{/gist_id}", "starred_url": "https://api.github.com/users/Unknown3141592/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Unknown3141592/subscriptions", "organizations_url": "https://api.github.com/users/Unknown3141592/orgs", "repos_url": "https://api.github.com/users/Unknown3141592/repos", "events_url": "https://api.github.com/users/Unknown3141592/events{/privacy}", "received_events_url": "https://api.github.com/users/Unknown3141592/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-29T20:50:45
2023-07-29T20:58:57
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6098", "html_url": "https://github.com/huggingface/datasets/pull/6098", "diff_url": "https://github.com/huggingface/datasets/pull/6098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6098.patch", "merged_at": null }
Fixes #5651. The same problem occurs when loading from disk so I fixed it there too. I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. Imo this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`).
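A minimal illustration of what the expansion amounts to (plain Python, not the PR's actual diff; the path is a hypothetical example):

```python
import os

path = "~/my_datasets/foo"  # hypothetical user-supplied path

# Without expansion, filesystem calls would create a literal "~" directory;
# expanding first yields the intended absolute location.
print(os.path.expanduser(path))  # e.g. /home/<user>/my_datasets/foo
```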
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6098/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6097/comments
https://api.github.com/repos/huggingface/datasets/issues/6097/events
https://github.com/huggingface/datasets/issues/6097
1,827,054,143
I_kwDODunzps5s5qI_
6,097
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
{ "login": "aschoenauer-sebag", "id": 2538048, "node_id": "MDQ6VXNlcjI1MzgwNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aschoenauer-sebag", "html_url": "https://github.com/aschoenauer-sebag", "followers_url": "https://api.github.com/users/aschoenauer-sebag/followers", "following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}", "gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}", "starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions", "organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs", "repos_url": "https://api.github.com/users/aschoenauer-sebag/repos", "events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}", "received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it." ]
2023-07-28T20:31:59
2023-07-28T20:49:58
2023-07-28T20:49:58
NONE
null
null
null
### Describe the bug Hi team! I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class, fails to retrieve anything else but the embeddings themselves - not super useful. This is not the case if not using the `set_format` method: you can also retrieve any other feature value, such as an index/id/etc. Are you able to reproduce what I observe? ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This will return, for the resulting most similar vectors to `new_vector` - in particular it will not return the `ids` feature: ``` {'vectors': array([[random values ...]])} ``` ### Expected behavior The expected behavior happens when the `set_format` method is not called: ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) # foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This *will* return the `ids` of the similar vectors - with unfortunately a list of lists in lieu of the array I think for caching reasons - read it elsewhere ``` {'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']} ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
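A minimal sketch of the working pattern from the author's follow-up comment, using the same toy data as above (assumes `faiss` is installed for the index):

```python
from datasets import Dataset
import numpy as np

foo = Dataset.from_dict(
    {"vectors": np.random.random((100, 1024)), "ids": [str(u) for u in range(100)]}
)

# output_all_columns=True keeps the non-formatted columns (here "ids")
# in the examples returned by get_nearest_examples.
foo.set_format("numpy", ["vectors"], output_all_columns=True)
foo.add_faiss_index("vectors")

scores, res = foo.get_nearest_examples("vectors", np.random.random(1024), k=3)
print(res["ids"])
```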
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6097/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6096/comments
https://api.github.com/repos/huggingface/datasets/issues/6096/events
https://github.com/huggingface/datasets/pull/6096
1,826,731,091
PR_kwDODunzps5Wq9Hb
6,096
Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6096). All of your documentation changes will be reflected on that endpoint." ]
2023-07-28T16:36:59
2023-07-31T13:12:52
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6096", "html_url": "https://github.com/huggingface/datasets/pull/6096", "diff_url": "https://github.com/huggingface/datasets/pull/6096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6096.patch", "merged_at": null }
Hi to whoever is reading this! 🤗 (Most likely @mariosasko) ## What's in this PR? This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as requested at #6086. ## What's missing in this PR? As per `to_json`, `to_csv`, and `to_parquet` docstrings for the recently included `storage_options` arg, I've scoped it to 2.15.0, so we should check that before merging in case we want to scope that for 2.14.2 instead. Additionally, should we also add `fsspec` support for the `from_csv`, `from_json`, and `from_parquet` methods? If you want me to do so @mariosasko just let me know and I'll create another PR to support that too!
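A rough sketch of the call pattern this PR enables (the bucket paths and options are placeholders, and the exact keyword set is whatever the PR finally merges with):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# storage_options is forwarded to fsspec, so any fsspec-backed filesystem
# (s3, gcs, abfs, ...) can be targeted directly.
ds.to_json("s3://my-bucket/out.jsonl", storage_options={"anon": False})
ds.to_csv("s3://my-bucket/out.csv", storage_options={"anon": False})
ds.to_parquet("s3://my-bucket/out.parquet", storage_options={"anon": False})
```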
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6096/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6095/comments
https://api.github.com/repos/huggingface/datasets/issues/6095/events
https://github.com/huggingface/datasets/pull/6095
1,826,496,967
PR_kwDODunzps5WqJtr
6,095
Fix deprecation of errors in TextConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012497 / 0.011353 (0.001144) | 0.005355 / 0.011008 (-0.005654) | 0.106018 / 0.038508 (0.067510) | 0.093069 / 0.023109 (0.069960) | 0.394699 / 0.275898 (0.118801) | 0.449723 / 0.323480 (0.126243) | 0.006434 / 0.007986 (-0.001552) | 0.004187 / 0.004328 (-0.000141) | 0.079620 / 0.004250 (0.075370) | 0.062513 / 0.037052 (0.025460) | 0.410305 / 0.258489 (0.151816) | 0.467231 / 0.293841 (0.173390) | 0.048130 / 0.128546 (-0.080416) | 0.013747 / 0.075646 (-0.061899) | 0.357979 / 0.419271 (-0.061293) | 0.064764 / 0.043533 (0.021231) | 0.411029 / 0.255139 (0.155890) | 0.454734 / 0.283200 (0.171534) | 0.037215 / 0.141683 (-0.104468) | 1.801331 / 1.452155 (0.349176) | 1.951628 / 1.492716 (0.458912) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231073 / 0.018006 (0.213067) | 0.564179 / 0.000490 (0.563689) | 0.000947 / 0.000200 (0.000747) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030629 / 0.037411 (-0.006783) | 0.092522 / 0.014526 (0.077996) | 0.109781 / 0.176557 (-0.066775) | 0.183185 / 0.737135 (-0.553950) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.600095 / 0.215209 (0.384886) | 6.072868 / 2.077655 (3.995213) | 2.684109 
/ 1.504120 (1.179989) | 2.436204 / 1.541195 (0.895010) | 2.514667 / 1.468490 (1.046177) | 0.865455 / 4.584777 (-3.719322) | 5.245561 / 3.745712 (1.499849) | 5.628688 / 5.269862 (0.358826) | 3.457343 / 4.565676 (-1.108333) | 0.107563 / 0.424275 (-0.316712) | 0.008803 / 0.007607 (0.001196) | 0.754014 / 0.226044 (0.527970) | 7.341226 / 2.268929 (5.072297) | 3.482090 / 55.444624 (-51.962534) | 2.726071 / 6.876477 (-4.150406) | 3.168494 / 2.142072 (1.026422) | 1.023517 / 4.805227 (-3.781710) | 0.207440 / 6.500664 (-6.293224) | 0.073642 / 0.075469 (-0.001827) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.588636 / 1.841788 (-0.253152) | 23.305257 / 8.074308 (15.230949) | 22.071476 / 10.191392 (11.880084) | 0.242044 / 0.680424 (-0.438379) | 0.028830 / 0.534201 (-0.505371) | 0.461414 / 0.579283 (-0.117869) | 0.591024 / 0.434364 (0.156660) | 0.548984 / 0.540337 (0.008646) | 0.783318 / 1.386936 (-0.603618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008724 / 0.011353 (-0.002629) | 0.004638 / 0.011008 (-0.006371) | 0.081024 / 0.038508 (0.042516) | 0.077533 / 0.023109 (0.054423) | 0.444827 / 0.275898 (0.168929) | 0.507812 / 0.323480 (0.184332) | 0.006017 / 0.007986 (-0.001968) | 0.004204 / 0.004328 (-0.000124) | 0.082154 / 0.004250 (0.077904) | 0.063818 / 0.037052 (0.026765) | 0.463468 / 0.258489 (0.204979) | 0.536784 / 0.293841 (0.242943) | 0.046393 / 0.128546 (-0.082153) | 0.014349 / 0.075646 (-0.061298) | 0.089213 / 0.419271 (-0.330059) | 0.058313 / 0.043533 (0.014780) | 0.463674 / 0.255139 (0.208535) | 0.495865 / 0.283200 (0.212665) | 0.036586 / 0.141683 (-0.105096) | 1.801601 / 1.452155 (0.349447) | 1.871219 / 1.492716 (0.378502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273411 / 0.018006 (0.255405) | 0.531745 / 0.000490 (0.531255) | 0.000424 / 0.000200 (0.000224) | 0.000130 / 0.000054 (0.000076) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037689 / 0.037411 (0.000278) | 0.109544 / 0.014526 (0.095019) | 0.124053 / 0.176557 (-0.052504) | 0.179960 / 0.737135 (-0.557175) | 0.118218 / 0.296338 (-0.178120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639859 / 0.215209 (0.424650) | 6.347385 / 2.077655 (4.269730) | 2.910188 / 1.504120 (1.406068) | 2.698821 / 1.541195 (1.157626) | 2.802652 / 1.468490 (1.334161) | 0.816109 / 4.584777 (-3.768668) | 5.190313 / 3.745712 (1.444601) | 4.642684 / 5.269862 (-0.627178) | 2.948092 / 4.565676 (-1.617584) | 0.095877 / 0.424275 (-0.328398) | 0.009631 / 0.007607 (0.002024) | 0.779136 / 0.226044 (0.553091) | 7.611586 / 2.268929 (5.342658) | 3.760804 / 55.444624 (-51.683820) | 3.139355 / 6.876477 (-3.737122) | 3.419660 / 2.142072 (1.277587) | 1.036397 / 4.805227 (-3.768831) | 0.224015 / 6.500664 (-6.276649) | 0.084037 / 0.075469 (0.008568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.710608 / 1.841788 (-0.131179) | 24.447646 / 8.074308 (16.373338) | 21.345322 / 10.191392 (11.153930) | 0.232383 / 0.680424 (-0.448040) | 0.026381 / 0.534201 (-0.507820) | 0.475995 / 0.579283 (-0.103289) | 0.611939 / 0.434364 (0.177575) | 0.541441 / 0.540337 (0.001104) | 0.742796 / 1.386936 (-0.644140) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7929929525e734f7232cfc68d1d22fb8d53c54a3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006140 / 0.011353 (-0.005213) | 0.003664 / 0.011008 (-0.007344) | 0.080765 / 0.038508 (0.042257) | 0.065009 / 0.023109 (0.041900) | 0.312787 / 0.275898 (0.036889) | 0.354637 / 0.323480 (0.031157) | 0.004846 / 0.007986 (-0.003140) | 0.003019 / 0.004328 (-0.001310) | 0.062823 / 0.004250 (0.058573) | 0.050446 / 0.037052 (0.013394) | 0.314478 / 0.258489 (0.055989) | 0.360206 / 0.293841 (0.066365) | 0.027282 / 0.128546 (-0.101265) | 0.008024 / 0.075646 (-0.067622) | 0.262125 / 0.419271 (-0.157146) | 0.045793 / 0.043533 (0.002260) | 0.310508 / 0.255139 (0.055369) | 0.340899 / 0.283200 (0.057699) | 0.021850 / 0.141683 (-0.119833) | 1.510791 / 1.452155 (0.058636) | 1.570661 / 1.492716 (0.077944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.449310 / 0.000490 (0.448820) | 0.004556 / 0.000200 (0.004356) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023689 / 0.037411 (-0.013722) | 0.076316 / 0.014526 (0.061791) | 0.084800 / 0.176557 (-0.091757) | 0.153154 / 0.737135 (-0.583981) | 0.086467 / 0.296338 (-0.209871) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432254 / 0.215209 (0.217045) | 4.305098 / 2.077655 (2.227443) | 2.304267 / 1.504120 (0.800147) | 2.139503 / 1.541195 (0.598309) | 2.220414 / 1.468490 (0.751924) | 0.498595 / 4.584777 (-4.086182) | 3.058593 / 3.745712 (-0.687119) | 4.324501 / 5.269862 (-0.945361) | 2.667731 / 4.565676 (-1.897946) | 0.059917 / 0.424275 (-0.364358) | 0.006829 / 0.007607 (-0.000778) | 0.504608 / 0.226044 (0.278564) | 5.044480 / 2.268929 (2.775552) | 2.753080 / 55.444624 (-52.691545) | 2.449265 / 6.876477 (-4.427212) | 2.635113 / 2.142072 (0.493040) | 0.590760 / 4.805227 (-4.214467) | 0.130133 / 6.500664 (-6.370532) | 0.062759 / 0.075469 (-0.012710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267014 / 1.841788 (-0.574773) | 18.562890 / 8.074308 (10.488581) | 13.991257 / 10.191392 (3.799865) | 0.147108 / 0.680424 (-0.533315) | 0.017216 / 0.534201 (-0.516985) | 0.330317 / 0.579283 (-0.248966) | 0.351328 / 0.434364 (-0.083036) | 0.381097 / 0.540337 
(-0.159241) | 0.558718 / 1.386936 (-0.828218) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006385 / 0.011353 (-0.004967) | 0.003668 / 0.011008 (-0.007340) | 0.062581 / 0.038508 (0.024073) | 0.067006 / 0.023109 (0.043896) | 0.428465 / 0.275898 (0.152567) | 0.466106 / 0.323480 (0.142626) | 0.005806 / 0.007986 (-0.002180) | 0.003117 / 0.004328 (-0.001212) | 0.063554 / 0.004250 (0.059303) | 0.054404 / 0.037052 (0.017352) | 0.431168 / 0.258489 (0.172679) | 0.467578 / 0.293841 (0.173737) | 0.027779 / 0.128546 (-0.100767) | 0.008055 / 0.075646 (-0.067592) | 0.067718 / 0.419271 (-0.351554) | 0.043042 / 0.043533 (-0.000491) | 0.425926 / 0.255139 (0.170787) | 0.453699 / 0.283200 (0.170500) | 0.023495 / 0.141683 (-0.118187) | 1.435356 / 1.452155 (-0.016799) | 1.509340 / 1.492716 (0.016624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242322 / 0.018006 (0.224316) | 0.446865 / 0.000490 (0.446376) | 0.001079 / 0.000200 (0.000879) | 0.000065 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025376 / 0.037411 (-0.012035) | 0.079373 / 0.014526 (0.064847) | 0.088554 / 0.176557 (-0.088002) | 0.141026 / 0.737135 (-0.596109) | 0.090666 / 0.296338 (-0.205672) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434800 / 0.215209 (0.219590) | 4.314491 / 2.077655 (2.236836) | 2.320688 / 1.504120 (0.816568) | 2.163941 / 1.541195 (0.622747) | 
2.292576 / 1.468490 (0.824086) | 0.500226 / 4.584777 (-4.084551) | 3.114604 / 3.745712 (-0.631108) | 4.206997 / 5.269862 (-1.062864) | 2.461126 / 4.565676 (-2.104551) | 0.057717 / 0.424275 (-0.366558) | 0.006989 / 0.007607 (-0.000618) | 0.515623 / 0.226044 (0.289579) | 5.155301 / 2.268929 (2.886372) | 2.733589 / 55.444624 (-52.711035) | 2.542111 / 6.876477 (-4.334366) | 2.697035 / 2.142072 (0.554963) | 0.594213 / 4.805227 (-4.211014) | 0.128537 / 6.500664 (-6.372127) | 0.065223 / 0.075469 (-0.010246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306738 / 1.841788 (-0.535050) | 19.065370 / 8.074308 (10.991062) | 14.242096 / 10.191392 (4.050704) | 0.146177 / 0.680424 (-0.534246) | 0.017186 / 0.534201 (-0.517015) | 0.337224 / 0.579283 (-0.242059) | 0.349997 / 0.434364 (-0.084367) | 0.390408 / 0.540337 (-0.149930) | 0.524597 / 1.386936 (-0.862339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69ec36948b0ef1f194e9dcd43ec53a50b7708962 \"CML watermark\")\n" ]
2023-07-28T14:08:37
2023-07-31T05:26:32
2023-07-31T05:17:38
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6095", "html_url": "https://github.com/huggingface/datasets/pull/6095", "diff_url": "https://github.com/huggingface/datasets/pull/6095.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6095.patch", "merged_at": "2023-07-31T05:17:38" }
This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by: - #5974 ```python In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict") --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-701c27131a5d> in <module> ----> 1 ds = load_dataset("text", data_files="test.txt", errors="strict") ~/huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, ~/huggingface/datasets/src/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1830 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1831 # Instantiate the dataset builder -> 1832 builder_instance: DatasetBuilder = builder_cls( 1833 cache_dir=cache_dir, 1834 dataset_name=dataset_name, ~/huggingface/datasets/src/datasets/builder.py in __init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs) 371 if data_dir is not None: 372 config_kwargs["data_dir"] = data_dir --> 373 self.config, self.config_id = self._create_builder_config( 374 config_name=config_name, 375 custom_features=features, ~/huggingface/datasets/src/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs) 550 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 551 config_kwargs["version"] = self.VERSION --> 552 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 553 554 # otherwise use the config_kwargs to overwrite the attributes TypeError: __init__() got an unexpected keyword argument 'errors' ``` Similar to: - #6094
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6095/timeline
null
null
true
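Editorial note on the #6095 record above: the PR body describes making a builder config accept the deprecated `errors` keyword again instead of raising `TypeError`. As a hedged illustration only (not the actual `datasets` source; the class below is a hypothetical stand-in, and the `encoding_errors` replacement name is taken from the PR description), a minimal sketch of that backward-compatible pattern keeps the deprecated name as a declared dataclass field with a sentinel default and forwards it to the new field in `__post_init__`:

```python
# Illustrative sketch only: a backward-compatible way to deprecate a config field.
# The field names mirror the PR description, but this is NOT the actual `datasets` code.
import warnings
from dataclasses import dataclass
from typing import Optional


@dataclass
class TextConfigSketch:
    """Hypothetical builder config that still accepts a deprecated `errors` kwarg."""

    encoding: str = "utf-8"
    encoding_errors: Optional[str] = None  # new parameter
    errors: str = "deprecated"             # old parameter, kept so __init__ still accepts it

    def __post_init__(self):
        # Map the deprecated kwarg onto the new one and warn, instead of raising TypeError.
        if self.errors != "deprecated":
            warnings.warn(
                "'errors' was deprecated; pass 'encoding_errors' instead.",
                FutureWarning,
            )
            self.encoding_errors = self.errors


cfg = TextConfigSketch(errors="strict")
print(cfg.encoding_errors)  # -> "strict"
```

The essential point is that a dataclass-generated `__init__` only accepts declared fields, so removing `errors` outright is what produced the `TypeError` shown in the PR body.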
https://api.github.com/repos/huggingface/datasets/issues/6094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6094/comments
https://api.github.com/repos/huggingface/datasets/issues/6094/events
https://github.com/huggingface/datasets/pull/6094
1,826,293,414
PR_kwDODunzps5WpdpA
6,094
Fix deprecation of use_auth_token in DownloadConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008996 / 0.011353 (-0.002357) | 0.004976 / 0.011008 (-0.006033) | 0.114495 / 0.038508 (0.075987) | 0.083958 / 0.023109 (0.060849) | 0.408395 / 0.275898 (0.132497) | 0.456757 / 0.323480 (0.133278) | 0.006396 / 0.007986 (-0.001589) | 0.004315 / 0.004328 (-0.000014) | 0.093558 / 0.004250 (0.089307) | 0.062067 / 0.037052 (0.025014) | 0.423452 / 0.258489 (0.164963) | 0.463947 / 0.293841 (0.170106) | 0.049934 / 0.128546 (-0.078613) | 0.013937 / 0.075646 (-0.061709) | 0.365809 / 0.419271 (-0.053463) | 0.067382 / 0.043533 (0.023849) | 0.418860 / 0.255139 (0.163721) | 0.463264 / 0.283200 (0.180065) | 0.034392 / 0.141683 (-0.107291) | 1.870685 / 1.452155 (0.418530) | 1.975313 / 1.492716 (0.482597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261748 / 0.018006 (0.243742) | 0.645510 / 0.000490 (0.645020) | 0.000376 / 0.000200 (0.000176) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032129 / 0.037411 (-0.005282) | 0.104309 / 0.014526 (0.089783) | 0.113154 / 0.176557 (-0.063403) | 0.186795 / 0.737135 (-0.550341) | 0.115584 / 0.296338 (-0.180755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577755 / 0.215209 (0.362546) | 5.984988 / 2.077655 (3.907333) | 
2.581967 / 1.504120 (1.077848) | 2.305744 / 1.541195 (0.764549) | 2.359618 / 1.468490 (0.891128) | 0.882892 / 4.584777 (-3.701885) | 5.755578 / 3.745712 (2.009866) | 8.718373 / 5.269862 (3.448511) | 5.217586 / 4.565676 (0.651909) | 0.099785 / 0.424275 (-0.324490) | 0.009008 / 0.007607 (0.001401) | 0.730937 / 0.226044 (0.504892) | 7.265309 / 2.268929 (4.996381) | 3.487167 / 55.444624 (-51.957457) | 2.750090 / 6.876477 (-4.126386) | 3.060198 / 2.142072 (0.918125) | 1.069945 / 4.805227 (-3.735282) | 0.227143 / 6.500664 (-6.273521) | 0.083601 / 0.075469 (0.008132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.754375 / 1.841788 (-0.087412) | 25.448731 / 8.074308 (17.374423) | 22.385943 / 10.191392 (12.194551) | 0.249921 / 0.680424 (-0.430503) | 0.034138 / 0.534201 (-0.500063) | 0.535170 / 0.579283 (-0.044113) | 0.605474 / 0.434364 (0.171110) | 0.580025 / 0.540337 (0.039688) | 0.810537 / 1.386936 (-0.576399) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005029 / 0.011008 (-0.005979) | 0.082200 / 0.038508 (0.043691) | 0.082386 / 0.023109 (0.059277) | 0.491869 / 0.275898 (0.215971) | 0.546735 / 0.323480 (0.223255) | 0.006893 / 0.007986 (-0.001093) | 0.004571 / 0.004328 (0.000243) | 0.085361 / 0.004250 (0.081111) | 0.063342 / 0.037052 (0.026290) | 0.522522 / 0.258489 (0.264033) | 0.560784 / 0.293841 (0.266943) | 0.047685 / 0.128546 (-0.080861) | 0.017741 / 0.075646 (-0.057905) | 0.098204 / 0.419271 (-0.321067) | 0.062919 / 0.043533 (0.019386) | 0.504005 / 0.255139 (0.248866) | 0.547022 / 0.283200 (0.263823) | 0.033731 / 0.141683 (-0.107952) | 1.869765 / 1.452155 (0.417610) | 1.935867 / 1.492716 (0.443151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304756 / 0.018006 (0.286750) | 0.623647 / 0.000490 (0.623157) | 0.000508 / 0.000200 (0.000308) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.043627 / 0.037411 (0.006216) | 0.107183 / 0.014526 (0.092657) | 0.119304 / 0.176557 (-0.057253) | 0.192651 / 0.737135 (-0.544485) | 0.125118 / 0.296338 (-0.171221) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669980 / 0.215209 (0.454771) | 6.566068 / 2.077655 (4.488413) | 3.136271 / 1.504120 (1.632152) | 2.964643 / 1.541195 (1.423448) | 2.936772 / 1.468490 (1.468282) | 0.885205 / 4.584777 (-3.699572) | 5.539062 / 3.745712 (1.793350) | 5.006133 / 5.269862 (-0.263729) | 3.313697 / 4.565676 (-1.251979) | 0.102975 / 0.424275 (-0.321301) | 0.010759 / 0.007607 (0.003152) | 0.791176 / 0.226044 (0.565132) | 7.822195 / 2.268929 (5.553266) | 3.982315 / 55.444624 (-51.462309) | 3.357026 / 6.876477 (-3.519451) | 3.561307 / 2.142072 (1.419234) | 1.056966 / 4.805227 (-3.748261) | 0.220476 / 6.500664 (-6.280188) | 0.090535 / 0.075469 (0.015066) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.897984 / 1.841788 (0.056196) | 26.411411 / 8.074308 (18.337103) | 22.951939 / 10.191392 (12.760547) | 0.216091 / 0.680424 (-0.464333) | 0.037005 / 0.534201 (-0.497196) | 0.505585 / 0.579283 (-0.073698) | 0.617794 / 0.434364 (0.183430) | 0.604631 / 0.540337 (0.064293) | 0.826356 / 1.386936 (-0.560580) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca6342c0177adc3a1d114740444e207b8525ed6e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006850 / 0.011353 (-0.004503) | 0.004062 / 0.011008 (-0.006947) | 0.086587 / 0.038508 (0.048079) | 0.079587 / 0.023109 (0.056478) | 0.353601 / 0.275898 (0.077702) | 0.396399 / 0.323480 (0.072919) | 0.004182 / 0.007986 (-0.003804) | 0.004445 / 0.004328 (0.000117) | 0.065100 / 0.004250 (0.060849) | 0.057386 / 0.037052 (0.020334) | 0.356945 / 0.258489 (0.098456) | 0.407093 / 0.293841 (0.113252) | 0.031949 / 0.128546 (-0.096597) | 0.008525 / 0.075646 (-0.067121) | 0.291310 / 0.419271 (-0.127961) | 0.053638 / 0.043533 (0.010105) | 0.359381 / 0.255139 (0.104242) | 0.399473 / 0.283200 (0.116273) | 0.025880 / 0.141683 (-0.115803) | 1.487604 / 1.452155 (0.035449) | 1.550528 / 1.492716 (0.057812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201106 / 0.018006 (0.183099) | 0.457538 / 0.000490 (0.457048) | 0.003995 / 0.000200 (0.003795) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030365 / 0.037411 (-0.007046) | 0.088064 / 0.014526 (0.073538) | 0.096432 / 0.176557 (-0.080124) | 0.158063 / 0.737135 (-0.579072) | 0.098258 / 0.296338 (-0.198080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405351 / 0.215209 (0.190142) | 4.032639 / 2.077655 (1.954984) | 2.018357 / 1.504120 (0.514237) | 1.848493 / 1.541195 (0.307298) | 1.929401 / 1.468490 (0.460910) | 0.488729 / 4.584777 (-4.096048) | 3.586114 / 3.745712 (-0.159598) | 5.279054 / 5.269862 (0.009193) | 3.113275 / 4.565676 (-1.452402) | 0.057373 / 0.424275 (-0.366902) | 0.007416 / 0.007607 (-0.000191) | 0.485514 / 0.226044 (0.259470) | 4.854389 / 2.268929 (2.585461) | 2.493113 / 55.444624 (-52.951512) | 2.128836 / 6.876477 (-4.747641) | 2.383669 / 2.142072 (0.241596) | 0.588266 / 4.805227 (-4.216962) | 0.133603 / 6.500664 (-6.367061) | 0.061812 / 0.075469 (-0.013657) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260841 / 1.841788 (-0.580947) | 20.086954 / 8.074308 (12.012646) | 14.620932 / 10.191392 (4.429540) | 0.161525 / 0.680424 (-0.518899) | 0.018102 / 0.534201 (-0.516099) | 0.393810 / 0.579283 (-0.185473) | 0.406974 / 0.434364 (-0.027390) | 0.462732 / 0.540337 
(-0.077606) | 0.634221 / 1.386936 (-0.752715) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004068 / 0.011008 (-0.006940) | 0.068009 / 0.038508 (0.029501) | 0.081298 / 0.023109 (0.058189) | 0.363531 / 0.275898 (0.087633) | 0.408482 / 0.323480 (0.085002) | 0.005601 / 0.007986 (-0.002384) | 0.003385 / 0.004328 (-0.000943) | 0.068043 / 0.004250 (0.063792) | 0.059739 / 0.037052 (0.022687) | 0.374043 / 0.258489 (0.115553) | 0.407219 / 0.293841 (0.113378) | 0.031194 / 0.128546 (-0.097352) | 0.008630 / 0.075646 (-0.067017) | 0.073755 / 0.419271 (-0.345517) | 0.049831 / 0.043533 (0.006298) | 0.363664 / 0.255139 (0.108525) | 0.381515 / 0.283200 (0.098315) | 0.026331 / 0.141683 (-0.115352) | 1.507771 / 1.452155 (0.055617) | 1.554403 / 1.492716 (0.061686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226309 / 0.018006 (0.208302) | 0.452428 / 0.000490 (0.451938) | 0.000937 / 0.000200 (0.000737) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031899 / 0.037411 (-0.005513) | 0.092090 / 0.014526 (0.077564) | 0.100838 / 0.176557 (-0.075718) | 0.153722 / 0.737135 (-0.583413) | 0.101950 / 0.296338 (-0.194389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417879 / 0.215209 (0.202669) | 4.171939 / 2.077655 (2.094284) | 2.312937 / 1.504120 (0.808817) | 2.209991 / 1.541195 (0.668796) | 2.329469 
/ 1.468490 (0.860979) | 0.484576 / 4.584777 (-4.100201) | 3.659198 / 3.745712 (-0.086514) | 5.255227 / 5.269862 (-0.014634) | 3.047430 / 4.565676 (-1.518247) | 0.057029 / 0.424275 (-0.367246) | 0.007735 / 0.007607 (0.000127) | 0.499962 / 0.226044 (0.273918) | 4.991655 / 2.268929 (2.722727) | 2.755999 / 55.444624 (-52.688625) | 2.374034 / 6.876477 (-4.502443) | 2.599759 / 2.142072 (0.457687) | 0.600319 / 4.805227 (-4.204908) | 0.146176 / 6.500664 (-6.354488) | 0.062328 / 0.075469 (-0.013142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346065 / 1.841788 (-0.495722) | 20.430343 / 8.074308 (12.356035) | 14.632959 / 10.191392 (4.441567) | 0.167007 / 0.680424 (-0.513417) | 0.018588 / 0.534201 (-0.515613) | 0.396015 / 0.579283 (-0.183268) | 0.429384 / 0.434364 (-0.004980) | 0.467746 / 0.540337 (-0.072591) | 0.615166 / 1.386936 (-0.771770) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#289bcc2ae9bf98c9414b6846ae603178a1816d3f \"CML watermark\")\n" ]
2023-07-28T11:52:21
2023-07-31T05:08:41
2023-07-31T04:59:50
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6094", "html_url": "https://github.com/huggingface/datasets/pull/6094", "diff_url": "https://github.com/huggingface/datasets/pull/6094.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6094.patch", "merged_at": "2023-07-31T04:59:50" }
This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by: - #5996 ```python In [1]: from datasets import DownloadConfig In [2]: DownloadConfig(use_auth_token=False) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-41927b449e72> in <module> ----> 1 DownloadConfig(use_auth_token=False) TypeError: __init__() got an unexpected keyword argument 'use_auth_token' ``` ```python In [1]: from datasets import get_dataset_config_names In [2]: get_dataset_config_names("squad", use_auth_token=False) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-22-4671992ead50> in <module> ----> 1 get_dataset_config_names("squad", use_auth_token=False) ~/huggingface/datasets/src/datasets/inspect.py in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 349 ``` 350 """ --> 351 dataset_module = dataset_module_factory( 352 path, 353 revision=revision, ~/huggingface/datasets/src/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1374 """ 1375 if download_config is None: -> 1376 download_config = DownloadConfig(**download_kwargs) 1377 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) 1378 download_config.extract_compressed_file = True TypeError: __init__() got an unexpected keyword argument 'use_auth_token' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6094/timeline
null
null
true
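Editorial note on the #6094 record above: the PR body shows `DownloadConfig` and `**download_kwargs` paths rejecting the deprecated `use_auth_token` keyword. A hedged sketch of one way to keep call sites working, using a hypothetical helper and config class rather than the real `datasets` API, is to alias the old keyword onto `token` before the config object is built:

```python
# Illustrative sketch only: aliasing a deprecated keyword before constructing a config.
# `DownloadConfigSketch` and `build_download_config` are hypothetical stand-ins.
import warnings
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class DownloadConfigSketch:
    token: Optional[Union[str, bool]] = None


def build_download_config(**download_kwargs) -> DownloadConfigSketch:
    # Accept the old name, warn once, and forward the value to the new name.
    if "use_auth_token" in download_kwargs:
        warnings.warn(
            "'use_auth_token' was deprecated; pass 'token' instead.", FutureWarning
        )
        download_kwargs["token"] = download_kwargs.pop("use_auth_token")
    return DownloadConfigSketch(**download_kwargs)


print(build_download_config(use_auth_token=False))  # DownloadConfigSketch(token=False)
```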
https://api.github.com/repos/huggingface/datasets/issues/6093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6093/comments
https://api.github.com/repos/huggingface/datasets/issues/6093/events
https://github.com/huggingface/datasets/pull/6093
1,826,210,490
PR_kwDODunzps5WpLfh
6,093
Deprecate `download_custom`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007498 / 0.011353 (-0.003855) | 0.004158 / 0.011008 (-0.006850) | 0.087568 / 0.038508 (0.049060) | 0.083265 / 0.023109 (0.060156) | 0.378505 / 0.275898 (0.102607) | 0.399025 / 0.323480 (0.075545) | 0.006173 / 0.007986 (-0.001813) | 0.003743 / 0.004328 (-0.000586) | 0.071958 / 0.004250 (0.067707) | 0.059323 / 0.037052 (0.022271) | 0.377084 / 0.258489 (0.118595) | 0.408358 / 0.293841 (0.114517) | 0.035191 / 0.128546 (-0.093356) | 0.009408 / 0.075646 (-0.066238) | 0.312587 / 0.419271 (-0.106685) | 0.058073 / 0.043533 (0.014540) | 0.381977 / 0.255139 (0.126838) | 0.395611 / 0.283200 (0.112411) | 0.024191 / 0.141683 (-0.117491) | 1.572735 / 1.452155 (0.120581) | 1.687186 / 1.492716 (0.194470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208886 / 0.018006 (0.190879) | 0.474625 / 0.000490 (0.474135) | 0.006261 / 0.000200 (0.006061) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031401 / 0.037411 (-0.006011) | 0.086433 / 0.014526 (0.071907) | 0.108405 / 0.176557 (-0.068152) | 0.174564 / 0.737135 (-0.562571) | 0.099932 / 0.296338 (-0.196407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407059 / 0.215209 (0.191850) | 4.102056 / 2.077655 (2.024401) | 
1.975397 / 1.504120 (0.471277) | 1.807117 / 1.541195 (0.265922) | 1.908667 / 1.468490 (0.440177) | 0.525880 / 4.584777 (-4.058897) | 3.899639 / 3.745712 (0.153927) | 4.358664 / 5.269862 (-0.911198) | 2.586185 / 4.565676 (-1.979492) | 0.061967 / 0.424275 (-0.362308) | 0.007656 / 0.007607 (0.000049) | 0.504851 / 0.226044 (0.278807) | 5.004429 / 2.268929 (2.735500) | 2.515540 / 55.444624 (-52.929084) | 2.183142 / 6.876477 (-4.693334) | 2.369835 / 2.142072 (0.227763) | 0.623527 / 4.805227 (-4.181700) | 0.145105 / 6.500664 (-6.355559) | 0.063924 / 0.075469 (-0.011546) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472661 / 1.841788 (-0.369126) | 21.781655 / 8.074308 (13.707347) | 15.628820 / 10.191392 (5.437428) | 0.182342 / 0.680424 (-0.498082) | 0.021139 / 0.534201 (-0.513062) | 0.438610 / 0.579283 (-0.140673) | 0.451343 / 0.434364 (0.016979) | 0.563320 / 0.540337 (0.022983) | 0.740976 / 1.386936 (-0.645960) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007492 / 0.011353 (-0.003861) | 0.004429 / 0.011008 (-0.006579) | 0.068517 / 0.038508 (0.030008) | 0.078533 / 0.023109 (0.055424) | 0.383530 / 0.275898 (0.107632) | 0.435061 / 0.323480 (0.111581) | 0.005955 / 0.007986 (-0.002030) | 0.003645 / 0.004328 (-0.000683) | 0.068792 / 0.004250 (0.064541) | 0.062452 / 0.037052 (0.025399) | 0.408768 / 0.258489 (0.150279) | 0.438538 / 0.293841 (0.144697) | 0.032038 / 0.128546 (-0.096508) | 0.009196 / 0.075646 (-0.066450) | 0.074495 / 0.419271 (-0.344776) | 0.051322 / 0.043533 (0.007789) | 0.394458 / 0.255139 (0.139319) | 0.424763 / 0.283200 (0.141564) | 0.024890 / 0.141683 (-0.116793) | 1.568322 / 1.452155 (0.116167) | 1.703903 / 1.492716 (0.211187) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249630 / 0.018006 (0.231624) | 0.471412 / 0.000490 (0.470923) | 0.000435 / 0.000200 (0.000235) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033054 / 0.037411 (-0.004358) | 0.100150 / 0.014526 (0.085624) | 0.101704 / 0.176557 (-0.074853) | 0.164031 / 0.737135 (-0.573104) | 0.112497 / 0.296338 (-0.183841) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487150 / 0.215209 (0.271941) | 4.662335 / 2.077655 (2.584681) | 2.477285 / 1.504120 (0.973165) | 2.294033 / 1.541195 (0.752838) | 2.380143 / 1.468490 (0.911653) | 0.519182 / 4.584777 (-4.065595) | 3.983589 / 3.745712 (0.237877) | 3.669895 / 5.269862 (-1.599967) | 2.267147 / 4.565676 (-2.298529) | 0.063300 / 0.424275 (-0.360975) | 0.008839 / 0.007607 (0.001232) | 0.566766 / 0.226044 (0.340721) | 5.533475 / 2.268929 (3.264546) | 3.033412 / 55.444624 (-52.411212) | 2.701793 / 6.876477 (-4.174684) | 2.899444 / 2.142072 (0.757372) | 0.614236 / 4.805227 (-4.190991) | 0.139533 / 6.500664 (-6.361131) | 0.067537 / 0.075469 (-0.007932) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505572 / 1.841788 (-0.336216) | 22.859062 / 8.074308 (14.784754) | 15.044777 / 10.191392 (4.853385) | 0.169153 / 0.680424 (-0.511271) | 0.021027 / 0.534201 (-0.513174) | 0.447979 / 0.579283 (-0.131304) | 0.460676 / 0.434364 (0.026312) | 0.506327 / 0.540337 (-0.034010) | 0.737880 / 1.386936 (-0.649057) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db7180eb7e3ebf52b9d1f2c6629db6d92d8a29ba \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003692 / 0.011008 (-0.007316) | 0.080606 / 0.038508 (0.042098) | 0.062014 / 0.023109 (0.038905) | 0.391886 / 0.275898 (0.115988) | 0.423978 / 0.323480 (0.100498) | 0.004968 / 0.007986 (-0.003017) | 0.002911 / 0.004328 (-0.001417) | 0.062867 / 0.004250 (0.058617) | 0.049493 / 0.037052 (0.012441) | 0.395656 / 0.258489 (0.137167) | 0.432406 / 0.293841 (0.138565) | 0.027242 / 0.128546 (-0.101304) | 0.007938 / 0.075646 (-0.067709) | 0.261703 / 0.419271 (-0.157569) | 0.045922 / 0.043533 (0.002389) | 0.391544 / 0.255139 (0.136405) | 0.417902 / 0.283200 (0.134703) | 0.021339 / 0.141683 (-0.120344) | 1.508391 / 1.452155 (0.056236) | 1.518970 / 1.492716 (0.026254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181159 / 0.018006 (0.163153) | 0.431402 / 0.000490 (0.430912) | 0.003849 / 0.000200 (0.003649) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024498 / 0.037411 (-0.012914) | 0.072758 / 0.014526 (0.058233) | 0.084910 / 0.176557 (-0.091646) | 0.148314 / 0.737135 (-0.588821) | 0.085212 / 0.296338 (-0.211126) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386693 / 0.215209 (0.171484) | 3.852652 / 2.077655 (1.774997) | 1.891758 / 1.504120 (0.387638) | 1.718793 / 1.541195 (0.177598) | 1.747595 / 1.468490 (0.279104) | 0.498593 / 4.584777 (-4.086184) | 3.057907 / 3.745712 (-0.687805) | 4.728449 / 5.269862 (-0.541413) | 2.966368 / 4.565676 (-1.599308) | 0.057538 / 0.424275 (-0.366737) | 0.006415 / 0.007607 (-0.001192) | 0.461652 / 0.226044 (0.235608) | 4.625944 / 2.268929 (2.357015) | 2.306938 / 55.444624 (-53.137686) | 1.974670 / 6.876477 (-4.901806) | 2.146327 / 2.142072 (0.004254) | 0.585033 / 4.805227 (-4.220195) | 0.125936 / 6.500664 (-6.374728) | 0.062365 / 0.075469 (-0.013104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263415 / 1.841788 (-0.578373) | 18.380651 / 8.074308 (10.306343) | 13.853410 / 10.191392 (3.662018) | 0.144674 / 0.680424 (-0.535749) | 0.016833 / 0.534201 (-0.517368) | 0.330812 / 0.579283 (-0.248471) | 0.357553 / 0.434364 (-0.076810) | 0.383529 / 0.540337 
(-0.156809) | 0.558923 / 1.386936 (-0.828013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006074 / 0.011353 (-0.005278) | 0.003655 / 0.011008 (-0.007353) | 0.062981 / 0.038508 (0.024473) | 0.061457 / 0.023109 (0.038348) | 0.366471 / 0.275898 (0.090573) | 0.408463 / 0.323480 (0.084983) | 0.004854 / 0.007986 (-0.003132) | 0.002916 / 0.004328 (-0.001412) | 0.062745 / 0.004250 (0.058494) | 0.051136 / 0.037052 (0.014084) | 0.380313 / 0.258489 (0.121824) | 0.416945 / 0.293841 (0.123104) | 0.027228 / 0.128546 (-0.101318) | 0.008031 / 0.075646 (-0.067615) | 0.067941 / 0.419271 (-0.351331) | 0.042886 / 0.043533 (-0.000647) | 0.370112 / 0.255139 (0.114973) | 0.397111 / 0.283200 (0.113911) | 0.023063 / 0.141683 (-0.118620) | 1.476955 / 1.452155 (0.024800) | 1.534783 / 1.492716 (0.042066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231462 / 0.018006 (0.213456) | 0.439559 / 0.000490 (0.439069) | 0.000364 / 0.000200 (0.000164) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026925 / 0.037411 (-0.010486) | 0.079623 / 0.014526 (0.065097) | 0.088694 / 0.176557 (-0.087862) | 0.143163 / 0.737135 (-0.593972) | 0.089900 / 0.296338 (-0.206438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451429 / 0.215209 (0.236220) | 4.510723 / 2.077655 (2.433069) | 2.491853 / 1.504120 (0.987733) | 2.334670 / 1.541195 (0.793475) | 2.395519 
/ 1.468490 (0.927029) | 0.501369 / 4.584777 (-4.083408) | 3.014019 / 3.745712 (-0.731693) | 2.809199 / 5.269862 (-2.460662) | 1.842195 / 4.565676 (-2.723481) | 0.057675 / 0.424275 (-0.366600) | 0.006742 / 0.007607 (-0.000865) | 0.524402 / 0.226044 (0.298358) | 5.245296 / 2.268929 (2.976367) | 2.957990 / 55.444624 (-52.486634) | 2.649807 / 6.876477 (-4.226670) | 2.755909 / 2.142072 (0.613836) | 0.589610 / 4.805227 (-4.215617) | 0.125708 / 6.500664 (-6.374956) | 0.062237 / 0.075469 (-0.013232) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362758 / 1.841788 (-0.479030) | 18.343694 / 8.074308 (10.269386) | 13.621521 / 10.191392 (3.430129) | 0.128866 / 0.680424 (-0.551558) | 0.016608 / 0.534201 (-0.517593) | 0.333071 / 0.579283 (-0.246212) | 0.341917 / 0.434364 (-0.092447) | 0.381075 / 0.540337 (-0.159263) | 0.512485 / 1.386936 (-0.874451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab3f0165d4a2a8ab1aee1ebc4628893e17e27387 \"CML watermark\")\n", "I forgot to mention this in the initial comment, but only one public dataset (excluding gated) uses this method - `pg19`, which I just fixed.\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.004791 / 0.011008 (-0.006217) | 0.102596 / 0.038508 (0.064088) | 0.087678 / 0.023109 (0.064569) | 0.373858 / 0.275898 (0.097960) | 0.416643 / 0.323480 (0.093163) | 0.006147 / 0.007986 (-0.001839) | 0.003837 / 0.004328 (-0.000491) | 0.076706 / 0.004250 (0.072456) | 0.063449 / 0.037052 (0.026396) | 0.378392 / 0.258489 (0.119903) | 0.431768 / 0.293841 (0.137927) | 0.036648 / 0.128546 (-0.091898) | 0.010042 / 0.075646 (-0.065604) | 0.350277 / 0.419271 (-0.068995) | 0.062892 / 0.043533 (0.019359) | 0.376151 / 0.255139 (0.121012) | 0.420929 / 0.283200 (0.137729) | 0.027816 / 0.141683 (-0.113867) | 1.791607 / 1.452155 (0.339452) | 1.903045 / 1.492716 (0.410328) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row 
| get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224688 / 0.018006 (0.206682) | 0.491941 / 0.000490 (0.491451) | 0.004482 / 0.000200 (0.004282) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033495 / 0.037411 (-0.003917) | 0.099855 / 0.014526 (0.085329) | 0.114593 / 0.176557 (-0.061964) | 0.190947 / 0.737135 (-0.546189) | 0.116202 / 0.296338 (-0.180136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488581 / 0.215209 (0.273372) | 4.869531 / 2.077655 (2.791876) | 2.527920 / 1.504120 (1.023800) | 2.340021 / 1.541195 (0.798826) | 2.432661 / 1.468490 (0.964171) | 0.569646 / 4.584777 (-4.015131) | 4.392036 / 3.745712 (0.646324) | 4.987253 / 5.269862 (-0.282608) | 2.866604 / 4.565676 (-1.699073) | 0.067393 / 0.424275 (-0.356882) | 0.008759 / 0.007607 (0.001152) | 0.584327 / 0.226044 (0.358283) | 5.853000 / 2.268929 (3.584072) | 3.206721 / 55.444624 (-52.237904) | 2.730867 / 6.876477 (-4.145610) | 2.944814 / 2.142072 (0.802742) | 0.703336 / 4.805227 (-4.101891) | 0.173985 / 6.500664 (-6.326679) | 0.075333 / 0.075469 (-0.000137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.519755 / 1.841788 (-0.322033) | 22.918038 / 8.074308 (14.843730) | 17.211160 / 10.191392 (7.019768) | 0.196941 / 0.680424 (-0.483483) | 0.021833 / 0.534201 (-0.512368) | 0.476835 / 0.579283 (-0.102448) | 0.464513 / 0.434364 (0.030149) | 0.559180 / 0.540337 (0.018843) | 0.748232 / 1.386936 (-0.638704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008461 / 0.011353 (-0.002892) | 0.004799 / 0.011008 (-0.006209) | 0.077466 / 0.038508 (0.038958) | 0.103562 / 0.023109 (0.080453) | 0.453661 / 0.275898 (0.177763) | 0.531126 / 0.323480 (0.207647) | 0.006618 / 0.007986 (-0.001367) | 0.004048 / 0.004328 (-0.000280) | 0.075446 / 0.004250 (0.071196) | 0.072815 / 0.037052 (0.035762) | 0.497145 / 0.258489 (0.238656) | 0.533828 / 0.293841 (0.239987) | 0.037657 / 0.128546 (-0.090890) | 0.010139 / 0.075646 (-0.065507) | 0.083759 / 0.419271 (-0.335512) | 0.061401 / 0.043533 (0.017868) | 0.441785 / 0.255139 (0.186646) | 0.491678 / 0.283200 (0.208479) | 0.033100 / 0.141683 (-0.108583) | 1.753612 / 1.452155 (0.301458) | 1.838956 / 1.492716 (0.346240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.395023 / 0.018006 (0.377017) | 0.509362 / 0.000490 (0.508872) | 0.060742 / 0.000200 (0.060542) | 0.000545 / 0.000054 (0.000491) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039327 / 0.037411 (0.001916) | 0.117345 / 0.014526 (0.102819) | 0.124540 / 0.176557 (-0.052017) | 0.200743 / 0.737135 (-0.536392) | 0.126750 / 0.296338 (-0.169589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488597 / 0.215209 (0.273388) | 4.875534 / 2.077655 (2.797880) | 2.714364 / 1.504120 (1.210244) | 2.603707 / 1.541195 (1.062513) | 2.733547 / 1.468490 (1.265057) | 0.575183 / 4.584777 (-4.009594) | 4.126096 / 3.745712 (0.380384) | 3.853803 / 5.269862 (-1.416058) | 2.395160 / 4.565676 (-2.170516) | 0.067391 / 0.424275 (-0.356884) | 0.009108 / 0.007607 (0.001501) | 0.585865 / 0.226044 (0.359820) | 5.864878 / 2.268929 (3.595949) | 3.153369 / 55.444624 (-52.291256) | 2.759064 / 6.876477 (-4.117413) | 3.032489 / 2.142072 (0.890416) | 0.702615 / 4.805227 (-4.102613) | 0.160034 / 6.500664 (-6.340630) | 0.077294 / 0.075469 (0.001825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595069 / 1.841788 (-0.246719) | 23.231191 / 8.074308 (15.156883) | 16.365137 / 10.191392 (6.173745) | 0.188360 / 0.680424 (-0.492063) | 0.021704 / 0.534201 (-0.512497) | 0.469996 / 0.579283 (-0.109287) | 0.463255 / 
0.434364 (0.028891) | 0.560506 / 0.540337 (0.020169) | 0.751006 / 1.386936 (-0.635930) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#50d9a70c666ff46ff9974c47cedc77d9f88d6471 \"CML watermark\")\n" ]
2023-07-28T10:49:06
2023-07-28T11:40:37
2023-07-28T11:30:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6093", "html_url": "https://github.com/huggingface/datasets/pull/6093", "diff_url": "https://github.com/huggingface/datasets/pull/6093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6093.patch", "merged_at": "2023-07-28T11:30:02" }
Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead. We should deprecate this method because it is not compatible with streaming, and implementing a streaming version of it is hard, if not impossible. There have been requests on the forum to implement a streaming version of this method, but these seem to stem from a tip in the docs that "promotes" this method (this PR removes that tip).
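A minimal sketch of the `fsspec`-based alternative mentioned in this PR description; the URL is a placeholder chosen for illustration, not a value from the PR, and this is not the `datasets` implementation itself.

```python
import fsspec

# Placeholder URL, used purely for illustration.
url = "https://example.com/data/archive.zip"

# Cacheable variant: chaining the "filecache" protocol stores the remote file
# locally on first access and reuses the cached copy on later reads.
with fsspec.open(f"filecache::{url}", mode="rb") as f:
    data = f.read()

# Non-cacheable variant: a plain streaming read over HTTP.
with fsspec.open(url, mode="rb") as f:
    first_chunk = f.read(1024)
```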
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6093/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6092/comments
https://api.github.com/repos/huggingface/datasets/issues/6092/events
https://github.com/huggingface/datasets/pull/6092
1,826,111,806
PR_kwDODunzps5Wo1mh
6,092
Minor fix in `iter_files` for hidden files
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007873 / 0.011353 (-0.003480) | 0.004585 / 0.011008 (-0.006423) | 0.101622 / 0.038508 (0.063114) | 0.092459 / 0.023109 (0.069350) | 0.365157 / 0.275898 (0.089259) | 0.405943 / 0.323480 (0.082463) | 0.006229 / 0.007986 (-0.001756) | 0.003811 / 0.004328 (-0.000518) | 0.073831 / 0.004250 (0.069580) | 0.065097 / 0.037052 (0.028045) | 0.378912 / 0.258489 (0.120423) | 0.422174 / 0.293841 (0.128333) | 0.036244 / 0.128546 (-0.092302) | 0.009677 / 0.075646 (-0.065970) | 0.345164 / 0.419271 (-0.074107) | 0.061632 / 0.043533 (0.018099) | 0.370350 / 0.255139 (0.115211) | 0.418245 / 0.283200 (0.135046) | 0.027272 / 0.141683 (-0.114411) | 1.774047 / 1.452155 (0.321892) | 1.880278 / 1.492716 (0.387562) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217238 / 0.018006 (0.199231) | 0.489560 / 0.000490 (0.489071) | 0.004013 / 0.000200 (0.003813) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034139 / 0.037411 (-0.003272) | 0.103831 / 0.014526 (0.089305) | 0.114353 / 0.176557 (-0.062204) | 0.182034 / 0.737135 (-0.555102) | 0.116171 / 0.296338 (-0.180168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448658 / 0.215209 (0.233449) | 4.520849 / 2.077655 (2.443195) | 
2.216121 / 1.504120 (0.712001) | 2.034596 / 1.541195 (0.493402) | 2.193216 / 1.468490 (0.724725) | 0.568166 / 4.584777 (-4.016611) | 4.133587 / 3.745712 (0.387875) | 4.641117 / 5.269862 (-0.628744) | 2.772913 / 4.565676 (-1.792764) | 0.067664 / 0.424275 (-0.356611) | 0.008719 / 0.007607 (0.001112) | 0.547723 / 0.226044 (0.321678) | 5.438325 / 2.268929 (3.169397) | 2.877667 / 55.444624 (-52.566958) | 2.477503 / 6.876477 (-4.398974) | 2.688209 / 2.142072 (0.546136) | 0.692593 / 4.805227 (-4.112634) | 0.154549 / 6.500664 (-6.346115) | 0.073286 / 0.075469 (-0.002183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610927 / 1.841788 (-0.230861) | 23.413345 / 8.074308 (15.339037) | 16.851819 / 10.191392 (6.660427) | 0.170076 / 0.680424 (-0.510348) | 0.021428 / 0.534201 (-0.512773) | 0.468184 / 0.579283 (-0.111099) | 0.491820 / 0.434364 (0.057456) | 0.553453 / 0.540337 (0.013115) | 0.762303 / 1.386936 (-0.624633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008033 / 0.011353 (-0.003320) | 0.004638 / 0.011008 (-0.006370) | 0.077044 / 0.038508 (0.038536) | 0.096529 / 0.023109 (0.073420) | 0.428735 / 0.275898 (0.152837) | 0.477303 / 0.323480 (0.153823) | 0.006040 / 0.007986 (-0.001946) | 0.003808 / 0.004328 (-0.000521) | 0.076042 / 0.004250 (0.071791) | 0.066123 / 0.037052 (0.029071) | 0.445482 / 0.258489 (0.186993) | 0.481350 / 0.293841 (0.187509) | 0.036951 / 0.128546 (-0.091595) | 0.009944 / 0.075646 (-0.065703) | 0.082731 / 0.419271 (-0.336541) | 0.057490 / 0.043533 (0.013958) | 0.432668 / 0.255139 (0.177529) | 0.461146 / 0.283200 (0.177947) | 0.027330 / 0.141683 (-0.114353) | 1.784195 / 1.452155 (0.332040) | 1.834776 / 1.492716 (0.342059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254104 / 0.018006 (0.236097) | 0.475810 / 0.000490 (0.475321) | 0.000459 / 0.000200 (0.000259) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037058 / 0.037411 (-0.000353) | 0.114962 / 0.014526 (0.100436) | 0.123725 / 0.176557 (-0.052832) | 0.188885 / 0.737135 (-0.548251) | 0.125668 / 0.296338 (-0.170670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492627 / 0.215209 (0.277418) | 4.900625 / 2.077655 (2.822970) | 2.546349 / 1.504120 (1.042229) | 2.360350 / 1.541195 (0.819155) | 2.477975 / 1.468490 (1.009485) | 0.574042 / 4.584777 (-4.010735) | 4.408414 / 3.745712 (0.662702) | 3.836640 / 5.269862 (-1.433222) | 2.438450 / 4.565676 (-2.127227) | 0.067706 / 0.424275 (-0.356569) | 0.009165 / 0.007607 (0.001558) | 0.580313 / 0.226044 (0.354269) | 5.798211 / 2.268929 (3.529283) | 3.098480 / 55.444624 (-52.346145) | 2.740180 / 6.876477 (-4.136296) | 2.984548 / 2.142072 (0.842476) | 0.702550 / 4.805227 (-4.102677) | 0.158248 / 6.500664 (-6.342416) | 0.073999 / 0.075469 (-0.001470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.636034 / 1.841788 (-0.205754) | 24.068000 / 8.074308 (15.993692) | 17.123987 / 10.191392 (6.932595) | 0.210101 / 0.680424 (-0.470323) | 0.022555 / 0.534201 (-0.511646) | 0.509354 / 0.579283 (-0.069929) | 0.540739 / 0.434364 (0.106375) | 0.546048 / 0.540337 (0.005711) | 0.719155 / 1.386936 (-0.667781) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40530382ba98f54445de8820943b1236d4a4704f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007342 / 0.011353 (-0.004010) | 0.004579 / 0.011008 (-0.006429) | 0.087050 / 0.038508 (0.048542) | 0.089001 / 0.023109 (0.065892) | 0.307319 / 0.275898 (0.031421) | 0.377573 / 0.323480 (0.054093) | 0.006472 / 0.007986 (-0.001514) | 0.004287 / 0.004328 (-0.000041) | 0.067226 / 0.004250 (0.062976) | 0.063147 / 0.037052 (0.026094) | 0.314541 / 0.258489 (0.056052) | 0.369919 / 0.293841 (0.076078) | 0.031283 / 0.128546 (-0.097263) | 0.009175 / 0.075646 (-0.066471) | 0.289211 / 0.419271 (-0.130061) | 0.053444 / 0.043533 (0.009911) | 0.307308 / 0.255139 (0.052169) | 0.346221 / 0.283200 (0.063021) | 0.027948 / 0.141683 (-0.113735) | 1.475177 / 1.452155 (0.023022) | 1.575971 / 1.492716 (0.083255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291092 / 0.018006 (0.273086) | 0.696951 / 0.000490 (0.696461) | 0.005211 / 0.000200 (0.005011) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031787 / 0.037411 (-0.005625) | 0.084382 / 0.014526 (0.069857) | 0.106474 / 0.176557 (-0.070083) | 0.161472 / 0.737135 (-0.575663) | 0.108650 / 0.296338 (-0.187688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379656 / 0.215209 (0.164447) | 3.784072 / 2.077655 (1.706417) | 1.826580 / 1.504120 (0.322460) | 1.654916 / 1.541195 (0.113721) | 1.730698 / 1.468490 (0.262208) | 0.478003 / 4.584777 (-4.106774) | 3.564920 / 3.745712 (-0.180792) | 5.824873 / 5.269862 (0.555012) | 3.454563 / 4.565676 (-1.111113) | 0.056646 / 0.424275 (-0.367629) | 0.007410 / 0.007607 (-0.000197) | 0.461781 / 0.226044 (0.235737) | 4.600928 / 2.268929 (2.331999) | 2.351887 / 55.444624 (-53.092738) | 1.986470 / 6.876477 (-4.890007) | 2.311623 / 2.142072 (0.169551) | 0.571247 / 4.805227 (-4.233980) | 0.132191 / 6.500664 (-6.368473) | 0.059943 / 0.075469 (-0.015526) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253142 / 1.841788 (-0.588646) | 21.294983 / 8.074308 (13.220675) | 14.522429 / 10.191392 (4.331037) | 0.166663 / 0.680424 (-0.513761) | 0.019694 / 0.534201 (-0.514507) | 0.395908 / 0.579283 (-0.183375) | 0.413283 / 0.434364 (-0.021081) | 0.457739 / 0.540337 
(-0.082599) | 0.664361 / 1.386936 (-0.722575) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007228 / 0.011353 (-0.004124) | 0.004941 / 0.011008 (-0.006067) | 0.065381 / 0.038508 (0.026873) | 0.090790 / 0.023109 (0.067681) | 0.391315 / 0.275898 (0.115417) | 0.416518 / 0.323480 (0.093038) | 0.007015 / 0.007986 (-0.000970) | 0.004417 / 0.004328 (0.000089) | 0.067235 / 0.004250 (0.062985) | 0.068092 / 0.037052 (0.031039) | 0.403031 / 0.258489 (0.144542) | 0.434013 / 0.293841 (0.140172) | 0.032004 / 0.128546 (-0.096542) | 0.009242 / 0.075646 (-0.066404) | 0.071222 / 0.419271 (-0.348050) | 0.054207 / 0.043533 (0.010674) | 0.386198 / 0.255139 (0.131059) | 0.404350 / 0.283200 (0.121150) | 0.036284 / 0.141683 (-0.105399) | 1.488814 / 1.452155 (0.036660) | 1.587785 / 1.492716 (0.095069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313760 / 0.018006 (0.295754) | 0.747778 / 0.000490 (0.747289) | 0.003307 / 0.000200 (0.003107) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034321 / 0.037411 (-0.003090) | 0.088266 / 0.014526 (0.073740) | 0.112874 / 0.176557 (-0.063682) | 0.171554 / 0.737135 (-0.565581) | 0.111356 / 0.296338 (-0.184982) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422624 / 0.215209 (0.207415) | 4.212079 / 2.077655 (2.134425) | 2.242742 / 1.504120 (0.738622) | 2.072555 / 1.541195 (0.531360) | 2.192648 / 
1.468490 (0.724158) | 0.488214 / 4.584777 (-4.096563) | 3.597013 / 3.745712 (-0.148699) | 3.477556 / 5.269862 (-1.792305) | 2.184340 / 4.565676 (-2.381337) | 0.057170 / 0.424275 (-0.367105) | 0.007772 / 0.007607 (0.000165) | 0.499455 / 0.226044 (0.273411) | 4.988953 / 2.268929 (2.720024) | 2.797894 / 55.444624 (-52.646731) | 2.402215 / 6.876477 (-4.474262) | 2.725069 / 2.142072 (0.582997) | 0.596213 / 4.805227 (-4.209014) | 0.136564 / 6.500664 (-6.364100) | 0.061799 / 0.075469 (-0.013670) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.360739 / 1.841788 (-0.481049) | 21.846457 / 8.074308 (13.772149) | 14.568842 / 10.191392 (4.377450) | 0.168980 / 0.680424 (-0.511444) | 0.018795 / 0.534201 (-0.515406) | 0.396173 / 0.579283 (-0.183110) | 0.418651 / 0.434364 (-0.015713) | 0.480042 / 0.540337 (-0.060295) | 0.650803 / 1.386936 (-0.736133) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7d460304487d4daab0a64ca0ca707e896367ca1 \"CML watermark\")\n" ]
2023-07-28T09:50:12
2023-07-28T10:59:28
2023-07-28T10:50:10
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6092", "html_url": "https://github.com/huggingface/datasets/pull/6092", "diff_url": "https://github.com/huggingface/datasets/pull/6092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6092.patch", "merged_at": "2023-07-28T10:50:09" }
Fix #6090
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6092/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6091/comments
https://api.github.com/repos/huggingface/datasets/issues/6091/events
https://github.com/huggingface/datasets/pull/6091
1,826,086,487
PR_kwDODunzps5Wov9Q
6,091
Bump fsspec from 2021.11.1 to 2022.3.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004077 / 0.011008 (-0.006931) | 0.084905 / 0.038508 (0.046397) | 0.074004 / 0.023109 (0.050895) | 0.315968 / 0.275898 (0.040070) | 0.351594 / 0.323480 (0.028114) | 0.005623 / 0.007986 (-0.002362) | 0.003476 / 0.004328 (-0.000852) | 0.065089 / 0.004250 (0.060839) | 0.054683 / 0.037052 (0.017631) | 0.314983 / 0.258489 (0.056494) | 0.371776 / 0.293841 (0.077935) | 0.031727 / 0.128546 (-0.096819) | 0.008786 / 0.075646 (-0.066860) | 0.289905 / 0.419271 (-0.129367) | 0.053340 / 0.043533 (0.009807) | 0.311802 / 0.255139 (0.056663) | 0.351927 / 0.283200 (0.068727) | 0.024453 / 0.141683 (-0.117229) | 1.491727 / 1.452155 (0.039572) | 1.585027 / 1.492716 (0.092310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238757 / 0.018006 (0.220750) | 0.557691 / 0.000490 (0.557202) | 0.005158 / 0.000200 (0.004958) | 0.000204 / 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028435 / 0.037411 (-0.008977) | 0.082219 / 0.014526 (0.067693) | 0.096932 / 0.176557 (-0.079625) | 0.153802 / 0.737135 (-0.583333) | 0.098338 / 0.296338 (-0.198001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383448 / 0.215209 (0.168238) | 3.816074 / 2.077655 (1.738420) | 
1.835111 / 1.504120 (0.330991) | 1.662326 / 1.541195 (0.121131) | 1.720202 / 1.468490 (0.251712) | 0.483107 / 4.584777 (-4.101669) | 3.648528 / 3.745712 (-0.097184) | 4.020929 / 5.269862 (-1.248932) | 2.433141 / 4.565676 (-2.132536) | 0.057081 / 0.424275 (-0.367194) | 0.007303 / 0.007607 (-0.000304) | 0.461366 / 0.226044 (0.235322) | 4.609090 / 2.268929 (2.340162) | 2.355940 / 55.444624 (-53.088684) | 1.989833 / 6.876477 (-4.886644) | 2.201451 / 2.142072 (0.059378) | 0.586156 / 4.805227 (-4.219071) | 0.133486 / 6.500664 (-6.367178) | 0.060062 / 0.075469 (-0.015407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247845 / 1.841788 (-0.593942) | 19.624252 / 8.074308 (11.549944) | 14.305975 / 10.191392 (4.114583) | 0.168687 / 0.680424 (-0.511737) | 0.018075 / 0.534201 (-0.516126) | 0.393859 / 0.579283 (-0.185424) | 0.407272 / 0.434364 (-0.027092) | 0.463760 / 0.540337 (-0.076578) | 0.629930 / 1.386936 (-0.757006) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006760 / 0.011353 (-0.004593) | 0.004345 / 0.011008 (-0.006663) | 0.064379 / 0.038508 (0.025871) | 0.078295 / 0.023109 (0.055186) | 0.364532 / 0.275898 (0.088633) | 0.395852 / 0.323480 (0.072372) | 0.005659 / 0.007986 (-0.002327) | 0.003515 / 0.004328 (-0.000813) | 0.065030 / 0.004250 (0.060780) | 0.059950 / 0.037052 (0.022898) | 0.375420 / 0.258489 (0.116931) | 0.411579 / 0.293841 (0.117738) | 0.031575 / 0.128546 (-0.096972) | 0.008737 / 0.075646 (-0.066910) | 0.070350 / 0.419271 (-0.348922) | 0.050607 / 0.043533 (0.007075) | 0.359785 / 0.255139 (0.104646) | 0.382638 / 0.283200 (0.099438) | 0.025533 / 0.141683 (-0.116150) | 1.564379 / 1.452155 (0.112225) | 1.620642 / 1.492716 (0.127925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212779 / 0.018006 (0.194773) | 0.563827 / 0.000490 (0.563337) | 0.003767 / 0.000200 (0.003567) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030275 / 0.037411 (-0.007136) | 0.088108 / 0.014526 (0.073582) | 0.102454 / 0.176557 (-0.074103) | 0.156107 / 0.737135 (-0.581028) | 0.103961 / 0.296338 (-0.192378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421395 / 0.215209 (0.206186) | 4.204935 / 2.077655 (2.127280) | 2.144929 / 1.504120 (0.640809) | 1.999341 / 1.541195 (0.458147) | 2.066966 / 1.468490 (0.598476) | 0.486135 / 4.584777 (-4.098642) | 3.628139 / 3.745712 (-0.117573) | 5.652683 / 5.269862 (0.382821) | 3.216721 / 4.565676 (-1.348956) | 0.057513 / 0.424275 (-0.366762) | 0.007553 / 0.007607 (-0.000055) | 0.494470 / 0.226044 (0.268426) | 4.949343 / 2.268929 (2.680414) | 2.654222 / 55.444624 (-52.790402) | 2.322257 / 6.876477 (-4.554220) | 2.555633 / 2.142072 (0.413561) | 0.588355 / 4.805227 (-4.216872) | 0.134481 / 6.500664 (-6.366183) | 0.062415 / 0.075469 (-0.013054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377578 / 1.841788 (-0.464209) | 19.805201 / 8.074308 (11.730893) | 14.128536 / 10.191392 (3.937144) | 0.164343 / 0.680424 (-0.516081) | 0.018553 / 0.534201 (-0.515648) | 0.398191 / 0.579283 (-0.181093) | 0.414268 / 0.434364 (-0.020096) | 0.462270 / 0.540337 (-0.078068) | 0.608497 / 1.386936 (-0.778439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3af05ba487f361fae90a4c80af72de5c4ed70162 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006966 / 0.011353 (-0.004387) | 0.004339 / 0.011008 (-0.006669) | 0.086682 / 0.038508 (0.048174) | 0.086143 / 0.023109 (0.063034) | 0.316106 / 0.275898 (0.040208) | 0.351422 / 0.323480 (0.027942) | 0.005916 / 0.007986 (-0.002069) | 0.003630 / 0.004328 (-0.000698) | 0.066980 / 0.004250 (0.062730) | 0.060031 / 0.037052 (0.022979) | 0.317487 / 0.258489 (0.058998) | 0.356280 / 0.293841 (0.062439) | 0.031816 / 0.128546 (-0.096730) | 0.008797 / 0.075646 (-0.066849) | 0.289848 / 0.419271 (-0.129424) | 0.055431 / 0.043533 (0.011898) | 0.318881 / 0.255139 (0.063742) | 0.332315 / 0.283200 (0.049116) | 0.025946 / 0.141683 (-0.115737) | 1.472904 / 1.452155 (0.020749) | 1.577973 / 1.492716 (0.085257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239056 / 0.018006 (0.221050) | 0.565406 / 0.000490 (0.564917) | 0.003606 / 0.000200 (0.003406) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029771 / 0.037411 (-0.007640) | 0.085534 / 0.014526 (0.071008) | 0.107008 / 0.176557 (-0.069548) | 0.631583 / 0.737135 (-0.105552) | 0.104210 / 0.296338 (-0.192128) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390675 / 0.215209 (0.175466) | 3.898746 / 2.077655 (1.821091) | 1.933048 / 1.504120 (0.428928) | 1.792162 / 1.541195 (0.250967) | 1.958045 / 1.468490 (0.489555) | 0.488632 / 4.584777 (-4.096144) | 3.696306 / 3.745712 (-0.049406) | 3.454600 / 5.269862 (-1.815262) | 2.176292 / 4.565676 (-2.389385) | 0.057617 / 0.424275 (-0.366658) | 0.007603 / 0.007607 (-0.000004) | 0.467843 / 0.226044 (0.241798) | 4.672928 / 2.268929 (2.404000) | 2.441096 / 55.444624 (-53.003529) | 2.133506 / 6.876477 (-4.742970) | 2.431167 / 2.142072 (0.289095) | 0.588567 / 4.805227 (-4.216661) | 0.136070 / 6.500664 (-6.364594) | 0.063395 / 0.075469 (-0.012074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255003 / 1.841788 (-0.586784) | 20.587656 / 8.074308 (12.513348) | 15.147817 / 10.191392 (4.956425) | 0.152039 / 0.680424 (-0.528384) | 0.018815 / 0.534201 (-0.515386) | 0.397458 / 0.579283 (-0.181825) | 0.431433 / 0.434364 (-0.002931) | 0.487890 / 0.540337 
(-0.052448) | 0.675367 / 1.386936 (-0.711569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007209 / 0.011353 (-0.004144) | 0.004372 / 0.011008 (-0.006636) | 0.066288 / 0.038508 (0.027780) | 0.091776 / 0.023109 (0.068667) | 0.390724 / 0.275898 (0.114826) | 0.434711 / 0.323480 (0.111231) | 0.005790 / 0.007986 (-0.002196) | 0.003562 / 0.004328 (-0.000767) | 0.066155 / 0.004250 (0.061904) | 0.062459 / 0.037052 (0.025406) | 0.406622 / 0.258489 (0.148133) | 0.433976 / 0.293841 (0.140135) | 0.032590 / 0.128546 (-0.095957) | 0.008856 / 0.075646 (-0.066790) | 0.072327 / 0.419271 (-0.346945) | 0.049958 / 0.043533 (0.006426) | 0.400164 / 0.255139 (0.145025) | 0.413339 / 0.283200 (0.130139) | 0.025283 / 0.141683 (-0.116399) | 1.487668 / 1.452155 (0.035514) | 1.537679 / 1.492716 (0.044962) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257814 / 0.018006 (0.239808) | 0.571741 / 0.000490 (0.571251) | 0.000412 / 0.000200 (0.000212) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033893 / 0.037411 (-0.003518) | 0.094533 / 0.014526 (0.080008) | 0.105876 / 0.176557 (-0.070680) | 0.158675 / 0.737135 (-0.578460) | 0.107790 / 0.296338 (-0.188548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425796 / 0.215209 (0.210587) | 4.229159 / 2.077655 (2.151505) | 2.239613 / 1.504120 (0.735493) | 2.073830 / 1.541195 (0.532635) | 2.185508 
/ 1.468490 (0.717018) | 0.483984 / 4.584777 (-4.100793) | 3.645575 / 3.745712 (-0.100137) | 3.454767 / 5.269862 (-1.815095) | 2.141387 / 4.565676 (-2.424290) | 0.057570 / 0.424275 (-0.366705) | 0.007901 / 0.007607 (0.000294) | 0.501160 / 0.226044 (0.275116) | 5.012283 / 2.268929 (2.743355) | 2.701267 / 55.444624 (-52.743357) | 2.465409 / 6.876477 (-4.411068) | 2.696812 / 2.142072 (0.554739) | 0.587160 / 4.805227 (-4.218067) | 0.134175 / 6.500664 (-6.366489) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345632 / 1.841788 (-0.496155) | 21.077279 / 8.074308 (13.002971) | 14.700826 / 10.191392 (4.509434) | 0.156191 / 0.680424 (-0.524233) | 0.018991 / 0.534201 (-0.515210) | 0.400413 / 0.579283 (-0.178870) | 0.420597 / 0.434364 (-0.013767) | 0.486534 / 0.540337 (-0.053804) | 0.646606 / 1.386936 (-0.740330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5bb8fabb135ca8adf47151ad3de050e3a258ccab \"CML watermark\")\n" ]
2023-07-28T09:37:15
2023-07-28T10:16:11
2023-07-28T10:07:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6091", "html_url": "https://github.com/huggingface/datasets/pull/6091", "diff_url": "https://github.com/huggingface/datasets/pull/6091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6091.patch", "merged_at": "2023-07-28T10:07:02" }
Fix https://github.com/huggingface/datasets/issues/6087 (Colab installs 2023.6.0, so we should be good)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6091/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6090/comments
https://api.github.com/repos/huggingface/datasets/issues/6090/events
https://github.com/huggingface/datasets/issues/6090
1,825,865,043
I_kwDODunzps5s1H1T
6,090
FilesIterable skips all the files after a hidden file
{ "login": "dkrivosic", "id": 10785413, "node_id": "MDQ6VXNlcjEwNzg1NDEz", "avatar_url": "https://avatars.githubusercontent.com/u/10785413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkrivosic", "html_url": "https://github.com/dkrivosic", "followers_url": "https://api.github.com/users/dkrivosic/followers", "following_url": "https://api.github.com/users/dkrivosic/following{/other_user}", "gists_url": "https://api.github.com/users/dkrivosic/gists{/gist_id}", "starred_url": "https://api.github.com/users/dkrivosic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkrivosic/subscriptions", "organizations_url": "https://api.github.com/users/dkrivosic/orgs", "repos_url": "https://api.github.com/users/dkrivosic/repos", "events_url": "https://api.github.com/users/dkrivosic/events{/privacy}", "received_events_url": "https://api.github.com/users/dkrivosic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting. We've merged a PR with a fix." ]
2023-07-28T07:25:57
2023-07-28T10:51:14
2023-07-28T10:50:11
NONE
null
null
null
### Describe the bug When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file. The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`. ### Steps to reproduce the bug https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8- ### Expected behavior The script should print all the files except the hidden one. ### Environment info - `datasets` version: 2.14.1 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
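A simplified illustration of the reported behavior, not the actual `datasets` source: with `return`, hitting a hidden file ends the generator and silently drops every later file, while `continue` only skips the hidden file.

```python
import os

def iter_visible_files(paths, buggy=False):
    # Toy stand-in for FilesIterable: yield every non-hidden file path.
    for path in paths:
        if os.path.basename(path).startswith((".", "__")):
            if buggy:
                return  # the reported bug: stops iteration entirely
            continue    # the fix: skip only the hidden file
        yield path

files = ["a.txt", ".hidden", "b.txt"]
print(list(iter_visible_files(files, buggy=True)))   # ['a.txt']
print(list(iter_visible_files(files, buggy=False)))  # ['a.txt', 'b.txt']
```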
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6090/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6089/comments
https://api.github.com/repos/huggingface/datasets/issues/6089/events
https://github.com/huggingface/datasets/issues/6089
1,825,761,476
I_kwDODunzps5s0ujE
6,089
AssertionError: daemonic processes are not allowed to have children
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).", "> We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).\r\n\r\nGreat! Download takes more time than extract, multiple threads can download in parallel, which can speed up a lot." ]
2023-07-28T06:04:00
2023-07-31T02:34:02
null
NONE
null
null
null
### Describe the bug When I load_dataset with num_proc > 0 in a daemon process, I got an error: ```python File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract return self.extract(self.download(url_or_urls)) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download downloaded_path_or_paths = map_nested( ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map return _map_with_multiprocessing_pool( ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool with Pool(num_proc, initargs=initargs, initializer=initializer) as pool: ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__ self._repopulate_pool() ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static w.start() File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start assert not _current_process._config.get('daemon'), ^^^^^^^^^^^^^^^^^ AssertionError: daemonic processes are not allowed to have children ``` Downloading is IO-intensive, so maybe datasets could replace the multiprocessing pool with a multithreading pool when running in a daemon process. ### Steps to reproduce the bug 1. Start a daemon process 2. Run load_dataset with num_proc > 0 ### Expected behavior No error. ### Environment info Python 3.11.4 datasets latest master
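A hedged sketch of the thread-based approach suggested in this issue: threads, unlike child processes, may be created inside a daemonic process, and downloads are IO-bound, so a thread pool avoids the AssertionError. The URLs are placeholders, and this is not the `datasets` implementation.

```python
from concurrent.futures import ThreadPoolExecutor

import fsspec

# Placeholder URLs, used purely for illustration.
urls = ["https://example.com/a.bin", "https://example.com/b.bin"]

def fetch(url):
    # Each download is IO-bound, so Python threads parallelize it well.
    with fsspec.open(url, mode="rb") as f:
        return f.read()

# Threads can be spawned from a daemonic process, unlike child processes,
# so this sidesteps "daemonic processes are not allowed to have children".
with ThreadPoolExecutor(max_workers=4) as pool:
    payloads = list(pool.map(fetch, urls))
```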
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6089/timeline
null
null
false