Dataset schema (column name, type, and value statistics):

| Column | Type | Stats |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 unique value |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-2.96k |
| title | string | lengths 1-268 |
| user | dict | - |
| labels | list | - |
| state | string | 2 unique values |
| locked | bool | 1 class |
| assignee | dict | - |
| assignees | list | - |
| milestone | dict | - |
| comments | sequence | - |
| created_at | int64 | 1,587B-1,632B |
| updated_at | int64 | 1,587B-1,632B |
| closed_at | int64 | 1,587B-1,632B |
| author_association | string | 4 unique values |
| active_lock_reason | null | - |
| pull_request | dict | - |
| body | string | lengths 0-228k |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | - |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/2955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2955/comments
https://api.github.com/repos/huggingface/datasets/issues/2955/events
https://github.com/huggingface/datasets/pull/2955
1,003,999,469
PR_kwDODunzps4sHuRu
2,955
Update legacy Python image for CI tests in Linux
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,632,299,127,000
1,632,302,608,000
null
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2955", "html_url": "https://github.com/huggingface/datasets/pull/2955", "diff_url": "https://github.com/huggingface/datasets/pull/2955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2955.patch" }
Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights:
- Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host.
- Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. Next-gen images will only be rebuilt for security and critical bugs, leading to more stable and deterministic images.

More info: https://circleci.com/docs/2.0/circleci-images
https://api.github.com/repos/huggingface/datasets/issues/2955/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2954/comments
https://api.github.com/repos/huggingface/datasets/issues/2954/events
https://github.com/huggingface/datasets/pull/2954
1,003,904,803
PR_kwDODunzps4sHa8O
2,954
Run tests in parallel
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "There is a speed up in Windows machines:\r\n- From `13m 52s` to `11m 10s`\r\n\r\nIn Linux machines, some workers crash with error message:\r\n```\r\nOSError: [Errno 12] Cannot allocate memory\r\n```", "There is also a speed up in Linux machines:\r\n- From `7m 30s` to `5m 32s`" ]
1,632,294,044,000
1,632,297,373,000
null
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2954", "html_url": "https://github.com/huggingface/datasets/pull/2954", "diff_url": "https://github.com/huggingface/datasets/pull/2954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2954.patch" }
Run CI tests in parallel to speed up the test suite.
https://api.github.com/repos/huggingface/datasets/issues/2954/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2952/comments
https://api.github.com/repos/huggingface/datasets/issues/2952/events
https://github.com/huggingface/datasets/pull/2952
1,002,704,096
PR_kwDODunzps4sDU8S
2,952
Fix missing conda deps
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,632,237,781,000
1,632,285,599,000
1,632,238,244,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2952", "html_url": "https://github.com/huggingface/datasets/pull/2952", "diff_url": "https://github.com/huggingface/datasets/pull/2952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2952.patch" }
`aiohttp` was added as a dependency in #2662 but was missing from the conda build, which causes the 1.12.0 and 1.12.1 conda releases to fail. Fix #2932.
https://api.github.com/repos/huggingface/datasets/issues/2952/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2951/comments
https://api.github.com/repos/huggingface/datasets/issues/2951/events
https://github.com/huggingface/datasets/pull/2951
1,001,267,888
PR_kwDODunzps4r-lGs
2,951
Dummy labels no longer on by default in `to_tf_dataset`
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.", "Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features" ]
1,632,162,419,000
1,632,232,857,000
1,632,219,272,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2951", "html_url": "https://github.com/huggingface/datasets/pull/2951", "diff_url": "https://github.com/huggingface/datasets/pull/2951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2951.patch" }
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
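For context, a minimal sketch of how `to_tf_dataset` is typically called without dummy labels. This assumes a tokenized `dataset` and a `data_collator` (e.g. transformers' `DataCollatorWithPadding`); parameter names reflect the API around this release and may have changed since:

```python
# Sketch only: `dataset` is a tokenized datasets.Dataset and `data_collator`
# is assumed to exist (e.g. transformers.DataCollatorWithPadding).
tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask"],  # model inputs
    label_cols=["labels"],                    # real labels; no dummy padding by default now
    shuffle=True,
    batch_size=8,
    collate_fn=data_collator,
)
```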
https://api.github.com/repos/huggingface/datasets/issues/2951/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2950/comments
https://api.github.com/repos/huggingface/datasets/issues/2950/events
https://github.com/huggingface/datasets/pull/2950
1,001,085,353
PR_kwDODunzps4r-AKu
2,950
Fix fn kwargs in filter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,632,150,626,000
1,632,154,979,000
1,632,151,681,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2950", "html_url": "https://github.com/huggingface/datasets/pull/2950", "diff_url": "https://github.com/huggingface/datasets/pull/2950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2950.patch" }
#2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927. I fixed that and added a test to make sure it doesn't happen again (for either `map` or `filter`). Fix #2927.
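As a reminder of what `fn_kwargs` does here, a small usage sketch (the column name and threshold are made up for illustration):

```python
from datasets import Dataset

ds = Dataset.from_dict({"value": [1, 5, 10]})
# fn_kwargs forwards extra keyword arguments to the filtering function
ds_filtered = ds.filter(
    lambda x, threshold: x["value"] > threshold,
    fn_kwargs={"threshold": 4},
)
print(ds_filtered["value"])  # [5, 10]
```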
https://api.github.com/repos/huggingface/datasets/issues/2950/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2949/comments
https://api.github.com/repos/huggingface/datasets/issues/2949/events
https://github.com/huggingface/datasets/pull/2949
1,001,026,680
PR_kwDODunzps4r90Pt
2,949
Introduce web and wiki config in triviaqa dataset
{ "login": "shirte", "id": 1706443, "node_id": "MDQ6VXNlcjE3MDY0NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1706443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shirte", "html_url": "https://github.com/shirte", "followers_url": "https://api.github.com/users/shirte/followers", "following_url": "https://api.github.com/users/shirte/following{/other_user}", "gists_url": "https://api.github.com/users/shirte/gists{/gist_id}", "starred_url": "https://api.github.com/users/shirte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shirte/subscriptions", "organizations_url": "https://api.github.com/users/shirte/orgs", "repos_url": "https://api.github.com/users/shirte/repos", "events_url": "https://api.github.com/users/shirte/events{/privacy}", "received_events_url": "https://api.github.com/users/shirte/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,632,147,443,000
1,632,262,631,000
null
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2949", "html_url": "https://github.com/huggingface/datasets/pull/2949", "diff_url": "https://github.com/huggingface/datasets/pull/2949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2949.patch" }
The TriviaQA paper suggests that the two subsets (Wikipedia and Web) should be treated differently. There are also different leaderboards for the two sets on CodaLab. For that reason, this PR introduces additional builder configs in the trivia_qa dataset.
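Illustrative only: with separate configs, loading might look like the following. The config names here are assumptions for illustration; check the dataset card for the actual names introduced by this PR:

```python
from datasets import load_dataset

# Hypothetical config names; the real ones are defined in the trivia_qa script
web = load_dataset("trivia_qa", "rc.web")
wiki = load_dataset("trivia_qa", "rc.wikipedia")
```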
https://api.github.com/repos/huggingface/datasets/issues/2949/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2948/comments
https://api.github.com/repos/huggingface/datasets/issues/2948/events
https://github.com/huggingface/datasets/pull/2948
1,000,844,077
PR_kwDODunzps4r9PdV
2,948
Fix minor URL format in scitldr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,632,136,292,000
1,632,143,908,000
1,632,143,908,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2948", "html_url": "https://github.com/huggingface/datasets/pull/2948", "diff_url": "https://github.com/huggingface/datasets/pull/2948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2948.patch" }
While investigating issue #2918, I found these minor format issues in the URLs (when run on a Windows machine).
https://api.github.com/repos/huggingface/datasets/issues/2948/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2947/comments
https://api.github.com/repos/huggingface/datasets/issues/2947/events
https://github.com/huggingface/datasets/pull/2947
1,000,798,338
PR_kwDODunzps4r9GIP
2,947
Don't use old, incompatible cache for the new `filter`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,632,133,139,000
1,632,155,109,000
1,632,145,382,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2947", "html_url": "https://github.com/huggingface/datasets/pull/2947", "diff_url": "https://github.com/huggingface/datasets/pull/2947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2947.patch" }
#2836 changed `Dataset.filter`, and the resulting data stored in the cache are different from and incompatible with those of the previous `filter` implementation. However, the caching mechanism wasn't able to differentiate between the old and the new implementation of `filter` (only the method name was taken into account). This is an issue because anyone who updates `datasets` and re-runs code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result. To fix this, I added the notion of versioning for dataset transforms in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0. This way the new `filter` outputs are now considered different from the old ones from the caching point of view. This should fix #2943. cc @anton-l
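Not the actual `datasets` internals, but a rough sketch of the idea: fold an explicit per-transform version into the cache fingerprint so a new implementation invalidates old cache entries. All names below are illustrative:

```python
import hashlib

# Illustrative sketch of version-aware cache fingerprinting.
TRANSFORM_VERSIONS = {"filter": "2.0.0"}  # bumped when a transform's output format changes

def transform_fingerprint(previous_fingerprint: str, transform_name: str) -> str:
    version = TRANSFORM_VERSIONS.get(transform_name, "1.0.0")
    payload = f"{previous_fingerprint}:{transform_name}:{version}"
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

# Bumping the version changes the hash, so old cached `filter` results are no longer picked up
print(transform_fingerprint("abc123", "filter"))
```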
https://api.github.com/repos/huggingface/datasets/issues/2947/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2946/comments
https://api.github.com/repos/huggingface/datasets/issues/2946/events
https://github.com/huggingface/datasets/pull/2946
1,000,754,824
PR_kwDODunzps4r89f8
2,946
Update meteor score from nltk update
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,632,130,126,000
1,632,130,559,000
1,632,130,559,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2946", "html_url": "https://github.com/huggingface/datasets/pull/2946", "diff_url": "https://github.com/huggingface/datasets/pull/2946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2946.patch" }
It looks like there were issues in NLTK in the way the METEOR score was computed. A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values. I updated the score of the example in the docs accordingly.
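For reference, computing the score looks like this. The example strings are made up, and the exact value depends on the installed NLTK version, which is the point of this PR:

```python
from datasets import load_metric

meteor = load_metric("meteor")
results = meteor.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat is sitting on the mat"],
)
print(results["meteor"])  # value differs across NLTK versions
```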
https://api.github.com/repos/huggingface/datasets/issues/2946/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2945/comments
https://api.github.com/repos/huggingface/datasets/issues/2945/events
https://github.com/huggingface/datasets/issues/2945
1,000,624,883
I_kwDODunzps47pFLz
2,945
Protect master branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Cool, I think we can do both :)", "@lhoestq now the 2 are implemented.\r\n\r\nPlease note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to the remote master branch; and eventually reverted without messing up the repo history)." ]
1,632,120,421,000
1,632,139,287,000
1,632,139,216,000
MEMBER
null
null
After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...

I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
  - Currently, simple merge commits are already disabled
  - I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidental pushing of merge commits)~~
  - ~~This protection would reject direct pushes to the master branch~~
  - ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
  - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch)
  - No need to disable/re-enable this protection on each release

The purpose of this Issue is to open a discussion about this problem and to agree on a solution.
https://api.github.com/repos/huggingface/datasets/issues/2945/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2944/comments
https://api.github.com/repos/huggingface/datasets/issues/2944/events
https://github.com/huggingface/datasets/issues/2944
1,000,544,370
I_kwDODunzps47oxhy
2,944
Add `remove_columns` to `IterableDataset `
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,632,110,460,000
1,632,110,460,000
null
CONTRIBUTOR
null
null
**Is your feature request related to a problem? Please describe.**

```python
from datasets import load_dataset

dataset = load_dataset("c4", "realnewslike", streaming=True, split="train")
dataset = dataset.remove_columns("url")
```
```
AttributeError: 'IterableDataset' object has no attribute 'remove_columns'
```

**Describe the solution you'd like**

It would be nice to have `.remove_columns()` to match the `Dataset` API.

**Describe alternatives you've considered**

This can be done with a single call to `.map()` (see the sketch below); I can try to help add this. 🤗
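A rough sketch of the `.map()`-based workaround alluded to above. This is an assumption: the exact merge semantics of `IterableDataset.map` may vary by version, so dropping the key inside the function is shown explicitly:

```python
from datasets import load_dataset

dataset = load_dataset("c4", "realnewslike", streaming=True, split="train")

def drop_url(example):
    # Return a copy of the example without the unwanted column
    example = dict(example)
    example.pop("url", None)
    return example

dataset = dataset.map(drop_url)
```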
https://api.github.com/repos/huggingface/datasets/issues/2944/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2943/comments
https://api.github.com/repos/huggingface/datasets/issues/2943/events
https://github.com/huggingface/datasets/issues/2943
1,000,355,115
I_kwDODunzps47oDUr
2,943
Backwards compatibility broken for cached datasets that use `.filter()`
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?", "If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.", "Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR", "I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available", "Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !", "Fixed by #2947." ]
1,632,068,197,000
1,632,155,143,000
1,632,155,142,000
CONTRIBUTOR
null
null
## Describe the bug

After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`

Related feature: https://github.com/huggingface/datasets/pull/2836

:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)

## Workaround

Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.

## Steps to reproduce the bug

1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:

```python
from datasets import load_dataset

ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```

3. `pip install datasets==1.12.1` and re-run the code again

## Expected results

Same result as with the previous `datasets` version.

## Actual results

```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
  File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
    ds = ds.filter(lambda x: x["id"] in ids)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
    out = func(self, *args, **kwargs)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
    indices = self.map(
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
    return self._map_single(
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
    out = func(self, *args, **kwargs)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
    return Dataset.from_file(cache_file_name, info=info, split=self.split)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
    return cls(
  File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
    self.info.features = self.info.features.reorder_fields_as(inferred_features)
  File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
    return Features(recursive_reorder(self, other))
  File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
    raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}

Process finished with exit code 1
```

## Environment info

- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
https://api.github.com/repos/huggingface/datasets/issues/2943/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2942/comments
https://api.github.com/repos/huggingface/datasets/issues/2942/events
https://github.com/huggingface/datasets/pull/2942
1,000,309,765
PR_kwDODunzps4r7tY6
2,942
Add SEDE dataset
{ "login": "Hazoom", "id": 13545154, "node_id": "MDQ6VXNlcjEzNTQ1MTU0", "avatar_url": "https://avatars.githubusercontent.com/u/13545154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hazoom", "html_url": "https://github.com/Hazoom", "followers_url": "https://api.github.com/users/Hazoom/followers", "following_url": "https://api.github.com/users/Hazoom/following{/other_user}", "gists_url": "https://api.github.com/users/Hazoom/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hazoom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hazoom/subscriptions", "organizations_url": "https://api.github.com/users/Hazoom/orgs", "repos_url": "https://api.github.com/users/Hazoom/repos", "events_url": "https://api.github.com/users/Hazoom/events{/privacy}", "received_events_url": "https://api.github.com/users/Hazoom/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.", "Hi @Hazoom,\r\n\r\nYou were right: the non-passing test had nothing to do with this PR.\r\n\r\nUnfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n- your commits repeated two times\r\n- and commits which are not yours from the master branch\r\n\r\nIf you would like to clean your pull request, please make:\r\n```\r\ngit reset --hard 587b93a\r\ngit fetch upstream master\r\ngit merge upstream/master\r\ngit push --force origin sede\r\n```", "> Hi @Hazoom,\r\n> \r\n> You were right: the non-passing test had nothing to do with this PR.\r\n> \r\n> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n> \r\n> * your commits repeated two times\r\n> * and commits which are not yours from the master branch\r\n> \r\n> If you would like to clean your pull request, please make:\r\n> \r\n> ```\r\n> git reset --hard 587b93a\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> git push --force origin sede\r\n> ```\r\n\r\nThanks @albertvillanova ", "> Nice! Just one final request before approving your pull request:\r\n> \r\n> As you have updated the \"QuerySetId\" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:\r\n> \r\n> ```\r\n> rm datasets/sede/dataset_infos.json\r\n> datasets-cli test datasets/sede --save_infos --all_configs\r\n> ```\r\n\r\n@albertvillanova Good catch, just fixed it." ]
1,632,057,084,000
1,632,139,643,000
null
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2942", "html_url": "https://github.com/huggingface/datasets/pull/2942", "diff_url": "https://github.com/huggingface/datasets/pull/2942.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2942.patch" }
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions on how to add a dataset and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
https://api.github.com/repos/huggingface/datasets/issues/2942/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2941/comments
https://api.github.com/repos/huggingface/datasets/issues/2941/events
https://github.com/huggingface/datasets/issues/2941
1,000,000,711
I_kwDODunzps47mszH
2,941
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I tried `unshuffled_original_da` and it is also not working" ]
1,631,961,553,000
1,631,982,333,000
null
NONE
null
null
## Describe the bug

Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`.

## Steps to reproduce the bug

```python
>>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}]
```

## Expected results

Loading is successful.

## Actual results

Loading throws the above error.

## Environment info

- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
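A commonly suggested stopgap, assuming the small size mismatch is acceptable, is to skip split verification. This only bypasses the check; it does not fix the underlying split metadata:

```python
import datasets

# Bypasses split size verification; use only if the mismatch is acceptable
dataset = datasets.load_dataset(
    "oscar", "unshuffled_original_ko", ignore_verifications=True
)
```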
https://api.github.com/repos/huggingface/datasets/issues/2941/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2940/comments
https://api.github.com/repos/huggingface/datasets/issues/2940/events
https://github.com/huggingface/datasets/pull/2940
999,680,796
PR_kwDODunzps4r6EUF
2,940
add swedish_medical_ner dataset
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,631,908,985,000
1,632,216,774,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2940", "html_url": "https://github.com/huggingface/datasets/pull/2940", "diff_url": "https://github.com/huggingface/datasets/pull/2940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2940.patch" }
Adds the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021".
https://api.github.com/repos/huggingface/datasets/issues/2940/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2939/comments
https://api.github.com/repos/huggingface/datasets/issues/2939/events
https://github.com/huggingface/datasets/pull/2939
999,639,630
PR_kwDODunzps4r58Gu
2,939
MENYO-20k repo has moved, updating URL
{ "login": "cdleong", "id": 4109253, "node_id": "MDQ6VXNlcjQxMDkyNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdleong", "html_url": "https://github.com/cdleong", "followers_url": "https://api.github.com/users/cdleong/followers", "following_url": "https://api.github.com/users/cdleong/following{/other_user}", "gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdleong/subscriptions", "organizations_url": "https://api.github.com/users/cdleong/orgs", "repos_url": "https://api.github.com/users/cdleong/repos", "events_url": "https://api.github.com/users/cdleong/events{/privacy}", "received_events_url": "https://api.github.com/users/cdleong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,905,314,000
1,632,238,297,000
1,632,238,296,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2939", "html_url": "https://github.com/huggingface/datasets/pull/2939", "diff_url": "https://github.com/huggingface/datasets/pull/2939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2939.patch" }
The dataset repo has moved to https://github.com/uds-lsv/menyo-20k_MT, so this PR updates the URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for.
https://api.github.com/repos/huggingface/datasets/issues/2939/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2938/comments
https://api.github.com/repos/huggingface/datasets/issues/2938/events
https://github.com/huggingface/datasets/pull/2938
999,552,263
PR_kwDODunzps4r5qwa
2,938
Take namespace into account in caching
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py. That way, all the datasets are one-level deep directories", "IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\ncc @Pierrci ", "> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\nout of curiosity: where is it enforced?", "> where is it enforced?\r\n\r\nNowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future", "Thanks for the trick, I'm doing the change :)\r\nWe can use\r\n`~/.cache/huggingface/datasets/username___dataset_name` for the data\r\n`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files" ]
1,631,897,853,000
1,632,242,634,000
null
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2938", "html_url": "https://github.com/huggingface/datasets/pull/2938", "diff_url": "https://github.com/huggingface/datasets/pull/2938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2938.patch" }
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking into account only the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:

<s>`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files</s>

EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files

This PR should fix the issue https://github.com/huggingface/datasets/issues/2842

cc @stas00
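In essence (an illustrative sketch, not the actual `datasets` internals), the namespaced cache path can be derived by flattening the repo id:

```python
import os

def namespaced_cache_dir(base_dir: str, repo_id: str) -> str:
    # "username/dataset_name" -> "username___dataset_name": three underscores
    # are disallowed in repo ids and usernames, so this cannot collide.
    # Illustrative sketch only.
    return os.path.join(base_dir, repo_id.replace("/", "___"))

print(namespaced_cache_dir("~/.cache/huggingface/datasets", "username/dataset_name"))
# ~/.cache/huggingface/datasets/username___dataset_name
```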
https://api.github.com/repos/huggingface/datasets/issues/2938/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2937/comments
https://api.github.com/repos/huggingface/datasets/issues/2937/events
https://github.com/huggingface/datasets/issues/2937
999,548,277
I_kwDODunzps47k-V1
2,937
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
{ "login": "daqieq", "id": 40532020, "node_id": "MDQ6VXNlcjQwNTMyMDIw", "avatar_url": "https://avatars.githubusercontent.com/u/40532020?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daqieq", "html_url": "https://github.com/daqieq", "followers_url": "https://api.github.com/users/daqieq/followers", "following_url": "https://api.github.com/users/daqieq/following{/other_user}", "gists_url": "https://api.github.com/users/daqieq/gists{/gist_id}", "starred_url": "https://api.github.com/users/daqieq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daqieq/subscriptions", "organizations_url": "https://api.github.com/users/daqieq/orgs", "repos_url": "https://api.github.com/users/daqieq/repos", "events_url": "https://api.github.com/users/daqieq/events{/privacy}", "received_events_url": "https://api.github.com/users/daqieq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB/s]\r\nDownloading: 2.71kB [00:00, ?B/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\\r\n1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nDownloading: 334MB [01:17, 4.32MB/s]\r\nDataset wiki_bio downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi\r\ns data.\r\n```\r\n\r\nThis kind of error messages usually happen because:\r\n- Your running Python script hasn't write access to that directory\r\n- You have another program (the File Explorer?) already browsing inside that directory", "Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.\r\n\r\nRunning on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename and then it deletes the directory in the `finally` step. Also the zip file is written and extracted fine in the downloads directory.\r\n\r\nThat leaves another program that might be interfering, and there are plenty of those in my work machine ... (full antivirus, data loss prevention, etc.). So the question remains, why not extend the `try` block to allow catching the error and circle back to the rename after the unknown program is finished doing its 'stuff'. This is the approach that I read about in the linked repo (see my comments above).\r\n\r\nIf it's not high priority, that's fine. However, if someone were to write an PR that solved this issue in our environment in an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue." ]
1,631,897,530,000
1,632,189,875,000
null
NONE
null
null
## Describe the bug

The standard process to download and load the wiki_bio dataset causes a PermissionError on Windows 10 and 11.

## Steps to reproduce the bug

```python
from datasets import load_dataset

ds = load_dataset('wiki_bio')
```

## Expected results

It is expected that the dataset downloads without any errors.

## Actual results

PermissionError, see trace below:

```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
    builder_instance.download_and_prepare(
  File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
    self._save_info()
  File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
    next(self.gen)
  File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
    os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```

By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed. It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, if it helps debug this issue.

## Environment info

- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
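A sketch of the retry approach discussed in the comments (illustrative, not code from `datasets`): retry `os.rename` a few times so a transient lock from antivirus or indexing software can clear.

```python
import os
import time

def rename_with_retry(src: str, dst: str, attempts: int = 5, delay: float = 0.5) -> None:
    # Illustrative sketch: on Windows, antivirus or indexing services can
    # briefly hold a handle on the directory, making os.rename fail once
    # and succeed shortly after.
    for attempt in range(attempts):
        try:
            os.rename(src, dst)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```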
https://api.github.com/repos/huggingface/datasets/issues/2937/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2936/comments
https://api.github.com/repos/huggingface/datasets/issues/2936/events
https://github.com/huggingface/datasets/pull/2936
999,521,647
PR_kwDODunzps4r5knb
2,936
Check that array is not Float as nan != nan
{ "login": "Iwontbecreative", "id": 494951, "node_id": "MDQ6VXNlcjQ5NDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iwontbecreative", "html_url": "https://github.com/Iwontbecreative", "followers_url": "https://api.github.com/users/Iwontbecreative/followers", "following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}", "gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions", "organizations_url": "https://api.github.com/users/Iwontbecreative/orgs", "repos_url": "https://api.github.com/users/Iwontbecreative/repos", "events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}", "received_events_url": "https://api.github.com/users/Iwontbecreative/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,895,401,000
1,632,217,145,000
1,632,217,144,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2936", "html_url": "https://github.com/huggingface/datasets/pull/2936", "diff_url": "https://github.com/huggingface/datasets/pull/2936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2936.patch" }
The exception is meant to catch issues with StructArrays/ListArrays, but it also catches FloatArrays containing nan, since nan != nan. Skip FloatArrays, as we should not raise an exception for them.
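As a small illustration of the pitfall (the guard below is a sketch of the idea, not the exact patch): any element-wise equality check trips on float arrays because nan never compares equal to itself.

```python
import pyarrow as pa

print(float("nan") == float("nan"))  # False: nan never equals itself

arr = pa.array([1.0, float("nan")])
# Sketch of the guard: skip the consistency check for floating-point
# arrays instead of raising on them.
if pa.types.is_floating(arr.type):
    pass  # FloatArrays are fine, nothing to raise
```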
https://api.github.com/repos/huggingface/datasets/issues/2936/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2935/comments
https://api.github.com/repos/huggingface/datasets/issues/2935/events
https://github.com/huggingface/datasets/pull/2935
999,518,469
PR_kwDODunzps4r5j8B
2,935
Add Jigsaw unintended Bias
{ "login": "Iwontbecreative", "id": 494951, "node_id": "MDQ6VXNlcjQ5NDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iwontbecreative", "html_url": "https://github.com/Iwontbecreative", "followers_url": "https://api.github.com/users/Iwontbecreative/followers", "following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}", "gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions", "organizations_url": "https://api.github.com/users/Iwontbecreative/orgs", "repos_url": "https://api.github.com/users/Iwontbecreative/repos", "events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}", "received_events_url": "https://api.github.com/users/Iwontbecreative/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix", "@lhoestq implemented your changes, I think this might be ready for another look." ]
1,631,895,151,000
1,632,269,548,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2935", "html_url": "https://github.com/huggingface/datasets/pull/2935", "diff_url": "https://github.com/huggingface/datasets/pull/2935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2935.patch" }
Hi, here's a first attempt at this dataset. It would be great if it could be merged relatively quickly, as it is needed for BigScience-related work. This dataset requires a manual download, and I had some trouble generating dummy_data in this setting, so feedback there is welcome.
https://api.github.com/repos/huggingface/datasets/issues/2935/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2934/comments
https://api.github.com/repos/huggingface/datasets/issues/2934/events
https://github.com/huggingface/datasets/issues/2934
999,477,413
I_kwDODunzps47ktCl
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon!", "Thanks a lot for investigating !" ]
1,631,892,413,000
1,632,155,004,000
null
MEMBER
null
null
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one reference left" ``` This causes issues because the table holds a reference to an open arrow file that should be closed. So on Windows it's not possible to delete or move the arrow file afterwards. Moreover, the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this. cc @Rocketknight1
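A self-contained illustration of the lingering-reference pattern described in the comments (all names are made up; this is not the `datasets` code): a closure that captures the table directly keeps it alive, while capturing a weak proxy does not.

```python
import gc
import weakref

class Table:
    """Stand-in for the arrow table holding an open file handle."""

def make_generator(table):
    def gen():  # capturing `table` keeps it alive with the generator
        yield table
    return gen

def make_generator_weak(table):
    proxy = weakref.proxy(table)  # weak capture: table can be collected
    def gen():
        yield proxy
    return gen

t = Table()
ref = weakref.ref(t)
gen = make_generator_weak(t)
del t
gc.collect()
assert ref() is None  # the weakly captured table was freed
```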
https://api.github.com/repos/huggingface/datasets/issues/2934/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2933/comments
https://api.github.com/repos/huggingface/datasets/issues/2933/events
https://github.com/huggingface/datasets/pull/2933
999,392,566
PR_kwDODunzps4r5MHs
2,933
Replace script_version with revision
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm also fine with the removal in 1.15" ]
1,631,887,479,000
1,632,131,530,000
1,632,131,530,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2933", "html_url": "https://github.com/huggingface/datasets/pull/2933", "diff_url": "https://github.com/huggingface/datasets/pull/2933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2933.patch" }
As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are also aligned with: - Transformers: `AutoTokenizer.from_pretrained(..., revision=...)` - Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)`
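Assuming the rename lands as described, usage would mirror the other libraries, for example:

```python
from datasets import load_dataset

# Pin the dataset to a specific Hub revision (tag, branch or commit sha),
# using the same parameter name as Transformers and the Hub API.
dataset = load_dataset("squad", revision="main")
```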
https://api.github.com/repos/huggingface/datasets/issues/2933/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2932/comments
https://api.github.com/repos/huggingface/datasets/issues/2932/events
https://github.com/huggingface/datasets/issues/2932
999,317,750
I_kwDODunzps47kGD2
2,932
Conda build fails
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Why 1.9 ?\r\n\r\nhttps://anaconda.org/HuggingFace/datasets currently says 1.11", "Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 " ]
1,631,882,962,000
1,632,238,270,000
1,632,238,270,000
MEMBER
null
null
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
https://api.github.com/repos/huggingface/datasets/issues/2932/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2931/comments
https://api.github.com/repos/huggingface/datasets/issues/2931/events
https://github.com/huggingface/datasets/pull/2931
998,326,359
PR_kwDODunzps4r1-JH
2,931
Fix bug in to_tf_dataset
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!" ]
1,631,804,883,000
1,631,811,698,000
1,631,811,697,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2931", "html_url": "https://github.com/huggingface/datasets/pull/2931", "diff_url": "https://github.com/huggingface/datasets/pull/2931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2931.patch" }
Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`.
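The difference, in short, is that `with_format()` returns a new formatted dataset and leaves the original untouched. A quick illustration:

```python
from datasets import Dataset

d = Dataset.from_dict({"a": [1, 2, 3]})
formatted = d.with_format("numpy")  # new object with the format applied

print(d.format["type"])          # None: the original is unchanged
print(formatted.format["type"])  # 'numpy'
```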
https://api.github.com/repos/huggingface/datasets/issues/2931/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2930/comments
https://api.github.com/repos/huggingface/datasets/issues/2930/events
https://github.com/huggingface/datasets/issues/2930
998,154,311
I_kwDODunzps47fqBH
2,930
Mutable columns argument breaks set_format
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Pushed a fix to my branch #2731 " ]
1,631,795,242,000
1,631,800,253,000
1,631,800,253,000
CONTRIBUTOR
null
null
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] dataset.set_format("python", columns=column_list) column_list[1] = "foo" # Change the list after we call `set_format` dataset['train'][:4].keys() ``` ## Expected results ```python dict_keys(['idx', 'label']) ``` ## Actual results ```python dict_keys(['idx']) ```
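The usual remedy for this class of bug is a defensive copy of the argument. A minimal sketch (the `Formatter` class is hypothetical, not the actual `datasets` implementation):

```python
class Formatter:
    def set_format(self, columns=None):
        # Copy the caller's list so later mutations don't leak in.
        self._columns = list(columns) if columns is not None else None

f = Formatter()
cols = ["idx", "label"]
f.set_format(columns=cols)
cols[1] = "foo"  # mutate the original list after the call
assert f._columns == ["idx", "label"]  # stored columns are unaffected
```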
https://api.github.com/repos/huggingface/datasets/issues/2930/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2929/comments
https://api.github.com/repos/huggingface/datasets/issues/2929/events
https://github.com/huggingface/datasets/pull/2929
997,960,024
PR_kwDODunzps4r015C
2,929
Add regression test for null Sequence
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,782,713,000
1,631,867,039,000
1,631,867,039,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2929", "html_url": "https://github.com/huggingface/datasets/pull/2929", "diff_url": "https://github.com/huggingface/datasets/pull/2929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2929.patch" }
Relates to #2892 and #2900.
https://api.github.com/repos/huggingface/datasets/issues/2929/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2928/comments
https://api.github.com/repos/huggingface/datasets/issues/2928/events
https://github.com/huggingface/datasets/pull/2928
997,941,506
PR_kwDODunzps4r0yUb
2,928
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,781,560,000
1,631,795,734,000
1,631,795,734,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2928", "html_url": "https://github.com/huggingface/datasets/pull/2928", "diff_url": "https://github.com/huggingface/datasets/pull/2928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2928.patch" }
Update BibTeX entry.
https://api.github.com/repos/huggingface/datasets/issues/2928/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2927/comments
https://api.github.com/repos/huggingface/datasets/issues/2927/events
https://github.com/huggingface/datasets/issues/2927
997,654,680
I_kwDODunzps47dwCY
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, I'm looking into it :)", "Fixed by #2950." ]
1,631,754,842,000
1,632,155,002,000
1,632,155,001,000
NONE
null
null
## Describe the bug Upgrading to 1.12 caused `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```pythondef filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[str], tokenizer: PreTrainedTokenizerFast, ) -> bool: """Get the good rows""" encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer) ex["encoding"] = encoding for relation in ex["relations"]: if not is_valid_relation(relation, valid_rel_labels): return False for span in ex["spans"]: if not is_valid_span(span, valid_ner_labels, encoding): return False return True def get_dataset(): loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py") ds = load_dataset( loader_path, name="prodigy-dataset", data_files=sorted(file_paths), cache_dir=cache_dir, )["train"] valid_ner_labels = set(vocab.ner_category) valid_relations = set(vocab.relation_types.keys()) ds = ds.filter( filter_good_rows, fn_kwargs=dict( valid_rel_labels=valid_relations, valid_ner_labels=valid_ner_labels, tokenizer=vocab.tokenizer, ), keep_in_memory=True, num_proc=num_proc, ) ``` `ds` is a `DatasetDict` produced by a jsonl dataset. This runs fine on 1.11 but fails on 1.12 **Stack Trace** ## Expected results I expect 1.12 datasets filter to filter the dataset without raising as it does on 1.11 ## Actual results ``` tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl ds = ds.filter( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter indices = self.map( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map return self._map_single( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single batch = apply_function_on_filtered_inputs( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...} indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0 def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0): """Utility to apply the function on a selection of columns.""" nonlocal update_data fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] if offset == 0: effective_indices = indices else: effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset processed_inputs = ( > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) ) E TypeError: 
get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels' ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Mac - Python version: 3.8.9 - PyArrow version: pyarrow==5.0.0
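The failure mode boils down to keyword arguments intended for the user's predicate being delivered to an internal wrapper that does not accept them. A stripped-down sketch of the pattern (all names hypothetical, not the actual `datasets` internals):

```python
def user_predicate(example, valid_rel_labels=None):
    return bool(valid_rel_labels)

def internal_wrapper(example):  # accepts no extra kwargs: the bug
    return user_predicate(example)

def run_filter(function, example, **fn_kwargs):
    # fn_kwargs are meant for the user's predicate, but here they are
    # delivered to whatever callable `function` happens to be.
    return function(example, **fn_kwargs)

run_filter(user_predicate, {}, valid_rel_labels={"rel"})  # fine
# run_filter(internal_wrapper, {}, valid_rel_labels={"rel"})
# -> TypeError: internal_wrapper() got an unexpected keyword argument
```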
https://api.github.com/repos/huggingface/datasets/issues/2927/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2926/comments
https://api.github.com/repos/huggingface/datasets/issues/2926/events
https://github.com/huggingface/datasets/issues/2926
997,463,277
I_kwDODunzps47dBTt
2,926
Error when downloading datasets to non-traditional cache directories
{ "login": "dar-tau", "id": 45885627, "node_id": "MDQ6VXNlcjQ1ODg1NjI3", "avatar_url": "https://avatars.githubusercontent.com/u/45885627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dar-tau", "html_url": "https://github.com/dar-tau", "followers_url": "https://api.github.com/users/dar-tau/followers", "following_url": "https://api.github.com/users/dar-tau/following{/other_user}", "gists_url": "https://api.github.com/users/dar-tau/gists{/gist_id}", "starred_url": "https://api.github.com/users/dar-tau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dar-tau/subscriptions", "organizations_url": "https://api.github.com/users/dar-tau/orgs", "repos_url": "https://api.github.com/users/dar-tau/repos", "events_url": "https://api.github.com/users/dar-tau/events{/privacy}", "received_events_url": "https://api.github.com/users/dar-tau/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,631,735,986,000
1,631,736,135,000
null
NONE
null
null
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading the IMDB dataset ## Actual results ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.1.2 - Platform: Ubuntu - Python version: 3.8 ## Extra notes Stranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction: - With `cache_dir="/path/to/netapp/.cache"` the same thing happens. - However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work - On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it doesn't work anymore. While I could only test it with a NetApp device, it might affect other mounted filesystems as well. Thanks :)
https://api.github.com/repos/huggingface/datasets/issues/2926/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2925/comments
https://api.github.com/repos/huggingface/datasets/issues/2925/events
https://github.com/huggingface/datasets/pull/2925
997,407,034
PR_kwDODunzps4rzJ9s
2,925
Add tutorial for no-code dataset upload
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```", "Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet" ]
1,631,732,082,000
1,632,248,034,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2925", "html_url": "https://github.com/huggingface/datasets/pull/2925", "diff_url": "https://github.com/huggingface/datasets/pull/2925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2925.patch" }
This PR adds a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, and introduces the online tagging tool for creating tags and the Dataset card template for getting a head start on filling it out. This tutorial should make it easier for beginners to upload a dataset without using the terminal or knowing Git.
https://api.github.com/repos/huggingface/datasets/issues/2925/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2924/comments
https://api.github.com/repos/huggingface/datasets/issues/2924/events
https://github.com/huggingface/datasets/issues/2924
997,378,113
I_kwDODunzps47cshB
2,924
"File name too long" error for file locks
{ "login": "gar1t", "id": 184949, "node_id": "MDQ6VXNlcjE4NDk0OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/184949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gar1t", "html_url": "https://github.com/gar1t", "followers_url": "https://api.github.com/users/gar1t/followers", "following_url": "https://api.github.com/users/gar1t/following{/other_user}", "gists_url": "https://api.github.com/users/gar1t/gists{/gist_id}", "starred_url": "https://api.github.com/users/gar1t/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gar1t/subscriptions", "organizations_url": "https://api.github.com/users/gar1t/orgs", "repos_url": "https://api.github.com/users/gar1t/repos", "events_url": "https://api.github.com/users/gar1t/events{/privacy}", "received_events_url": "https://api.github.com/users/gar1t/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135", "Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info." ]
1,631,729,810,000
1,632,232,993,000
null
NONE
null
null
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Steps to reproduce the bug Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4): ```python from datasets import load_dataset load_dataset("gar1t/test") ``` ## Expected results Expect the function to return without an error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare self._save_info() File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info with FileLock(lock_path): File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 5.0.0
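One common mitigation for over-long lock names is to hash the tail of the name so the path always fits the filesystem's component limit. A sketch of that idea (the helper is hypothetical, not the `datasets` implementation; note also that encrypted home directories such as eCryptfs typically cap filenames well below 255 characters, which could explain a 154-character name failing):

```python
import hashlib

def safe_lock_name(name, max_len=255):
    # Hypothetical helper: keep short names as-is; otherwise truncate
    # and append a short hash so distinct long names stay distinct.
    if len(name) <= max_len:
        return name
    digest = hashlib.sha256(name.encode()).hexdigest()[:16]
    return name[: max_len - len(digest) - 1] + "_" + digest
```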
https://api.github.com/repos/huggingface/datasets/issues/2924/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2923/comments
https://api.github.com/repos/huggingface/datasets/issues/2923/events
https://github.com/huggingface/datasets/issues/2923
997,351,590
I_kwDODunzps47cmCm
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,631,727,878,000
1,631,727,878,000
null
CONTRIBUTOR
null
null
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an error load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True) ## does not raise an error ``` ## Expected results Both calls should raise the same error ## Actual results Call with streaming=False: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5825.42it/s] Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15923.71it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 3346.88it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split writer.write_table(table) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "splits" does not exist in table schema' ``` Call with `streaming=False`: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6000.43it/s] Using custom data configuration 
autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 46916.15it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 148734.18it/s] ``` ## Environment info - `datasets` version: 1.12.1.dev0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
https://api.github.com/repos/huggingface/datasets/issues/2923/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2922/comments
https://api.github.com/repos/huggingface/datasets/issues/2922/events
https://github.com/huggingface/datasets/pull/2922
997,332,662
PR_kwDODunzps4ry6-s
2,922
Fix conversion of multidim arrays in list to arrow
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,726,496,000
1,631,726,572,000
1,631,726,505,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2922", "html_url": "https://github.com/huggingface/datasets/pull/2922", "diff_url": "https://github.com/huggingface/datasets/pull/2922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2922.patch" }
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to Python lists before instantiating arrow arrays to work around this limitation. However, in #2361 we started to keep numpy arrays in order to keep their dtypes. It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays. In this PR I added two strategies: - one that takes a list of multi-dim numpy arrays and returns an arrow array in an optimized way (more common case) - one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed. Fix https://github.com/huggingface/datasets/issues/2921
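A small illustration of the underlying constraint (not the PR's optimized code path): a list of 2-d numpy arrays has to be expressed as nested lists before Arrow can ingest it.

```python
import numpy as np
import pyarrow as pa

batch = [np.zeros((2, 2)), np.ones((2, 2))]

# pa.array(batch) raises "Can only convert 1-dimensional array values",
# but the same data expressed as nested Python lists converts cleanly.
nested = [arr.tolist() for arr in batch]
pa_arr = pa.array(nested)
print(pa_arr.type)  # list<item: list<item: double>>
```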
https://api.github.com/repos/huggingface/datasets/issues/2922/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2921/comments
https://api.github.com/repos/huggingface/datasets/issues/2921/events
https://github.com/huggingface/datasets/issues/2921
997,325,424
I_kwDODunzps47cfpw
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,725,931,000
1,631,726,505,000
1,631,726,505,000
MEMBER
null
null
This error was introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <module> d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch") File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict pa_table = InMemoryTable.from_pydict(mapping=mapping) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict return cls(pa.Table.from_pydict(*args, **kwargs)) File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 223, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__ out = pa.array(self.data, type=type) File "pyarrow/array.pxi", line 306, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values ```
https://api.github.com/repos/huggingface/datasets/issues/2921/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2920/comments
https://api.github.com/repos/huggingface/datasets/issues/2920/events
https://github.com/huggingface/datasets/pull/2920
997,323,014
PR_kwDODunzps4ry4_u
2,920
Fix unwanted tqdm bar when accessing examples
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,725,751,000
1,631,726,304,000
1,631,726,304,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2920", "html_url": "https://github.com/huggingface/datasets/pull/2920", "diff_url": "https://github.com/huggingface/datasets/pull/2920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2920.patch" }
A change in #2814 added unwanted progress bars in `map_nested`. Now they're disabled by default. Fix #2919
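For context, the underlying mechanism is tqdm's `disable` flag; here is a generic illustration (not the exact `map_nested` signature) of how a progress bar can be off by default but re-enabled on demand:

```python
from tqdm.auto import tqdm

def map_nested(function, iterable, disable_tqdm=True):
    # with disable=True, tqdm yields items without printing a bar
    return [function(x) for x in tqdm(iterable, disable=disable_tqdm)]

map_nested(lambda x: x + 1, [1, 2, 3])                      # silent by default
map_nested(lambda x: x + 1, [1, 2, 3], disable_tqdm=False)  # shows a bar
```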
https://api.github.com/repos/huggingface/datasets/issues/2920/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2919/comments
https://api.github.com/repos/huggingface/datasets/issues/2919/events
https://github.com/huggingface/datasets/issues/2919
997,127,487
I_kwDODunzps47bvU_
2,919
Unwanted progress bars when accessing examples
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "doing a patch release now :)" ]
1,631,714,710,000
1,631,726,509,000
1,631,726,303,000
MEMBER
null
null
When accessing examples from a dataset formatted for pytorch, progress bars appear: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") In [3]: d[0] 100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s] Out[3]: {'a': tensor(0)} ``` This is because the pytorch formatter calls `map_nested`, which uses progress bars. cc @sgugger
https://api.github.com/repos/huggingface/datasets/issues/2919/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2918/comments
https://api.github.com/repos/huggingface/datasets/issues/2918/events
https://github.com/huggingface/datasets/issues/2918
997,063,347
I_kwDODunzps47bfqz
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...", "Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```", "Thanks for investigating @albertvillanova ! 🤗 " ]
1,631,711,167,000
1,632,127,898,000
null
CONTRIBUTOR
null
null
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_dataset iter_dset = iter( load_dataset("scitldr", name="FullText", split="test", streaming=True) ) next(iter_dset) ``` ## Expected results Returns the first sample of the dataset ## Actual results Calling `__next__` crashes with the following Traceback: ```python ----> 1 next(dset_iter) ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 339 340 def __iter__(self): --> 341 for key, example in self._iter(): 342 if self.features: 343 # we encode the example for ClassLabel feature types for example ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self) 336 else: 337 ex_iterable = self._ex_iterable --> 338 yield from ex_iterable 339 340 def __iter__(self): ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split) 162 163 with open(filepath, encoding="utf-8") as f: --> 164 for id_, row in enumerate(f): 165 data = json.loads(row) 166 if self.config.name == "AIC": ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length) 496 else: 497 length = min(self.size - self.loc, length) --> 498 return super().read(length) 499 500 async def async_fetch_all(self): ~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length) 1481 # don't even bother calling fetch 1482 return b"" -> 1483 out = self.cache._fetch(self.loc, self.loc + length) 1484 self.loc += len(out) 1485 return out ~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end) 378 elif start < self.start: 379 if self.end - end > self.blocksize: --> 380 self.cache = self.fetcher(start, bend) 381 self.start = start 382 else: ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs) 86 def wrapper(*args, **kwargs): 87 self = obj or args[0] ---> 88 return sync(self.loop, func, *args, **kwargs) 89 90 return wrapper ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs) 67 raise FSTimeoutError 68 if isinstance(result[0], BaseException): ---> 69 raise result[0] 70 return result[0] 71 ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout) 23 coro = asyncio.wait_for(coro, timeout=timeout) 24 try: ---> 25 result[0] = await coro 26 except Exception as ex: 27 result[0] = ex ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end) 538 if r.status == 206: 539 # partial content, as expected --> 540 out = await r.read() 541 elif "Content-Length" in r.headers: 542 cl = int(r.headers["Content-Length"]) ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self) 1030 if self._body is None: 1031 try: -> 1032 self._body = await self.content.read() 1033 for trace in self._traces: 1034 await trace.send_response_chunk_received( 
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n) 342 async def read(self, n: int = -1) -> bytes: 343 if self._exception is not None: --> 344 raise self._exception 345 346 # migration problem; with DataQueue you have to catch ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyArrow version: 2.0.0 - aiohttp version: 3.7.4.post0
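Until this is fixed upstream in `fsspec`, one possible workaround (an assumption, not an official fix) is to ask the server for an uncompressed response, so the ranged (206) replies don't need gzip decoding:

```python
import fsspec
import json

url = "https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl"

# client_kwargs is forwarded to aiohttp.ClientSession by the HTTP filesystem
with fsspec.open(url, client_kwargs={"headers": {"Accept-Encoding": "identity"}}) as f:
    for row in f:
        data = json.loads(row)
        break
```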
https://api.github.com/repos/huggingface/datasets/issues/2918/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2917/comments
https://api.github.com/repos/huggingface/datasets/issues/2917/events
https://github.com/huggingface/datasets/issues/2917
997,041,658
I_kwDODunzps47baX6
2,917
windows download abnormal
{ "login": "wei1826676931", "id": 52347799, "node_id": "MDQ6VXNlcjUyMzQ3Nzk5", "avatar_url": "https://avatars.githubusercontent.com/u/52347799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wei1826676931", "html_url": "https://github.com/wei1826676931", "followers_url": "https://api.github.com/users/wei1826676931/followers", "following_url": "https://api.github.com/users/wei1826676931/following{/other_user}", "gists_url": "https://api.github.com/users/wei1826676931/gists{/gist_id}", "starred_url": "https://api.github.com/users/wei1826676931/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wei1826676931/subscriptions", "organizations_url": "https://api.github.com/users/wei1826676931/orgs", "repos_url": "https://api.github.com/users/wei1826676931/repos", "events_url": "https://api.github.com/users/wei1826676931/events{/privacy}", "received_events_url": "https://api.github.com/users/wei1826676931/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used", "It is indeed an agency problem, thank you very, very much", "Let me know if you have other questions :)\r\n\r\nClosing this issue now" ]
1,631,709,935,000
1,631,812,668,000
1,631,812,668,000
NONE
null
null
## Describe the bug The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. When I tried again on Linux, it downloaded normally. Why? ## Steps to reproduce the bug Python 3.7 + Windows: ![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png) ## Expected results The script can be downloaded normally. ## Actual results It can't. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows - Python version: 3.7 - PyArrow version:
https://api.github.com/repos/huggingface/datasets/issues/2917/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2916/comments
https://api.github.com/repos/huggingface/datasets/issues/2916/events
https://github.com/huggingface/datasets/pull/2916
997,003,661
PR_kwDODunzps4rx5ua
2,916
Add OpenAI's pass@k code evaluation metric
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)\r\nYou can test to spawn several processes where each process would load the metric. Then in each process you add some references/predictions to the metric. Finally you call compute() in each process and on process 0 it should return the result on all the references/predictions\r\n\r\nLet me know if you have questions or if I can help", "Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages." ]
1,631,707,543,000
1,631,951,964,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2916", "html_url": "https://github.com/huggingface/datasets/pull/2916", "diff_url": "https://github.com/huggingface/datasets/pull/2916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2916.patch" }
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention. The addition of this metric should enable evaluation against the code evaluation datasets added in #2897 and #2893. A few open questions: - The implementation makes heavy use of multiprocessing, which this PR does not touch. Does this conflict with the multiprocessing natively integrated in `datasets`? - This metric executes generated Python code and as such poses the danger of executing malicious code. OpenAI addresses this issue by 1) commenting out the `exec` call in the code so the user has to actively uncomment it and read the warning, and 2) suggesting the use of a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message? - Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unit tests that check the solution. While reference solutions are also available, they are not used. Should the naming be adapted?
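For reference, the unbiased pass@k estimator from the Codex paper that the harness implements can be written in a few lines (`n` generated samples per problem, `c` of which pass the unit tests):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """pass@k = 1 - C(n - c, k) / C(n, k), computed in a numerically stable way."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=5, k=1))  # 0.25, i.e. c / n for k=1
```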
https://api.github.com/repos/huggingface/datasets/issues/2916/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2915/comments
https://api.github.com/repos/huggingface/datasets/issues/2915/events
https://github.com/huggingface/datasets/pull/2915
996,870,071
PR_kwDODunzps4rxfWb
2,915
Fix fsspec AbstractFileSystem access
{ "login": "pierre-godard", "id": 3969168, "node_id": "MDQ6VXNlcjM5NjkxNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierre-godard", "html_url": "https://github.com/pierre-godard", "followers_url": "https://api.github.com/users/pierre-godard/followers", "following_url": "https://api.github.com/users/pierre-godard/following{/other_user}", "gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions", "organizations_url": "https://api.github.com/users/pierre-godard/orgs", "repos_url": "https://api.github.com/users/pierre-godard/repos", "events_url": "https://api.github.com/users/pierre-godard/events{/privacy}", "received_events_url": "https://api.github.com/users/pierre-godard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,698,760,000
1,631,705,724,000
1,631,705,724,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2915", "html_url": "https://github.com/huggingface/datasets/pull/2915", "diff_url": "https://github.com/huggingface/datasets/pull/2915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2915.patch" }
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
https://api.github.com/repos/huggingface/datasets/issues/2915/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2914/comments
https://api.github.com/repos/huggingface/datasets/issues/2914/events
https://github.com/huggingface/datasets/issues/2914
996,770,168
I_kwDODunzps47aYF4
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
{ "login": "pierre-godard", "id": 3969168, "node_id": "MDQ6VXNlcjM5NjkxNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierre-godard", "html_url": "https://github.com/pierre-godard", "followers_url": "https://api.github.com/users/pierre-godard/followers", "following_url": "https://api.github.com/users/pierre-godard/following{/other_user}", "gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions", "organizations_url": "https://api.github.com/users/pierre-godard/orgs", "repos_url": "https://api.github.com/users/pierre-godard/repos", "events_url": "https://api.github.com/users/pierre-godard/events{/privacy}", "received_events_url": "https://api.github.com/users/pierre-godard/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Closed by #2915." ]
1,631,692,446,000
1,631,724,557,000
1,631,724,556,000
CONTRIBUTOR
null
null
## Describe the bug In one of my projects, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop, as there are entrypoints defined; see the loop in question [here](https://github.com/intake/filesystem_spec/blob/0589358d8a029ed6b60d031018f52be2eb721291/fsspec/__init__.py#L55)). As a result, `fsspec.spec`, which previously referred to the `spec` submodule, now refers to that `spec` variable. This makes the import of `datasets` fail, as it uses that `fsspec.spec`. ## Steps to reproduce the bug I could reproduce the bug with a dummy poetry project. Here is the pyproject.toml: ```toml [tool.poetry] name = "debug-datasets" version = "0.1.0" description = "" authors = ["Pierre Godard"] [tool.poetry.dependencies] python = "^3.8" datasets = "^1.11.0" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.poetry.plugins."fsspec.specs"] "file2" = "fsspec.implementations.local.LocalFileSystem" ``` The only other file is an empty `debug_datasets/__init__.py`. The overall structure of the project is as follows: ``` . ├── pyproject.toml └── debug_datasets └── __init__.py ``` Then, within the project folder, run: ``` poetry install poetry run python ``` And in the python interpreter, try to import `datasets`: ``` import datasets ``` ## Expected results The import should run successfully. ## Actual results Here is the trace of the error I get: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .filesystems import extract_path_from_uri, is_remote_filesystem File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 30, in <module> def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem' ``` ## Suggested fix `datasets/filesystems/__init__.py`, line 30, replace: ``` def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: ``` by: ``` def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool: ``` I will come up with a PR soon if this effectively solves the issue. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: WSL2 (Ubuntu 20.04.1 LTS) - Python version: 3.8.5 - PyArrow version: 5.0.0 - `fsspec` version: 2021.8.1
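The shadowing can be illustrated without any entrypoint at all (a toy demonstration of the mechanism, not the fsspec code itself):

```python
import fsspec

print(type(fsspec.spec))    # <class 'module'> -- the submodule
fsspec.spec = "entrypoint"  # what the entrypoint loop effectively does
print(type(fsspec.spec))    # <class 'str'> -- fsspec.spec.AbstractFileSystem now fails
```

Since `AbstractFileSystem` is re-exported at the package level, annotating with `fsspec.AbstractFileSystem` sidesteps the shadowed submodule attribute.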
https://api.github.com/repos/huggingface/datasets/issues/2914/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2913/comments
https://api.github.com/repos/huggingface/datasets/issues/2913/events
https://github.com/huggingface/datasets/issues/2913
996,436,368
I_kwDODunzps47ZGmQ
2,913
timit_asr dataset only includes one text phrase
{ "login": "margotwagner", "id": 39107794, "node_id": "MDQ6VXNlcjM5MTA3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/39107794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/margotwagner", "html_url": "https://github.com/margotwagner", "followers_url": "https://api.github.com/users/margotwagner/followers", "following_url": "https://api.github.com/users/margotwagner/following{/other_user}", "gists_url": "https://api.github.com/users/margotwagner/gists{/gist_id}", "starred_url": "https://api.github.com/users/margotwagner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/margotwagner/subscriptions", "organizations_url": "https://api.github.com/users/margotwagner/orgs", "repos_url": "https://api.github.com/users/margotwagner/repos", "events_url": "https://api.github.com/users/margotwagner/events{/privacy}", "received_events_url": "https://api.github.com/users/margotwagner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)", "Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `datasets` version: 1.4.1" ]
1,631,653,567,000
1,631,693,119,000
1,631,693,118,000
NONE
null
null
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english 1. Install the dataset and other packages ```python !pip install datasets>=1.5.0 !pip install transformers==4.4.0 !pip install soundfile !pip install jiwer ``` 2. Load the dataset ```python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") ``` 3. Remove columns that we don't want ```python timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]) ``` 4. Write a short function to display some random samples of the dataset. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit["train"].remove_columns(["file"])) ``` ## Expected results 10 random different transcription phrases. ## Actual results 10 of the same transcription phrase "Would such an act of refusal be useful?" ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.4.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: not listed
https://api.github.com/repos/huggingface/datasets/issues/2913/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2912/comments
https://api.github.com/repos/huggingface/datasets/issues/2912/events
https://github.com/huggingface/datasets/pull/2912
996,256,005
PR_kwDODunzps4rvhgp
2,912
Update link to Blog in docs footer
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,640,194,000
1,631,692,763,000
1,631,692,763,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2912", "html_url": "https://github.com/huggingface/datasets/pull/2912", "diff_url": "https://github.com/huggingface/datasets/pull/2912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2912.patch" }
Update link.
https://api.github.com/repos/huggingface/datasets/issues/2912/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2911/comments
https://api.github.com/repos/huggingface/datasets/issues/2911/events
https://github.com/huggingface/datasets/pull/2911
996,202,598
PR_kwDODunzps4rvW7Y
2,911
Fix exception chaining
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,636,369,000
1,631,804,684,000
1,631,804,684,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2911", "html_url": "https://github.com/huggingface/datasets/pull/2911", "diff_url": "https://github.com/huggingface/datasets/pull/2911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2911.patch" }
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
https://api.github.com/repos/huggingface/datasets/issues/2911/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2910/comments
https://api.github.com/repos/huggingface/datasets/issues/2910/events
https://github.com/huggingface/datasets/pull/2910
996,149,632
PR_kwDODunzps4rvL9N
2,910
feat: 🎸 pass additional arguments to get private configs + info
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Included in https://github.com/huggingface/datasets/pull/2906" ]
1,631,633,059,000
1,631,722,749,000
1,631,722,746,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2910", "html_url": "https://github.com/huggingface/datasets/pull/2910", "diff_url": "https://github.com/huggingface/datasets/pull/2910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2910.patch" }
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
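Usage would look roughly like this (a sketch based on the PR description; `my-org/my-private-dataset` is a placeholder):

```python
from datasets import get_dataset_config_names, get_dataset_infos

# use_auth_token=True reads the token saved by `huggingface-cli login`
configs = get_dataset_config_names("my-org/my-private-dataset", use_auth_token=True)
infos = get_dataset_infos("my-org/my-private-dataset", use_auth_token=True)
```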
https://api.github.com/repos/huggingface/datasets/issues/2910/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2909/comments
https://api.github.com/repos/huggingface/datasets/issues/2909/events
https://github.com/huggingface/datasets/pull/2909
996,002,180
PR_kwDODunzps4rutdo
2,909
fix anli splits
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,631,625,035,000
1,631,625,035,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2909", "html_url": "https://github.com/huggingface/datasets/pull/2909", "diff_url": "https://github.com/huggingface/datasets/pull/2909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2909.patch" }
I can't run the tests for the dummy data; I'm facing this error: `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
https://api.github.com/repos/huggingface/datasets/issues/2909/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2908/comments
https://api.github.com/repos/huggingface/datasets/issues/2908/events
https://github.com/huggingface/datasets/pull/2908
995,970,612
PR_kwDODunzps4rumwW
2,908
Update Zenodo metadata with creator names and affiliation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,623,177,000
1,631,629,765,000
1,631,629,765,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2908", "html_url": "https://github.com/huggingface/datasets/pull/2908", "diff_url": "https://github.com/huggingface/datasets/pull/2908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2908.patch" }
This PR helps prefill author data when automatically generating the DOI after each release.
https://api.github.com/repos/huggingface/datasets/issues/2908/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2907/comments
https://api.github.com/repos/huggingface/datasets/issues/2907/events
https://github.com/huggingface/datasets/pull/2907
995,968,152
PR_kwDODunzps4rumOy
2,907
add story_cloze dataset
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,631,623,013,000
1,631,623,013,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2907", "html_url": "https://github.com/huggingface/datasets/pull/2907", "diff_url": "https://github.com/huggingface/datasets/pull/2907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2907.patch" }
@lhoestq I have spent some time on this, but I still can't succeed in correctly testing the dummy data.
https://api.github.com/repos/huggingface/datasets/issues/2907/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2906/comments
https://api.github.com/repos/huggingface/datasets/issues/2906/events
https://github.com/huggingface/datasets/pull/2906
995,962,905
PR_kwDODunzps4rulH-
2,906
feat: 🎸 add a function to get a dataset config's split names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is" ]
1,631,622,682,000
1,632,155,739,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2906", "html_url": "https://github.com/huggingface/datasets/pull/2906", "diff_url": "https://github.com/huggingface/datasets/pull/2906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2906.patch" }
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub. Questions: - <strike>I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?</strike> no -> reverted - Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
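A usage sketch of the new function (the name comes from the PR title; treat the exact signature as tentative until merged):

```python
from datasets import get_dataset_split_names

print(get_dataset_split_names("squad"))  # e.g. ['train', 'validation']

# and, combined with the auth changes in this PR (placeholder repo name):
# get_dataset_split_names("my-org/my-private-dataset", use_auth_token=True)
```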
https://api.github.com/repos/huggingface/datasets/issues/2906/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2905/comments
https://api.github.com/repos/huggingface/datasets/issues/2905/events
https://github.com/huggingface/datasets/pull/2905
995,843,964
PR_kwDODunzps4ruL5X
2,905
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,614,577,000
1,631,622,337,000
1,631,622,337,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2905", "html_url": "https://github.com/huggingface/datasets/pull/2905", "diff_url": "https://github.com/huggingface/datasets/pull/2905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2905.patch" }
Update BibTeX entry.
https://api.github.com/repos/huggingface/datasets/issues/2905/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2904/comments
https://api.github.com/repos/huggingface/datasets/issues/2904/events
https://github.com/huggingface/datasets/issues/2904
995,814,222
I_kwDODunzps47WutO
2,904
FORCE_REDOWNLOAD does not work
{ "login": "anoopkatti", "id": 5278299, "node_id": "MDQ6VXNlcjUyNzgyOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5278299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anoopkatti", "html_url": "https://github.com/anoopkatti", "followers_url": "https://api.github.com/users/anoopkatti/followers", "following_url": "https://api.github.com/users/anoopkatti/following{/other_user}", "gists_url": "https://api.github.com/users/anoopkatti/gists{/gist_id}", "starred_url": "https://api.github.com/users/anoopkatti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anoopkatti/subscriptions", "organizations_url": "https://api.github.com/users/anoopkatti/orgs", "repos_url": "https://api.github.com/users/anoopkatti/repos", "events_url": "https://api.github.com/users/anoopkatti/events{/privacy}", "received_events_url": "https://api.github.com/users/anoopkatti/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue." ]
1,631,612,726,000
1,632,129,275,000
null
NONE
null
null
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says

+-------------------------------------+-----------+---------+
|                                     | Downloads | Dataset |
+=====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default) | Reuse     | Reuse   |
+-------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS`             | Reuse     | Fresh   |
+-------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD`                  | Fresh     | Fresh   |
+-------------------------------------+-----------+---------+

However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen. ## Steps to reproduce the bug ```python import pandas as pd from datasets import load_dataset, GenerateMode pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) ``` ## Expected results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numerals'], num_rows: 10 }) ## Actual results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numbers'], num_rows: 5 }) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.10 - PyArrow version: 3.0.0
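Until the extraction cache keys on file contents, a possible user-side workaround (an assumption, not an official fix) is to make the compressed file's path change whenever its contents change, e.g. by embedding a content hash in the filename:

```python
import hashlib
import shutil

def versioned_copy(path: str) -> str:
    # the extraction cache keys on the compressed file's path, so a new
    # content-derived name forces a fresh extraction
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    new_path = path.replace(".tsv.gz", f".{digest}.tsv.gz")
    shutil.copy(path, new_path)
    return new_path

# ee = load_dataset('csv', data_files=[versioned_copy('/tmp/test.tsv.gz')], ...)
```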
https://api.github.com/repos/huggingface/datasets/issues/2904/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2903/comments
https://api.github.com/repos/huggingface/datasets/issues/2903/events
https://github.com/huggingface/datasets/pull/2903
995,715,191
PR_kwDODunzps4rtxxV
2,903
Fix xpathopen to accept positional arguments
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "thanks!" ]
1,631,606,570,000
1,631,609,481,000
1,631,608,847,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2903", "html_url": "https://github.com/huggingface/datasets/pull/2903", "diff_url": "https://github.com/huggingface/datasets/pull/2903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2903.patch" }
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
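The shape of the fix is roughly to forward positional arguments instead of accepting only keyword arguments; a sketch (the import path and names are assumptions, not the exact patch):

```python
from datasets.utils.streaming_download_manager import xopen  # assumed location

def xpathopen(path, *args, **kwargs):
    # forwarding *args lets calls like path.open("w") work,
    # not only path.open(mode="w")
    return xopen(str(path), *args, **kwargs)
```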
https://api.github.com/repos/huggingface/datasets/issues/2903/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2902/comments
https://api.github.com/repos/huggingface/datasets/issues/2902/events
https://github.com/huggingface/datasets/issues/2902
995,254,216
MDU6SXNzdWU5OTUyNTQyMTY=
2,902
Add WIT Dataset
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "@hassiahk is working on it #2810 ", "WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/\r\nhttps://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/", "> @hassiahk is working on it #2810\r\n\r\nThank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. " ]
1,631,561,929,000
1,631,567,400,000
null
CONTRIBUTOR
null
null
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (excerpt from their Github README.md) > - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples. > - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages. > - A collection of diverse set of concepts and real world entities. > - Brings forth challenging real-world test sets. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets/issues/2902/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2901/comments
https://api.github.com/repos/huggingface/datasets/issues/2901/events
https://github.com/huggingface/datasets/issues/2901
995,232,844
MDU6SXNzdWU5OTUyMzI4NDQ=
2,901
Incompatibility with pytest
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!" ]
1,631,560,337,000
1,631,608,847,000
1,631,608,847,000
CONTRIBUTOR
null
null
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pytest test.py ``` ## Expected results It should give something like: ``` collected 1 item test.py . [100%] ======= 1 passed in 3.15s ======= ``` ## Actual results ``` ============================================================================================================================= test session starts ============================================================================================================================== platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml plugins: anyio-3.3.1 collected 1 item tests/queries/test_rows.py . [100%]Traceback (most recent call last): File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module> raise SystemExit(pytest.console_main()) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main code = main() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall return outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main return wrap_session(config, _main) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session config.hook.pytest_sessionfinish( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall gen.send(outcome) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish config.cache.set("cache/nodeids", sorted(self.cached_nodeids)) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set f = path.open("w") TypeError: xpathopen() takes 1 positional argument but 2 were given ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
https://api.github.com/repos/huggingface/datasets/issues/2901/timeline
null
false
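An illustrative note for issue 2901 above: the traceback shows pytest's cache provider calling `path.open("w")` while the patched `xpathopen` accepted only the path. Below is a minimal sketch of the kind of signature fix the error implies; the function body here is hypothetical (the real patch lives in `datasets`' streaming utilities), but it shows the shape of the fix:

```python
from pathlib import Path

# Hypothetical sketch: forward the mode and any other arguments instead of
# accepting only the path, so calls like path.open("w") keep working.
def xpathopen(path: Path, *args, **kwargs):
    return open(path, *args, **kwargs)
```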
https://api.github.com/repos/huggingface/datasets/issues/2900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2900/comments
https://api.github.com/repos/huggingface/datasets/issues/2900/events
https://github.com/huggingface/datasets/pull/2900
994,922,580
MDExOlB1bGxSZXF1ZXN0NzMyNzczNDkw
2,900
Fix null sequence encoding
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,541,308,000
1,631,542,663,000
1,631,542,662,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2900", "html_url": "https://github.com/huggingface/datasets/pull/2900", "diff_url": "https://github.com/huggingface/datasets/pull/2900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2900.patch" }
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fix https://github.com/huggingface/datasets/issues/2892
https://api.github.com/repos/huggingface/datasets/issues/2900/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2899/comments
https://api.github.com/repos/huggingface/datasets/issues/2899/events
https://github.com/huggingface/datasets/issues/2899
994,082,432
MDU6SXNzdWU5OTQwODI0MzI=
2,899
Dataset
{ "login": "rcacho172", "id": 90449239, "node_id": "MDQ6VXNlcjkwNDQ5MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcacho172", "html_url": "https://github.com/rcacho172", "followers_url": "https://api.github.com/users/rcacho172/followers", "following_url": "https://api.github.com/users/rcacho172/following{/other_user}", "gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions", "organizations_url": "https://api.github.com/users/rcacho172/orgs", "repos_url": "https://api.github.com/users/rcacho172/repos", "events_url": "https://api.github.com/users/rcacho172/events{/privacy}", "received_events_url": "https://api.github.com/users/rcacho172/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,432,333,000
1,631,463,135,000
1,631,463,135,000
NONE
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets/issues/2899/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2898/comments
https://api.github.com/repos/huggingface/datasets/issues/2898/events
https://github.com/huggingface/datasets/issues/2898
994,032,814
MDU6SXNzdWU5OTQwMzI4MTQ=
2,898
Hug emoji
{ "login": "Jackg-08", "id": 90539794, "node_id": "MDQ6VXNlcjkwNTM5Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/90539794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jackg-08", "html_url": "https://github.com/Jackg-08", "followers_url": "https://api.github.com/users/Jackg-08/followers", "following_url": "https://api.github.com/users/Jackg-08/following{/other_user}", "gists_url": "https://api.github.com/users/Jackg-08/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jackg-08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jackg-08/subscriptions", "organizations_url": "https://api.github.com/users/Jackg-08/orgs", "repos_url": "https://api.github.com/users/Jackg-08/repos", "events_url": "https://api.github.com/users/Jackg-08/events{/privacy}", "received_events_url": "https://api.github.com/users/Jackg-08/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,417,271,000
1,631,463,193,000
1,631,463,193,000
NONE
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets/issues/2898/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2897/comments
https://api.github.com/repos/huggingface/datasets/issues/2897/events
https://github.com/huggingface/datasets/pull/2897
993,798,386
MDExOlB1bGxSZXF1ZXN0NzMxOTA0ODk4
2,897
Add OpenAI's HumanEval dataset
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)" ]
1,631,353,067,000
1,631,804,531,000
1,631,804,531,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2897", "html_url": "https://github.com/huggingface/datasets/pull/2897", "diff_url": "https://github.com/huggingface/datasets/pull/2897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2897.patch" }
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems, each with a solution and unit tests to verify it. This dataset is useful for evaluating code generation models.
https://api.github.com/repos/huggingface/datasets/issues/2897/timeline
null
true
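A hedged usage sketch for the dataset added in the PR above. The dataset id `openai_humaneval`, the `test` split, and the `prompt` field are assumptions based on the upstream repository, not confirmed by the PR text:

```python
from datasets import load_dataset

# Dataset id, split and field name are assumed, not taken from the PR body.
humaneval = load_dataset("openai_humaneval", split="test")
print(humaneval[0]["prompt"])
```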
https://api.github.com/repos/huggingface/datasets/issues/2896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2896/comments
https://api.github.com/repos/huggingface/datasets/issues/2896/events
https://github.com/huggingface/datasets/pull/2896
993,613,113
MDExOlB1bGxSZXF1ZXN0NzMxNzcwMTE3
2,896
add multi-proc in `to_csv`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,631,309,709,000
1,631,309,709,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2896", "html_url": "https://github.com/huggingface/datasets/pull/2896", "diff_url": "https://github.com/huggingface/datasets/pull/2896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2896.patch" }
This PR extends the multi-proc method used in #2747 for `to_json` to `to_csv` as well. Results on my machine after benchmarking on the `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_size 425.6553490161896 Time taken on 1 num_proc, 50000 batch_size 623.5897650718689 Time taken on 4 num_proc, 50000 batch_size 380.0402421951294 Time taken on 4 num_proc, 100000 batch_size 361.7168130874634 ``` This is a WIP, as writing tests for this PR is still pending. I'm also exploring [this](https://arrow.apache.org/docs/python/csv.html#incremental-writing) approach, for which I'm using `pyarrow-5.0.0`.
https://api.github.com/repos/huggingface/datasets/issues/2896/timeline
null
true
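A minimal usage sketch for the multi-proc export benchmarked in the PR above. The `num_proc` and `batch_size` keyword names mirror the benchmark log and are assumed to be the arguments the PR exposes on `to_csv`:

```python
from datasets import load_dataset

# Settings taken from the benchmark in the PR body; the keyword names
# are assumptions until the PR is merged.
dataset = load_dataset("ascent_kb", split="train")
dataset.to_csv("ascent_kb.csv", num_proc=4, batch_size=10_000)
```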
https://api.github.com/repos/huggingface/datasets/issues/2895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2895/comments
https://api.github.com/repos/huggingface/datasets/issues/2895/events
https://github.com/huggingface/datasets/pull/2895
993,462,274
MDExOlB1bGxSZXF1ZXN0NzMxNjQ0NTY2
2,895
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
{ "login": "arsarabi", "id": 12345848, "node_id": "MDQ6VXNlcjEyMzQ1ODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/12345848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arsarabi", "html_url": "https://github.com/arsarabi", "followers_url": "https://api.github.com/users/arsarabi/followers", "following_url": "https://api.github.com/users/arsarabi/following{/other_user}", "gists_url": "https://api.github.com/users/arsarabi/gists{/gist_id}", "starred_url": "https://api.github.com/users/arsarabi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arsarabi/subscriptions", "organizations_url": "https://api.github.com/users/arsarabi/orgs", "repos_url": "https://api.github.com/users/arsarabi/repos", "events_url": "https://api.github.com/users/arsarabi/events{/privacy}", "received_events_url": "https://api.github.com/users/arsarabi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,296,617,000
1,632,264,601,000
1,632,212,315,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2895", "html_url": "https://github.com/huggingface/datasets/pull/2895", "diff_url": "https://github.com/huggingface/datasets/pull/2895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2895.patch" }
This PR partially addresses #2252. ``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``.
https://api.github.com/repos/huggingface/datasets/issues/2895/timeline
null
true
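An illustrative plain-pyarrow snippet for the PR above, showing why `replace_schema_metadata` is cheaper than `cast`: it only swaps the schema-level metadata and never touches the column buffers. This uses pyarrow directly, not the `datasets` table wrapper the PR modifies, and the metadata key is made up for the example:

```python
import pyarrow as pa

table = pa.table({"a": [1, 2, 3]})
# cast rebuilds the table against a new schema (cost grows with data size);
# replace_schema_metadata only attaches new key/value metadata to the schema.
updated = table.replace_schema_metadata({b"huggingface": b'{"info": {}}'})
print(updated.schema.metadata)
```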
https://api.github.com/repos/huggingface/datasets/issues/2894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2894/comments
https://api.github.com/repos/huggingface/datasets/issues/2894/events
https://github.com/huggingface/datasets/pull/2894
993,375,654
MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5
2,894
Fix COUNTER dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,290,049,000
1,631,291,265,000
1,631,291,264,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2894", "html_url": "https://github.com/huggingface/datasets/pull/2894", "diff_url": "https://github.com/huggingface/datasets/pull/2894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2894.patch" }
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
https://api.github.com/repos/huggingface/datasets/issues/2894/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2893/comments
https://api.github.com/repos/huggingface/datasets/issues/2893/events
https://github.com/huggingface/datasets/pull/2893
993,342,781
MDExOlB1bGxSZXF1ZXN0NzMxNTQ0NDQz
2,893
add mbpp dataset
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think it's fine to have the original schema" ]
1,631,287,650,000
1,631,784,942,000
1,631,784,942,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2893", "html_url": "https://github.com/huggingface/datasets/pull/2893", "diff_url": "https://github.com/huggingface/datasets/pull/2893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2893.patch" }
This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp), as mentioned in #2816. The dataset contains two versions: a full and a sanitized one. They have slightly different schemas, and in its current state the loading preserves the original schema of each. An open question is whether to harmonize the two schemas when loading the dataset or to preserve the original ones. Since not all fields overlap, the schemas would not be exactly the same.
https://api.github.com/repos/huggingface/datasets/issues/2893/timeline
null
true
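A hedged loading sketch for the two versions described in the PR above; the config names `full` and `sanitized` are assumptions derived from the PR description:

```python
from datasets import load_dataset

# Config names assumed from the PR body ("a full and a sanitized one").
full = load_dataset("mbpp", "full", split="test")
sanitized = load_dataset("mbpp", "sanitized", split="test")
```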
https://api.github.com/repos/huggingface/datasets/issues/2892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2892/comments
https://api.github.com/repos/huggingface/datasets/issues/2892/events
https://github.com/huggingface/datasets/issues/2892
993,274,572
MDU6SXNzdWU5OTMyNzQ1NzI=
2,892
Error when encoding a dataset with None objects with a Sequence feature
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)" ]
1,631,283,103,000
1,631,542,693,000
1,631,542,662,000
MEMBER
null
null
There is an error when encoding a dataset containing None objects with a Sequence feature. To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-24-40add67f8751> in <module> 2 data = {"a": [[0], None]} 3 features = Features({"a": Sequence(Value("int32"))}) ----> 4 dataset = Dataset.from_dict(data, features=features) [...] ~/datasets/features.py in encode_nested_example(schema, obj) 888 if isinstance(obj, str): # don't interpret a string as a list 889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) --> 890 return [encode_nested_example(schema.feature, o) for o in obj] 891 # Object with special encoding: 892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks TypeError: 'NoneType' object is not iterable ``` Instead, it should run without error, as if the `features` were not passed.
https://api.github.com/repos/huggingface/datasets/issues/2892/timeline
null
false
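A minimal sketch of the behavior expected once the fix from PR #2900 (referenced in the comment above) is in place: encoding should pass a `None` sequence through instead of iterating over it. The code reuses the reproduction from the issue body:

```python
from datasets import Dataset, Features, Sequence, Value

data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
# With the fix, this no longer raises; the None entry is stored as a null
# value (exactly what dataset[1] returns depends on the fix's behavior).
dataset = Dataset.from_dict(data, features=features)
```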
https://api.github.com/repos/huggingface/datasets/issues/2891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2891/comments
https://api.github.com/repos/huggingface/datasets/issues/2891/events
https://github.com/huggingface/datasets/pull/2891
993,161,984
MDExOlB1bGxSZXF1ZXN0NzMxMzkwNjM2
2,891
[WIP] Allow dynamic first dimension for ArrayXD
{ "login": "rpowalski", "id": 10357417, "node_id": "MDQ6VXNlcjEwMzU3NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/10357417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rpowalski", "html_url": "https://github.com/rpowalski", "followers_url": "https://api.github.com/users/rpowalski/followers", "following_url": "https://api.github.com/users/rpowalski/following{/other_user}", "gists_url": "https://api.github.com/users/rpowalski/gists{/gist_id}", "starred_url": "https://api.github.com/users/rpowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rpowalski/subscriptions", "organizations_url": "https://api.github.com/users/rpowalski/orgs", "repos_url": "https://api.github.com/users/rpowalski/repos", "events_url": "https://api.github.com/users/rpowalski/events{/privacy}", "received_events_url": "https://api.github.com/users/rpowalski/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,631,274,772,000
1,632,142,453,000
null
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2891", "html_url": "https://github.com/huggingface/datasets/pull/2891", "diff_url": "https://github.com/huggingface/datasets/pull/2891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2891.patch" }
Add support for a dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). The following changes allow the `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where the first dimension can vary. @lhoestq Could you suggest how you want to extend the test suite? For now I added only very limited testing.
https://api.github.com/repos/huggingface/datasets/issues/2891/timeline
null
true
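A hedged sketch of what the feature proposed in the PR above could look like from the user side; the `shape=(None, ...)` spelling for a dynamic first dimension is an assumption about the eventual API, not something confirmed by this WIP PR:

```python
import numpy as np
from datasets import Array2D, Dataset, Features

# shape=(None, 3): per-example dynamic first dimension (assumed API).
features = Features({"m": Array2D(shape=(None, 3), dtype="float32")})
data = {"m": [np.zeros((2, 3), dtype="float32"), np.ones((5, 3), dtype="float32")]}
dataset = Dataset.from_dict(data, features=features)
```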
https://api.github.com/repos/huggingface/datasets/issues/2890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2890/comments
https://api.github.com/repos/huggingface/datasets/issues/2890/events
https://github.com/huggingface/datasets/issues/2890
993,074,102
MDU6SXNzdWU5OTMwNzQxMDI=
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
{ "login": "rcacho172", "id": 90449239, "node_id": "MDQ6VXNlcjkwNDQ5MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcacho172", "html_url": "https://github.com/rcacho172", "followers_url": "https://api.github.com/users/rcacho172/followers", "following_url": "https://api.github.com/users/rcacho172/following{/other_user}", "gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions", "organizations_url": "https://api.github.com/users/rcacho172/orgs", "repos_url": "https://api.github.com/users/rcacho172/repos", "events_url": "https://api.github.com/users/rcacho172/events{/privacy}", "received_events_url": "https://api.github.com/users/rcacho172/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,267,477,000
1,631,274,329,000
1,631,274,329,000
NONE
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets/issues/2890/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2889/comments
https://api.github.com/repos/huggingface/datasets/issues/2889/events
https://github.com/huggingface/datasets/issues/2889
992,968,382
MDU6SXNzdWU5OTI5NjgzODI=
2,889
Coc
{ "login": "Bwiggity", "id": 90444264, "node_id": "MDQ6VXNlcjkwNDQ0MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/90444264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bwiggity", "html_url": "https://github.com/Bwiggity", "followers_url": "https://api.github.com/users/Bwiggity/followers", "following_url": "https://api.github.com/users/Bwiggity/following{/other_user}", "gists_url": "https://api.github.com/users/Bwiggity/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bwiggity/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bwiggity/subscriptions", "organizations_url": "https://api.github.com/users/Bwiggity/orgs", "repos_url": "https://api.github.com/users/Bwiggity/repos", "events_url": "https://api.github.com/users/Bwiggity/events{/privacy}", "received_events_url": "https://api.github.com/users/Bwiggity/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,259,127,000
1,631,274,354,000
1,631,274,354,000
NONE
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets/issues/2889/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2888/comments
https://api.github.com/repos/huggingface/datasets/issues/2888/events
https://github.com/huggingface/datasets/issues/2888
992,676,535
MDU6SXNzdWU5OTI2NzY1MzU=
2,888
v1.11.1 release date
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi ! Probably 1.12 on monday :)\r\n", "@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)" ]
1,631,224,395,000
1,631,477,915,000
1,631,463,339,000
NONE
null
null
Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release for 2 months. When do you plan to publish the v1.11.1 release?
https://api.github.com/repos/huggingface/datasets/issues/2888/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2887/comments
https://api.github.com/repos/huggingface/datasets/issues/2887/events
https://github.com/huggingface/datasets/pull/2887
992,576,305
MDExOlB1bGxSZXF1ZXN0NzMwODg4MTU3
2,887
#2837 Use cache folder for lockfile
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,631,217,356,000
1,632,231,578,000
null
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2887", "html_url": "https://github.com/huggingface/datasets/pull/2887", "diff_url": "https://github.com/huggingface/datasets/pull/2887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2887.patch" }
Fixes #2837 Use a cache directory to store the FileLock. The issue was that the lock file was in a read-only folder.
https://api.github.com/repos/huggingface/datasets/issues/2887/timeline
null
true
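A minimal sketch of the approach described in the PR above, using the `filelock` package that `datasets` already depends on; the cache path and lock name here are illustrative:

```python
import os
from filelock import FileLock

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
os.makedirs(cache_dir, exist_ok=True)
# Keep the lock file in a writable cache directory instead of next to
# a dataset that may live on a read-only volume.
with FileLock(os.path.join(cache_dir, "builder.lock")):
    pass  # critical section goes here
```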
https://api.github.com/repos/huggingface/datasets/issues/2886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2886/comments
https://api.github.com/repos/huggingface/datasets/issues/2886/events
https://github.com/huggingface/datasets/issues/2886
992,534,632
MDU6SXNzdWU5OTI1MzQ2MzI=
2,886
Hj
{ "login": "Noorasri", "id": 90416328, "node_id": "MDQ6VXNlcjkwNDE2MzI4", "avatar_url": "https://avatars.githubusercontent.com/u/90416328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Noorasri", "html_url": "https://github.com/Noorasri", "followers_url": "https://api.github.com/users/Noorasri/followers", "following_url": "https://api.github.com/users/Noorasri/following{/other_user}", "gists_url": "https://api.github.com/users/Noorasri/gists{/gist_id}", "starred_url": "https://api.github.com/users/Noorasri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Noorasri/subscriptions", "organizations_url": "https://api.github.com/users/Noorasri/orgs", "repos_url": "https://api.github.com/users/Noorasri/repos", "events_url": "https://api.github.com/users/Noorasri/events{/privacy}", "received_events_url": "https://api.github.com/users/Noorasri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,213,932,000
1,631,274,389,000
1,631,274,389,000
NONE
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2886/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2885/comments
https://api.github.com/repos/huggingface/datasets/issues/2885/events
https://github.com/huggingface/datasets/issues/2885
992,160,544
MDU6SXNzdWU5OTIxNjA1NDQ=
2,885
Adding an Elastic Search index to a Dataset
{ "login": "MotzWanted", "id": 36195371, "node_id": "MDQ6VXNlcjM2MTk1Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/36195371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MotzWanted", "html_url": "https://github.com/MotzWanted", "followers_url": "https://api.github.com/users/MotzWanted/followers", "following_url": "https://api.github.com/users/MotzWanted/following{/other_user}", "gists_url": "https://api.github.com/users/MotzWanted/gists{/gist_id}", "starred_url": "https://api.github.com/users/MotzWanted/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MotzWanted/subscriptions", "organizations_url": "https://api.github.com/users/MotzWanted/orgs", "repos_url": "https://api.github.com/users/MotzWanted/repos", "events_url": "https://api.github.com/users/MotzWanted/events{/privacy}", "received_events_url": "https://api.github.com/users/MotzWanted/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env" ]
1,631,190,099,000
1,632,128,781,000
null
NONE
null
null
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s] No error is thrown, but the indexing breaks at ~90%. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset from elasticsearch import Elasticsearch es = Elasticsearch() squad = load_dataset('squad', split='validation') index_name = "corpus" es_config = { "settings": { "number_of_shards": 1, "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}}, }, "mappings": { "properties": { "idx" : {"type" : "keyword"}, "title" : {"type" : "keyword"}, "text": { "type": "text", "analyzer": "standard", "similarity": "BM25" }, } }, } class IndexBuilder: """ Elastic search indexing of a corpus """ def __init__( self, *args, #corpus : None, dataset : squad, index_name = str, query = str, config = dict, **kwargs, ): # instantiate HuggingFace dataset self.dataset = dataset # instantiate ElasticSearch config self.config = config self.es = Elasticsearch() self.index_name = index_name self.query = query def elastic_index(self): print(self.es.info) self.es.indices.delete(index=self.index_name, ignore=[400, 404]) search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config) return search_index def exact_match_method(self, index): scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1) return scores, retrieved_examples if __name__ == "__main__": print(type(squad)) Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config) search_index = Index.elastic_index() scores, examples = Index.exact_match_method(search_index) print(scores, examples) for name in squad.column_names: print(type(squad[name])) ``` ## Environment info We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment. Poetry: - Python version: 3.8 - PyArrow: 4.0.1 - Elasticsearch: 7.13.4 - datasets: 1.10.2 Local: - Python version: 3.8 - PyArrow: 3.0.0 - Elasticsearch: 7.7.1 - datasets: 1.7.0
https://api.github.com/repos/huggingface/datasets/issues/2885/timeline
null
false
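A stripped-down version of the reproduction in the issue above, using only the documented `datasets` calls from the reporter's `IndexBuilder` class; this can help check whether the hang at ~90% comes from the wrapper class or from the environment:

```python
from datasets import load_dataset

squad = load_dataset("squad", split="validation")
# Same documented API the issue's IndexBuilder wraps.
squad.add_elasticsearch_index(column="context", host="localhost", port="9200")
scores, examples = squad.get_nearest_examples("context", "Where was Chopin born?", k=1)
```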
https://api.github.com/repos/huggingface/datasets/issues/2884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2884/comments
https://api.github.com/repos/huggingface/datasets/issues/2884/events
https://github.com/huggingface/datasets/pull/2884
992,135,698
MDExOlB1bGxSZXF1ZXN0NzMwNTA4MTE1
2,884
Add IC, SI, ER tasks to SUPERB
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ", "Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main\r\nAre we allowed to make these datasets public or would that violate the terms of their use?", "@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us. \nFor example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(", "> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.\r\n> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(\r\n\r\nI think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)?" ]
1,631,188,563,000
1,632,129,478,000
1,632,128,449,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2884", "html_url": "https://github.com/huggingface/datasets/pull/2884", "diff_url": "https://github.com/huggingface/datasets/pull/2884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2884.patch" }
This PR adds 3 additional classification tasks to SUPERB #### Intent Classification Dataset URL seems to be down at the moment :( See the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands #### Speaker Identification Manual download script: ``` mkdir VoxCeleb1 cd VoxCeleb1 wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad cat vox1_dev* > vox1_dev_wav.zip unzip vox1_dev_wav.zip wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip unzip vox1_test_wav.zip # download the official SUPERB train-dev-test split wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt ``` S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification #### Emotion Recognition Manual download requires going through a slow application process, see the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition #### :warning: Note These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
https://api.github.com/repos/huggingface/datasets/issues/2884/timeline
null
true
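A hedged loading sketch for the tasks added in the PR above. The config names `ic`, `si`, and `er` are assumptions based on the task abbreviations in the PR title, and the `data_dir` paths are placeholders for the manual downloads the PR describes:

```python
from datasets import load_dataset

# Config names and data_dir paths are assumptions, not taken from the PR.
ic = load_dataset("superb", "ic", data_dir="./fluent_speech_commands")
si = load_dataset("superb", "si", data_dir="./VoxCeleb1")
er = load_dataset("superb", "er", data_dir="./IEMOCAP")
```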
https://api.github.com/repos/huggingface/datasets/issues/2883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2883/comments
https://api.github.com/repos/huggingface/datasets/issues/2883/events
https://github.com/huggingface/datasets/pull/2883
991,969,875
MDExOlB1bGxSZXF1ZXN0NzMwMzYzNTQz
2,883
Fix data URLs and metadata in DocRED dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,177,734,000
1,631,532,271,000
1,631,532,271,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2883", "html_url": "https://github.com/huggingface/datasets/pull/2883", "diff_url": "https://github.com/huggingface/datasets/pull/2883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2883.patch" }
The host of the `docred` dataset has updated the `dev` data file. This PR: - Updates the dev URL - Updates the dataset metadata This PR also fixes the URL of the `train_distant` split, which was wrong. Fix #2882.
https://api.github.com/repos/huggingface/datasets/issues/2883/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2882/comments
https://api.github.com/repos/huggingface/datasets/issues/2882/events
https://github.com/huggingface/datasets/issues/2882
991,800,141
MDU6SXNzdWU5OTE4MDAxNDE=
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
{ "login": "tmpr", "id": 51313597, "node_id": "MDQ6VXNlcjUxMzEzNTk3", "avatar_url": "https://avatars.githubusercontent.com/u/51313597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tmpr", "html_url": "https://github.com/tmpr", "followers_url": "https://api.github.com/users/tmpr/followers", "following_url": "https://api.github.com/users/tmpr/following{/other_user}", "gists_url": "https://api.github.com/users/tmpr/gists{/gist_id}", "starred_url": "https://api.github.com/users/tmpr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tmpr/subscriptions", "organizations_url": "https://api.github.com/users/tmpr/orgs", "repos_url": "https://api.github.com/users/tmpr/repos", "events_url": "https://api.github.com/users/tmpr/events{/privacy}", "received_events_url": "https://api.github.com/users/tmpr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.\r\n\r\nI'm fixing all this.\r\n\r\n" ]
1,631,166,902,000
1,631,532,270,000
1,631,532,270,000
NONE
null
null
## Describe the bug I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`. ## Steps to reproduce the bug It is essentially just this code: ```python import datasets data = datasets.load_dataset('docred') ``` ## Expected results The DocRED dataset should be loaded without any problems. ## Actual results ``` NonMatchingChecksumError Traceback (most recent call last) <ipython-input-4-b1b83f25a16c> in <module> ----> 1 d = datasets.load_dataset('docred') ~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 845 846 # Download and prepare data --> 847 builder_instance.download_and_prepare( 848 download_config=download_config, 849 download_mode=download_mode, ~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 613 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 614 if not downloaded_from_gcs: --> 615 self._download_and_prepare( 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 617 ) ~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 673 # Checksums verification 674 if verify_infos: --> 675 verify_checksums( 676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 677 ) ~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7'] ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 5.0.0 This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`. ## Remarks - I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache. - The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.
https://api.github.com/repos/huggingface/datasets/issues/2882/timeline
null
false
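Note on the `docred` record above: the maintainer's comment mentions `ignore_verifications=True` as a stopgap for checksum mismatches. Below is a minimal sketch of that workaround, assuming a `datasets` 1.x install where this keyword exists; it would not have been sufficient in this particular case, because the stale Google Drive link itself was broken, so it is shown only to illustrate the flag.

```python
# Hedged sketch: skip checksum/size verification for a dataset whose upstream
# files changed. Only useful when the download URLs still resolve.
from datasets import load_dataset

# `ignore_verifications=True` disables checksum verification in datasets 1.x.
docred = load_dataset("docred", ignore_verifications=True)
print(docred)
```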
https://api.github.com/repos/huggingface/datasets/issues/2881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2881/comments
https://api.github.com/repos/huggingface/datasets/issues/2881/events
https://github.com/huggingface/datasets/pull/2881
991,639,142
MDExOlB1bGxSZXF1ZXN0NzMwMDc1OTAy
2,881
Add BIOSSES dataset
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,147,736,000
1,631,542,840,000
1,631,542,840,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2881", "html_url": "https://github.com/huggingface/datasets/pull/2881", "diff_url": "https://github.com/huggingface/datasets/pull/2881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2881.patch" }
Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021"
https://api.github.com/repos/huggingface/datasets/issues/2881/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2880/comments
https://api.github.com/repos/huggingface/datasets/issues/2880/events
https://github.com/huggingface/datasets/pull/2880
990,877,940
MDExOlB1bGxSZXF1ZXN0NzI5NDIzMDMy
2,880
Extend support for streaming datasets that use pathlib.Path stem/suffix
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,090,563,000
1,631,193,209,000
1,631,193,209,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2880", "html_url": "https://github.com/huggingface/datasets/pull/2880", "diff_url": "https://github.com/huggingface/datasets/pull/2880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2880.patch" }
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`. Related to #2876, #2874, #2866. CC: @severo
https://api.github.com/repos/huggingface/datasets/issues/2880/timeline
null
true
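To illustrate what patching `pathlib.Path.stem` and `pathlib.Path.suffix` has to achieve for streaming (per PR #2880 above), here is a minimal sketch that reproduces pathlib's stem/suffix semantics for URL-like paths using plain string operations. The helper names are illustrative, not the actual `datasets` internals.

```python
# Hedged sketch of the idea behind the patch: for URL-like paths seen in
# streaming mode, derive stem/suffix without local-filesystem assumptions.
import posixpath

def url_stem(url: str) -> str:
    base = posixpath.basename(url)          # "train.jsonl.gz" from a URL
    dot = base.rfind(".")
    return base[:dot] if dot > 0 else base  # "train.jsonl"

def url_suffix(url: str) -> str:
    base = posixpath.basename(url)
    dot = base.rfind(".")
    return base[dot:] if dot > 0 else ""    # ".gz"

print(url_stem("https://example.com/data/train.jsonl.gz"))    # train.jsonl
print(url_suffix("https://example.com/data/train.jsonl.gz"))  # .gz
```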
https://api.github.com/repos/huggingface/datasets/issues/2879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2879/comments
https://api.github.com/repos/huggingface/datasets/issues/2879/events
https://github.com/huggingface/datasets/issues/2879
990,257,404
MDU6SXNzdWU5OTAyNTc0MDQ=
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
{ "login": "rcgale", "id": 2279700, "node_id": "MDQ6VXNlcjIyNzk3MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2279700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcgale", "html_url": "https://github.com/rcgale", "followers_url": "https://api.github.com/users/rcgale/followers", "following_url": "https://api.github.com/users/rcgale/following{/other_user}", "gists_url": "https://api.github.com/users/rcgale/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcgale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcgale/subscriptions", "organizations_url": "https://api.github.com/users/rcgale/orgs", "repos_url": "https://api.github.com/users/rcgale/repos", "events_url": "https://api.github.com/users/rcgale/events{/privacy}", "received_events_url": "https://api.github.com/users/rcgale/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that should work.\r\n\r\nOn the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) with the correct version of `datasets`?", "I just proposed a change in the blog post.\r\n\r\nI had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.\r\n\r\nI still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem (\"Error: the requested data set requires `datasets>=1.5.0`.\"). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data.", "Also, thank you for a quick and helpful reply!" ]
1,631,040,825,000
1,631,120,119,000
1,631,092,348,000
NONE
null
null
## Describe the bug Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same. ## Steps to reproduce the bug I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english But here's a distilled repro: ```python !pip install datasets==1.4.1 from datasets import load_dataset timit = load_dataset("timit_asr", cache_dir="./temp") unique_transcripts = set(timit["train"]["text"]) print(unique_transcripts) assert len(unique_transcripts) > 1 ``` ## Expected results Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it. ## Actual results Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore." ## Environment info - `datasets` version: 1.4.1 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.9 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: tried both - Using distributed or parallel set-up in script?: no
https://api.github.com/repos/huggingface/datasets/issues/2879/timeline
null
false
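The discussion in issue #2879 above suggests failing fast when a known-broken client version is detected. A client-side sketch of that idea follows, assuming `packaging` is installed; `datasets` did not ship this exact check, and the 1.5.0 threshold is taken from the fix commit linked in the comments.

```python
# Hedged sketch of the version gate suggested in the comments: raise early,
# client-side, when a library version known to corrupt TIMIT is detected.
from packaging import version
import datasets

MIN_VERSION_FOR_TIMIT = "1.5.0"  # assumed threshold, per the linked fix

if version.parse(datasets.__version__) < version.parse(MIN_VERSION_FOR_TIMIT):
    raise RuntimeError(
        f"timit_asr requires datasets>={MIN_VERSION_FOR_TIMIT}; "
        f"found {datasets.__version__} (transcripts load incorrectly)."
    )
```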
https://api.github.com/repos/huggingface/datasets/issues/2878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2878/comments
https://api.github.com/repos/huggingface/datasets/issues/2878/events
https://github.com/huggingface/datasets/issues/2878
990,093,316
MDU6SXNzdWU5OTAwOTMzMTY=
2,878
NotADirectoryError: [WinError 267] During load_from_disk
{ "login": "Grassycup", "id": 1875064, "node_id": "MDQ6VXNlcjE4NzUwNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1875064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Grassycup", "html_url": "https://github.com/Grassycup", "followers_url": "https://api.github.com/users/Grassycup/followers", "following_url": "https://api.github.com/users/Grassycup/following{/other_user}", "gists_url": "https://api.github.com/users/Grassycup/gists{/gist_id}", "starred_url": "https://api.github.com/users/Grassycup/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Grassycup/subscriptions", "organizations_url": "https://api.github.com/users/Grassycup/orgs", "repos_url": "https://api.github.com/users/Grassycup/repos", "events_url": "https://api.github.com/users/Grassycup/events{/privacy}", "received_events_url": "https://api.github.com/users/Grassycup/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,631,027,705,000
1,631,027,705,000
null
NONE
null
null
## Describe the bug Trying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on non-windows environment (AWS Sagemaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3 from datasets import load_from_disk from datasets.filesystems import S3FileSystem s3_file = "output of save_to_disk" s3_filesystem = S3FileSystem() load_from_disk(s3_file, fs=s3_filesystem) ``` ## Expected results load_from_disk succeeds without error ## Actual results Seems like it succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it. ``` Exception ignored in: <finalize object at 0x26409231ce0; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' Exception ignored in: <finalize object at 0x264091c7880; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0
https://api.github.com/repos/huggingface/datasets/issues/2878/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2877/comments
https://api.github.com/repos/huggingface/datasets/issues/2877/events
https://github.com/huggingface/datasets/issues/2877
990,027,249
MDU6SXNzdWU5OTAwMjcyNDk=
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,631,023,744,000
1,631,023,744,000
null
MEMBER
null
null
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files. There are already a few exceptions: - files starting with "." are ignored - the dataset card "README.md" is ignored - any file named "config.json" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs, for example; this isn't 100% certain). However, any data files in a folder named "dummy" should be ignored as well, since they should only be used to test the dataset. The same goes for "dataset_infos.json", which should only be used to get the `dataset.info`.
https://api.github.com/repos/huggingface/datasets/issues/2877/timeline
null
false
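The exclusion rules listed in issue #2877 above can be summarized as a small predicate. The following is a hedged sketch, not the actual resolver in `datasets` (which is more involved); the constants and function name are illustrative.

```python
# Hedged sketch of the data-file exclusion rules described in the issue.
IGNORED_NAMES = {"README.md", "config.json", "dataset_infos.json"}
IGNORED_DIRS = {"dummy"}

def is_data_file(relative_path: str) -> bool:
    parts = relative_path.split("/")
    name = parts[-1]
    if name.startswith("."):                        # hidden files
        return False
    if name in IGNORED_NAMES:                       # metadata, not data
        return False
    if any(d in IGNORED_DIRS for d in parts[:-1]):  # files under dummy/
        return False
    return True

assert is_data_file("data/train.csv")
assert not is_data_file("dummy/1.0.0/dummy_data.zip")
assert not is_data_file("dataset_infos.json")
```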
https://api.github.com/repos/huggingface/datasets/issues/2876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2876/comments
https://api.github.com/repos/huggingface/datasets/issues/2876/events
https://github.com/huggingface/datasets/pull/2876
990,001,079
MDExOlB1bGxSZXF1ZXN0NzI4NjU3MDc2
2,876
Extend support for streaming datasets that use pathlib.Path.glob
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I am thinking that ideally we should call `fs.glob()` instead...", "Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs." ]
1,631,022,225,000
1,631,267,449,000
1,631,267,448,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2876", "html_url": "https://github.com/huggingface/datasets/pull/2876", "diff_url": "https://github.com/huggingface/datasets/pull/2876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2876.patch" }
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
https://api.github.com/repos/huggingface/datasets/issues/2876/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2875/comments
https://api.github.com/repos/huggingface/datasets/issues/2875/events
https://github.com/huggingface/datasets/issues/2875
989,919,398
MDU6SXNzdWU5ODk5MTkzOTg=
2,875
Add Congolese Swahili speech datasets
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
null
[]
null
[]
1,631,016,830,000
1,631,016,830,000
null
NONE
null
null
## Adding a Dataset - **Name:** Congolese Swahili speech corpora - **Data:** https://gamayun.translatorswb.org/data/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Also related: https://mobile.twitter.com/OktemAlp/status/1435196393631764482
https://api.github.com/repos/huggingface/datasets/issues/2875/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2874/comments
https://api.github.com/repos/huggingface/datasets/issues/2874/events
https://github.com/huggingface/datasets/pull/2874
989,685,328
MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4
2,874
Support streaming datasets that use pathlib
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```", "@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... 😅 ", "No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!" ]
1,631,000,149,000
1,631,039,122,000
1,631,014,875,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2874", "html_url": "https://github.com/huggingface/datasets/pull/2874", "diff_url": "https://github.com/huggingface/datasets/pull/2874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2874.patch" }
This PR extends the support in streaming mode for datasets that use `pathlib.Path`. Related to: #2866. CC: @severo
https://api.github.com/repos/huggingface/datasets/issues/2874/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2873/comments
https://api.github.com/repos/huggingface/datasets/issues/2873/events
https://github.com/huggingface/datasets/pull/2873
989,587,695
MDExOlB1bGxSZXF1ZXN0NzI4MzA0MTMw
2,873
adding swedish_medical_ner
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?", "Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset" ]
1,630,989,893,000
1,631,911,657,000
1,631,911,657,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2873", "html_url": "https://github.com/huggingface/datasets/pull/2873", "diff_url": "https://github.com/huggingface/datasets/pull/2873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2873.patch" }
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021". Code refactored.
https://api.github.com/repos/huggingface/datasets/issues/2873/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2872/comments
https://api.github.com/repos/huggingface/datasets/issues/2872/events
https://github.com/huggingface/datasets/pull/2872
989,453,069
MDExOlB1bGxSZXF1ZXN0NzI4MTkzMjkz
2,872
adding swedish_medical_ner
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,630,965,652,000
1,630,989,392,000
1,630,989,392,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2872", "html_url": "https://github.com/huggingface/datasets/pull/2872", "diff_url": "https://github.com/huggingface/datasets/pull/2872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2872.patch" }
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
https://api.github.com/repos/huggingface/datasets/issues/2872/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2871/comments
https://api.github.com/repos/huggingface/datasets/issues/2871/events
https://github.com/huggingface/datasets/issues/2871
989,436,088
MDU6SXNzdWU5ODk0MzYwODg=
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.", "Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n", "Sorted. Thanks!", "Reopening this. Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps://github.com/huggingface/datasets/pull/2873", "Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci/circleci: run_dataset_script_tests_pyarrow_1\" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue." ]
1,630,962,417,000
1,631,091,112,000
1,631,091,112,000
CONTRIBUTOR
null
null
In the test_dataset_common.py script, lines 288-289 ``` if datasets.config.PYARROW_VERSION.major < 3: packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"] ``` which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both datasets.__version__ == '1.11.0' and '1.9.0'. I am using Mac OS. ``` import datasets datasets.config.PYARROW_VERSION.major --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module> 1 import datasets ----> 2 datasets.config.PYARROW_VERSION.major AttributeError: 'str' object has no attribute 'major' ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 4.0.1
https://api.github.com/repos/huggingface/datasets/issues/2871/timeline
null
false
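The workaround in issue #2871 above splits the version string by hand; a more robust sketch uses `packaging.version`, which exposes a proper `.major` attribute. The fix on `datasets` master took a similar route by parsing the version in `config.py`; the snippet below is illustrative, with the version string hard-coded rather than read from `datasets.config`.

```python
# Hedged sketch: compare a version *string* robustly by parsing it first.
from packaging import version

PYARROW_VERSION = "4.0.1"  # what datasets.config.PYARROW_VERSION returned here

parsed = version.parse(PYARROW_VERSION)
if parsed.major < 3:  # parsed Version objects expose .major/.minor/.micro
    print("old pyarrow: drop the parquet packaged-dataset tests")
else:
    print("pyarrow >= 3: keep the parquet tests")
```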
https://api.github.com/repos/huggingface/datasets/issues/2870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2870/comments
https://api.github.com/repos/huggingface/datasets/issues/2870/events
https://github.com/huggingface/datasets/pull/2870
988,276,859
MDExOlB1bGxSZXF1ZXN0NzI3MjI4Njk5
2,870
Fix three typos in two files for documentation
{ "login": "leny-mi", "id": 25124853, "node_id": "MDQ6VXNlcjI1MTI0ODUz", "avatar_url": "https://avatars.githubusercontent.com/u/25124853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leny-mi", "html_url": "https://github.com/leny-mi", "followers_url": "https://api.github.com/users/leny-mi/followers", "following_url": "https://api.github.com/users/leny-mi/following{/other_user}", "gists_url": "https://api.github.com/users/leny-mi/gists{/gist_id}", "starred_url": "https://api.github.com/users/leny-mi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leny-mi/subscriptions", "organizations_url": "https://api.github.com/users/leny-mi/orgs", "repos_url": "https://api.github.com/users/leny-mi/repos", "events_url": "https://api.github.com/users/leny-mi/events{/privacy}", "received_events_url": "https://api.github.com/users/leny-mi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,630,756,183,000
1,630,916,481,000
1,630,916,375,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2870", "html_url": "https://github.com/huggingface/datasets/pull/2870", "diff_url": "https://github.com/huggingface/datasets/pull/2870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2870.patch" }
Changed "bacth_size" to "batch_size" (2x) Changed "intsructions" to "instructions"
https://api.github.com/repos/huggingface/datasets/issues/2870/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2869/comments
https://api.github.com/repos/huggingface/datasets/issues/2869/events
https://github.com/huggingface/datasets/issues/2869
987,676,420
MDU6SXNzdWU5ODc2NzY0MjA=
2,869
TypeError: 'NoneType' object is not callable
{ "login": "Chenfei-Kang", "id": 40911446, "node_id": "MDQ6VXNlcjQwOTExNDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/40911446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Chenfei-Kang", "html_url": "https://github.com/Chenfei-Kang", "followers_url": "https://api.github.com/users/Chenfei-Kang/followers", "following_url": "https://api.github.com/users/Chenfei-Kang/following{/other_user}", "gists_url": "https://api.github.com/users/Chenfei-Kang/gists{/gist_id}", "starred_url": "https://api.github.com/users/Chenfei-Kang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Chenfei-Kang/subscriptions", "organizations_url": "https://api.github.com/users/Chenfei-Kang/orgs", "repos_url": "https://api.github.com/users/Chenfei-Kang/repos", "events_url": "https://api.github.com/users/Chenfei-Kang/events{/privacy}", "received_events_url": "https://api.github.com/users/Chenfei-Kang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details and environment info (platform, PyArrow version)?", "> Hi, @Chenfei-Kang.\r\n> \r\n> I'm sorry, but I'm not able to reproduce your bug:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> ds = load_dataset(\"glue\", 'cola')\r\n> ds\r\n> ```\r\n> \r\n> ```\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 8551\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1043\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1063\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Could you please give more details and environment info (platform, PyArrow version)?\r\n\r\nSorry to reply you so late.\r\nplatform: pycharm 2021 + anaconda with python 3.7\r\nPyArrow version: 5.0.0\r\nhuggingface-hub: 0.0.16\r\ndatasets: 1.9.0\r\n", "- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?", "> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?\r\n\r\n1. For the platform, here are the output:\r\n - datasets` version: 1.11.0\r\n - Platform: Windows-10-10.0.19041-SP0\r\n - Python version: 3.7.10\r\n - PyArrow version: 5.0.0\r\n2. For the code and error:\r\n ```python\r\n from datasets import load_dataset, load_metric\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n ```\r\n ```python\r\n Traceback (most recent call last):\r\n ....\r\n ....\r\n File \"my_file.py\", line 2, in <module>\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n TypeError: 'NoneType' object is not callable\r\n ```\r\n Thank you!", "For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.", "One naive question: do you have internet access from the machine where you execute the code?", "> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.\r\n\r\nBut I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!" ]
1,630,668,459,000
1,631,102,998,000
1,631,093,095,000
NONE
null
null
## Describe the bug TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric dataset = load_dataset("glue", 'cola') ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: - Python version: 3.7 - PyArrow version:
https://api.github.com/repos/huggingface/datasets/issues/2869/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2868/comments
https://api.github.com/repos/huggingface/datasets/issues/2868/events
https://github.com/huggingface/datasets/issues/2868
987,139,146
MDU6SXNzdWU5ODcxMzkxNDY=
2,868
Add Common Objects in 3D (CO3D)
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,630,614,972,000
1,630,614,972,000
null
CONTRIBUTOR
null
null
## Adding a Dataset - **Name:** *Common Objects in 3D (CO3D)* - **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)* - **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)* - **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)* - **Motivation:** *excerpt from above blog post:* > As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences. > > Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model. > Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets/issues/2868/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2867/comments
https://api.github.com/repos/huggingface/datasets/issues/2867/events
https://github.com/huggingface/datasets/pull/2867
986,971,224
MDExOlB1bGxSZXF1ZXN0NzI2MTE3NzAw
2,867
Add CaSiNo dataset
{ "login": "kushalchawla", "id": 8416863, "node_id": "MDQ6VXNlcjg0MTY4NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8416863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kushalchawla", "html_url": "https://github.com/kushalchawla", "followers_url": "https://api.github.com/users/kushalchawla/followers", "following_url": "https://api.github.com/users/kushalchawla/following{/other_user}", "gists_url": "https://api.github.com/users/kushalchawla/gists{/gist_id}", "starred_url": "https://api.github.com/users/kushalchawla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kushalchawla/subscriptions", "organizations_url": "https://api.github.com/users/kushalchawla/orgs", "repos_url": "https://api.github.com/users/kushalchawla/repos", "events_url": "https://api.github.com/users/kushalchawla/events{/privacy}", "received_events_url": "https://api.github.com/users/kushalchawla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.", "Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something?", "Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;)" ]
1,630,602,383,000
1,631,805,174,000
1,631,784,224,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2867", "html_url": "https://github.com/huggingface/datasets/pull/2867", "diff_url": "https://github.com/huggingface/datasets/pull/2867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2867.patch" }
Hi. We would like to request that our dataset be added to the repository. This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
https://api.github.com/repos/huggingface/datasets/issues/2867/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2866/comments
https://api.github.com/repos/huggingface/datasets/issues/2866/events
https://github.com/huggingface/datasets/issues/2866
986,706,676
MDU6SXNzdWU5ODY3MDY2NzY=
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.", "OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)?", "We should definitely support datasets using `pathlib` in streaming mode...\r\n\r\nFor non-supported datasets in streaming mode, we have already a request of raising an error/warning: see #2654.", "Hi @severo, please note that \"counter\" dataset will be streamable (at least until it arrives at the missing file, error already in normal mode) once these PRs are merged:\r\n- #2874\r\n- #2876\r\n- #2880\r\n\r\nI have tested it. 😉 ", "Now (on master), we get:\r\n\r\n```\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n```\r\n\r\nThe error is now the same with or without streaming. 
I close the issue, thanks @albertvillanova and @lhoestq!\r\n", "Note that we might want to open an issue to fix the \"counter\" dataset by itself, but I let it up to you.", "Fixed here: https://github.com/huggingface/datasets/pull/2894. Thanks @albertvillanova " ]
1,630,588,253,000
1,631,291,369,000
1,631,277,085,000
CONTRIBUTOR
null
null
## Describe the bug `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode. ## Steps to reproduce the bug ```python >>> import datasets as ds >>> a = ds.load_dataset('counter', split="train", streaming=False) Using custom data configuration default Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9... Traceback (most recent call last): File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split for key, record in utils.tqdm( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__ for obj in iterable: File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples with derived_file.open(encoding="utf-8") as f: File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open return io.open(self, mode, buffering, encoding, errors, newline, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener return self._accessor.open(self, flags, mode) FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare raise OSError( OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml' ``` ```python >>> import datasets as ds >>> b = ds.load_dataset('counter', split="train", streaming=True) Using custom data configuration default >>> list(b) [] ``` ## Expected results An exception should be raised in streaming mode ## Actual results No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty. ## Environment info - `datasets` version: 1.11.1.dev0 - Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
https://api.github.com/repos/huggingface/datasets/issues/2866/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2865/comments
https://api.github.com/repos/huggingface/datasets/issues/2865/events
https://github.com/huggingface/datasets/pull/2865
986,460,698
MDExOlB1bGxSZXF1ZXN0NzI1NjY1ODgx
2,865
Add MultiEURLEX dataset
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ", "Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.\r\n\r\nI would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. ", "Thanks for the changes :)\r\n\r\nRegarding the labels:\r\n\r\nIf you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.\r\nThe advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.\r\n\r\nLet me know if that sounds good to you or if you still want to stick with the labels as they are now.", "Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages')\r\n# Read strs from the labels (list of integers) for the 1st sample of the training split\r\n```\r\n\r\nI would like to include this in the README file.\r\n\r\nCould you also provide some info on how I could define the supervized key to automate model training, as you said?\r\n\r\nThanks!", "Thanks for the update :)\r\n\r\nHere is an example of usage:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages', split='train')\r\nclasslabel = dataset.features[\"labels\"].feature\r\nprint(dataset[0][\"labels\"])\r\n# [1, 20, 7, 3, 0]\r\nprint(classlabel.int2str(dataset[0][\"labels\"]))\r\n# ['100160', '100155', '100158', '100147', '100149']\r\n```\r\n\r\nThe ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There nothing more to do :p \r\n\r\nI think one last thing to do is just update the `dataset_infos.json` file and we'll be good !", "Everything is ready! 👍 \r\n" ]
1,630,575,744,000
1,631,274,606,000
1,631,274,606,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2865", "html_url": "https://github.com/huggingface/datasets/pull/2865", "diff_url": "https://github.com/huggingface/datasets/pull/2865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2865.patch" }
**Add new MultiEURLEX Dataset** MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
https://api.github.com/repos/huggingface/datasets/issues/2865/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2864/comments
https://api.github.com/repos/huggingface/datasets/issues/2864/events
https://github.com/huggingface/datasets/pull/2864
986,159,438
MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw
2,864
Fix data URL in ToTTo dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/8", "html_url": "https://github.com/huggingface/datasets/milestone/8", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "id": 6968069, "node_id": "MI_kwDODunzps4AalMF", "number": 8, "title": "1.12", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 5, "closed_issues": 1, "state": "open", "created_at": 1626881696000, "updated_at": 1630565260000, "due_on": 1630306800000, "closed_at": null }
[]
1,630,560,308,000
1,630,565,260,000
1,630,565,260,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864", "html_url": "https://github.com/huggingface/datasets/pull/2864", "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch" }
The data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
https://api.github.com/repos/huggingface/datasets/issues/2864/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2863/comments
https://api.github.com/repos/huggingface/datasets/issues/2863/events
https://github.com/huggingface/datasets/pull/2863
986,156,755
MDExOlB1bGxSZXF1ZXN0NzI1MzkwMTkx
2,863
Update dataset URL
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. 😉 " ]
1,630,560,138,000
1,630,570,250,000
1,630,570,250,000
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2863", "html_url": "https://github.com/huggingface/datasets/pull/2863", "diff_url": "https://github.com/huggingface/datasets/pull/2863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2863.patch" }
null
https://api.github.com/repos/huggingface/datasets/issues/2863/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2862/comments
https://api.github.com/repos/huggingface/datasets/issues/2862/events
https://github.com/huggingface/datasets/issues/2862
985,763,001
MDU6SXNzdWU5ODU3NjMwMDE=
2,862
Only retain relevant statistics in certain metrics
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,630,534,690,000
1,630,534,690,000
null
NONE
null
null
**Is your feature request related to a problem? Please describe.** As I understand it, in the `add_batch()` function, the raw predictions and references are kept (in memory?) until `compute()` is called. https://github.com/huggingface/datasets/blob/e248247518140d5b0527ce2843a1a327e2902059/src/datasets/metric.py#L423-L442 This takes O(n) memory. However, for many (most?) metrics, this is not necessary. E.g., for accuracy, only the # correct and # total need to be recorded. **Describe the solution you'd like** One option is an inheritance hierarchy where `"predictions"` and `"references"` are not always the two keys for the final metric computation. Each metric should create and maintain its own relevant statistics, again, for example, `"n_correct"` and `"n_total"` for accuracy. I believe the metrics in AllenNLP (https://github.com/allenai/allennlp/tree/39c40fe38cd2fd36b3465b0b3c031f54ec824160/allennlp/training/metrics) can be used as a good reference. **Describe alternatives you've considered** At least `Metric.compute()` shouldn't hard-code `"predictions"` and `"references"` so that custom subclasses may override this behavior. https://github.com/huggingface/datasets/blob/e248247518140d5b0527ce2843a1a327e2902059/src/datasets/metric.py#L399-L400
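For illustration, a minimal sketch of the proposed design in the spirit of the AllenNLP metrics; the class and method names are hypothetical and not part of the `datasets` API:

```python
class StreamingAccuracy:
    """Accuracy that keeps only its sufficient statistics: O(1) memory."""

    def __init__(self):
        self.n_correct = 0
        self.n_total = 0

    def add_batch(self, predictions, references):
        # Fold each batch into running counts instead of storing raw arrays.
        self.n_correct += sum(int(p == r) for p, r in zip(predictions, references))
        self.n_total += len(references)

    def compute(self):
        return {"accuracy": self.n_correct / self.n_total if self.n_total else 0.0}


metric = StreamingAccuracy()
metric.add_batch(predictions=[1, 0, 1], references=[1, 1, 1])
print(metric.compute())  # {'accuracy': 0.6666666666666666}
```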
https://api.github.com/repos/huggingface/datasets/issues/2862/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2861/comments
https://api.github.com/repos/huggingface/datasets/issues/2861/events
https://github.com/huggingface/datasets/pull/2861
985,081,871
MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw
2,861
fix: 🐛 be more specific when catching exceptions
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n", "Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ", "And what about passing the `timeout` parameter instead?", "It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`", "I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue...", "Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case." ]
1,630,498,692,000
1,630,576,416,000
1,630,576,323,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2861", "html_url": "https://github.com/huggingface/datasets/pull/2861", "diff_url": "https://github.com/huggingface/datasets/pull/2861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2861.patch" }
The same specific exception is caught in other parts of the same function.
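To make the motivation concrete, here is a hedged sketch of the interaction discussed in the comments above; `slow_operation` is a hypothetical stand-in for a long-running `datasets` call, not a real API:

```python
import time

import stopit  # the third-party timeout library mentioned in the discussion


def slow_operation():
    # Hypothetical stand-in for a long-running `datasets` call.
    time.sleep(120)


with stopit.ThreadingTimeout(60) as ctx:
    try:
        slow_operation()
    except FileNotFoundError:
        # Catching only the expected error lets stopit.TimeoutException
        # propagate to the context manager; a broad `except Exception`
        # would swallow it and hide the timeout.
        pass

if ctx.state == ctx.TIMED_OUT:
    raise TimeoutError("operation exceeded 60 seconds")
```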
https://api.github.com/repos/huggingface/datasets/issues/2861/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2860/comments
https://api.github.com/repos/huggingface/datasets/issues/2860/events
https://github.com/huggingface/datasets/issues/2860
985,013,339
MDU6SXNzdWU5ODUwMTMzMzk=
2,860
Cannot download TOTTO dataset
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it." ]
1,630,494,250,000
1,630,565,260,000
1,630,565,260,000
NONE
null
null
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip `datasets version: 1.11.0` # How to reproduce: ```py from datasets import load_dataset dataset = load_dataset('totto') ```
https://api.github.com/repos/huggingface/datasets/issues/2860/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2859/comments
https://api.github.com/repos/huggingface/datasets/issues/2859/events
https://github.com/huggingface/datasets/issues/2859
984,324,500
MDU6SXNzdWU5ODQzMjQ1MDA=
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
null
[]
null
[ "https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/src/datasets/builder.py#L179-L186" ]
1,630,444,264,000
1,630,484,194,000
null
MEMBER
null
null
This does 60,000+ HEAD requests to get all the ETags of all the data files: ```python from datasets import load_dataset load_dataset("allenai/c4", streaming=True) ``` It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). Instead of using the ETags, we could simply use the commit hash of the dataset repository on the hub, as well as the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository).
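A sketch of the proposed alternative; the function name and hashing scheme are illustrative, not the actual `datasets` implementation:

```python
import hashlib


def config_id(commit_sha: str, glob_pattern: str = "*") -> str:
    # The commit sha already pins the exact content of every file in the
    # repository, so one identifier per (revision, file-selection) pair
    # can replace the 60,000+ per-file HEAD requests for ETags.
    payload = f"{commit_sha}:{glob_pattern}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]


# e.g. config_id("4c6b1f...", "*") -> deterministic id for the whole repo
```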
https://api.github.com/repos/huggingface/datasets/issues/2859/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2858/comments
https://api.github.com/repos/huggingface/datasets/issues/2858/events
https://github.com/huggingface/datasets/pull/2858
984,145,568
MDExOlB1bGxSZXF1ZXN0NzIzNjEzNzQ0
2,858
Fix s3fs version in CI
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,630,433,143,000
1,630,935,215,000
1,630,445,391,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2858", "html_url": "https://github.com/huggingface/datasets/pull/2858", "diff_url": "https://github.com/huggingface/datasets/pull/2858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2858.patch" }
The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore. This PR changes the constraints to avoid the new conflicts. In particular, it pins the version of s3fs.
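As a sketch of the kind of change involved (the version bounds below are illustrative assumptions, not the ones used in this PR):

```python
# Hypothetical excerpt from setup.py's test dependencies: pinning s3fs
# keeps its transitive aiobotocore/boto3/botocore requirements stable.
TESTS_REQUIRE = [
    "s3fs==0.4.2",  # illustrative pin; newer releases constrain aiobotocore
    "boto3",        # left unpinned; the s3fs pin keeps them compatible
    "botocore",
]
```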
https://api.github.com/repos/huggingface/datasets/issues/2858/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2857/comments
https://api.github.com/repos/huggingface/datasets/issues/2857/events
https://github.com/huggingface/datasets/pull/2857
984,093,938
MDExOlB1bGxSZXF1ZXN0NzIzNTY5OTE4
2,857
Update: Openwebtext - update size
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI error in unrelated to this PR and fixed on master" ]
1,630,429,863,000
1,631,007,872,000
1,631,007,872,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2857", "html_url": "https://github.com/huggingface/datasets/pull/2857", "diff_url": "https://github.com/huggingface/datasets/pull/2857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2857.patch" }
Update the size of the Openwebtext dataset. I also regenerated the dataset_infos.json, but the data file checksum didn't change, nor did the number of examples (8013769 examples). Related to #2839
https://api.github.com/repos/huggingface/datasets/issues/2857/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2856/comments
https://api.github.com/repos/huggingface/datasets/issues/2856/events
https://github.com/huggingface/datasets/pull/2856
983,876,734
MDExOlB1bGxSZXF1ZXN0NzIzMzg2NzIw
2,856
fix: 🐛 remove URL's query string only if it's ?dl=1
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,630,417,207,000
1,630,419,732,000
1,630,419,732,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2856", "html_url": "https://github.com/huggingface/datasets/pull/2856", "diff_url": "https://github.com/huggingface/datasets/pull/2856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2856.patch" }
A lot of URLs use query strings, for example http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, so we must not remove them when trying to detect the protocol. We thus remove the query string only when it is ?dl=1, which occurs on Dropbox and dl.orangedox.com. Also: add unit tests. See https://github.com/huggingface/datasets/pull/2843 for the original discussion.
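For illustration, a minimal sketch of the intended behavior (the function name is hypothetical; the real logic lives inside the library's download utilities):

```python
from urllib.parse import urlparse


def strip_dl_query(url: str) -> str:
    # Remove the query string only when it is exactly "?dl=1" (Dropbox,
    # dl.orangedox.com); other query strings can carry the real file path,
    # e.g. http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip
    if urlparse(url).query == "dl=1":
        return url[: -len("?dl=1")]
    return url


assert strip_dl_query("https://host/en-ku.txt.zip?dl=1").endswith(".txt.zip")
assert "?f=" in strip_dl_query("http://opus.nlpl.eu/download.php?f=a.zip")
```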
https://api.github.com/repos/huggingface/datasets/issues/2856/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2855/comments
https://api.github.com/repos/huggingface/datasets/issues/2855/events
https://github.com/huggingface/datasets/pull/2855
983,858,229
MDExOlB1bGxSZXF1ZXN0NzIzMzcxMTIy
2,855
Fix windows CI CondaError
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,630,416,122,000
1,630,416,934,000
1,630,416,933,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2855", "html_url": "https://github.com/huggingface/datasets/pull/2855", "diff_url": "https://github.com/huggingface/datasets/pull/2855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2855.patch" }
From this thread: https://github.com/conda/conda/issues/6057 We can fix the conda error ``` CondaError: Cannot link a source that does not exist. C:\Users\...\Anaconda3\Scripts\conda.exe ``` by doing ```bash conda update conda ``` before doing any install in the Windows CI.
https://api.github.com/repos/huggingface/datasets/issues/2855/timeline
null
true