url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4022/comments | https://api.github.com/repos/huggingface/datasets/issues/4022/events | https://github.com/huggingface/datasets/pull/4022 | 1,180,816,682 | PR_kwDODunzps41BNeA | 4,022 | Replace dbpedia_14 data url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,216,041,000 | 1,648,220,617,000 | 1,648,220,329,000 | MEMBER | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4022/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4022",
"html_url": "https://github.com/huggingface/datasets/pull/4022",
"diff_url": "https://github.com/huggingface/datasets/pull/4022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4022.patch",
"merged_at": 1648220329000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4021/comments | https://api.github.com/repos/huggingface/datasets/issues/4021/events | https://github.com/huggingface/datasets/pull/4021 | 1,180,805,092 | PR_kwDODunzps41BLAf | 4,021 | Fix `map` remove_columns on empty dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,215,389,000 | 1,648,561,291,000 | 1,648,560,944,000 | MEMBER | null | On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns:
```python
>>> import datasets
>>> ds = datasets.load_dataset("glue", "rte")
>>> ds_filtered = ds.filter(lambda x: x["label"] != -1)
>>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"])
>>> print(repr(ds_mapped.column_names))
{
'train': ['sentence1', 'sentence2', 'idx'],
'validation': ['sentence1', 'sentence2', 'idx'],
'test': ['sentence1', 'sentence2', 'label', 'idx']
}
```
I fixed this error and updated the tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4021/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4021",
"html_url": "https://github.com/huggingface/datasets/pull/4021",
"diff_url": "https://github.com/huggingface/datasets/pull/4021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4021.patch",
"merged_at": 1648560944000
} | true |
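A minimal, self-contained sketch of the behavior fixed in the PR above, without downloading GLUE; the column names here are illustrative, and the expected output reflects the intended post-fix behavior:
```python
from datasets import Dataset

# Mimic the GLUE "test" split, whose labels are all -1, so the filter empties it.
ds = Dataset.from_dict({"sentence": ["a", "b"], "label": [-1, -1]})
empty = ds.filter(lambda x: x["label"] != -1)  # 0 rows remain

# Before the fix, remove_columns was a no-op on an empty dataset;
# after the fix, "label" should be dropped here too.
mapped = empty.map(lambda x: x, remove_columns=["label"])
print(mapped.column_names)  # expected after the fix: ['sentence']
```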
https://api.github.com/repos/huggingface/datasets/issues/4020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4020/comments | https://api.github.com/repos/huggingface/datasets/issues/4020/events | https://github.com/huggingface/datasets/pull/4020 | 1,180,636,754 | PR_kwDODunzps41Am4R | 4,020 | Replace amazon_polarity data URL | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,205,457,000 | 1,648,220,556,000 | 1,648,220,261,000 | MEMBER | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4020",
"html_url": "https://github.com/huggingface/datasets/pull/4020",
"diff_url": "https://github.com/huggingface/datasets/pull/4020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4020.patch",
"merged_at": 1648220261000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4019/comments | https://api.github.com/repos/huggingface/datasets/issues/4019/events | https://github.com/huggingface/datasets/pull/4019 | 1,180,628,293 | PR_kwDODunzps41AlFk | 4,019 | Make yelp_polarity streamable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of the incomplete dataset card - this is unrelated to the goal of this PR so we can ignore it"
] | 1,648,204,971,000 | 1,648,220,539,000 | 1,648,220,236,000 | MEMBER | null | It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4019/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4019",
"html_url": "https://github.com/huggingface/datasets/pull/4019",
"diff_url": "https://github.com/huggingface/datasets/pull/4019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4019.patch",
"merged_at": 1648220235000
} | true |
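For context, a rough sketch of the pattern this PR describes; the class, URL, member name and fields below are placeholders, not the actual yelp_polarity script. `dl_manager.download` keeps the TAR archive as a single file and `dl_manager.iter_archive` yields `(path, file object)` pairs lazily, which works in streaming mode, unlike `download_and_extract`:
```python
import datasets

_DATA_URL = "https://example.com/yelp_review_polarity_csv.tgz"  # placeholder URL


class YelpPolaritySketch(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_DATA_URL)  # no extraction on disk
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive), "filename": "train.csv"},
            ),
        ]

    def _generate_examples(self, files, filename):
        for path, f in files:  # members are yielded lazily from the TAR
            if path.endswith(filename):
                for idx, line in enumerate(f):
                    yield idx, {"text": line.decode("utf-8")}
```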
https://api.github.com/repos/huggingface/datasets/issues/4018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4018/comments | https://api.github.com/repos/huggingface/datasets/issues/4018/events | https://github.com/huggingface/datasets/pull/4018 | 1,180,622,816 | PR_kwDODunzps41Aj7g | 4,018 | Replace yelp_review_full data url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,204,638,000 | 1,648,220,462,000 | 1,648,220,170,000 | MEMBER | null | I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive.
Close https://github.com/huggingface/datasets/issues/4005 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4018/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4018",
"html_url": "https://github.com/huggingface/datasets/pull/4018",
"diff_url": "https://github.com/huggingface/datasets/pull/4018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4018.patch",
"merged_at": 1648220170000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4017/comments | https://api.github.com/repos/huggingface/datasets/issues/4017/events | https://github.com/huggingface/datasets/pull/4017 | 1,180,595,160 | PR_kwDODunzps41Ad_L | 4,017 | Support streaming scan dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,203,088,000 | 1,648,210,135,000 | 1,648,209,832,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4017",
"html_url": "https://github.com/huggingface/datasets/pull/4017",
"diff_url": "https://github.com/huggingface/datasets/pull/4017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4017.patch",
"merged_at": 1648209832000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4016/comments | https://api.github.com/repos/huggingface/datasets/issues/4016/events | https://github.com/huggingface/datasets/pull/4016 | 1,180,557,828 | PR_kwDODunzps41AWBk | 4,016 | Support streaming blimp dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,201,150,000 | 1,648,207,158,000 | 1,648,206,853,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4016/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4016",
"html_url": "https://github.com/huggingface/datasets/pull/4016",
"diff_url": "https://github.com/huggingface/datasets/pull/4016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4016.patch",
"merged_at": 1648206853000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4015/comments | https://api.github.com/repos/huggingface/datasets/issues/4015/events | https://github.com/huggingface/datasets/issues/4015 | 1,180,510,856 | I_kwDODunzps5GXSqI | 4,015 | Can not correctly parse the classes with imagefolder | {
"login": "YiSyuanChen",
"id": 21264909,
"node_id": "MDQ6VXNlcjIxMjY0OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YiSyuanChen",
"html_url": "https://github.com/YiSyuanChen",
"followers_url": "https://api.github.com/users/YiSyuanChen/followers",
"following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}",
"gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions",
"organizations_url": "https://api.github.com/users/YiSyuanChen/orgs",
"repos_url": "https://api.github.com/users/YiSyuanChen/repos",
"events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/YiSyuanChen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.",
"HI, I have a question. How much time did you load the ImageNet data files? "
] | 1,648,198,277,000 | 1,648,429,323,000 | 1,648,200,476,000 | NONE | null | ## Describe the bug
I tried to load my own image dataset with imagefolder, but the classes are parsed incorrectly.
## Steps to reproduce the bug
I organized my dataset (ImageNet) in the following structure:
```
- imagenet/
- train/
- n01440764/
- ILSVRC2012_val_00000293.jpg
- ......
- n01695060/
- ......
- val/
- n01440764/
- n01695060/
- ......
```
At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as:
```
from datasets import load_dataset
data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'}
ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification")
```
but it resulted in the following error (I have masked my personal path as <PERSONAL_PATH>):
```
FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
Next, I followed a recent issue #3960 to load data as:
```
from datasets import load_dataset
data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']}
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification")
```
and the data can be loaded without error (I copied the val folder into the train folder for illustration):
```
>>> ds
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
val: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
})
```
However, the parsed classes are wrong (there should be 1,000 classes):
```
>>> ds["train"].features
{'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)}
```
## Expected results
I expect the "labels" feature in ds["train"].features to contain 1,000 classes.
## Actual results
The "labels" in ds["train"].features contains only 1 wrong class.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Ubuntu 18.04
- Python version: Python 3.7.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4015/timeline | null | null | null | false |
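For anyone debugging a similar mis-parse, a small sketch for inspecting what imagefolder inferred; the expectation of 1,000 synset classes is an assumption that holds once the images are regular files (the author traced the wrong parse above to symbolic links):
```python
from datasets import load_dataset

data_files = {"train": ["imagenet/train/**"], "val": ["imagenet/val/**"]}
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification")

labels = ds["train"].features["labels"]
print(labels.num_classes)  # expected: 1000 once the class folders are picked up
print(labels.names[:3])    # e.g. ['n01440764', 'n01695060', ...]
```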
https://api.github.com/repos/huggingface/datasets/issues/4014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4014/comments | https://api.github.com/repos/huggingface/datasets/issues/4014/events | https://github.com/huggingface/datasets/pull/4014 | 1,180,481,229 | PR_kwDODunzps41AGBu | 4,014 | Support streaming id_clickbait dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,196,308,000 | 1,648,198,711,000 | 1,648,198,412,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4014/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4014",
"html_url": "https://github.com/huggingface/datasets/pull/4014",
"diff_url": "https://github.com/huggingface/datasets/pull/4014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4014.patch",
"merged_at": 1648198412000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4013/comments | https://api.github.com/repos/huggingface/datasets/issues/4013/events | https://github.com/huggingface/datasets/issues/4013 | 1,180,427,174 | I_kwDODunzps5GW-Om | 4,013 | Cannot preview "hazal/Turkish-Biomedical-corpus-trM" | {
"login": "hazalturkmen",
"id": 42860397,
"node_id": "MDQ6VXNlcjQyODYwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hazalturkmen",
"html_url": "https://github.com/hazalturkmen",
"followers_url": "https://api.github.com/users/hazalturkmen/followers",
"following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}",
"gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions",
"organizations_url": "https://api.github.com/users/hazalturkmen/orgs",
"repos_url": "https://api.github.com/users/hazalturkmen/repos",
"events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hazalturkmen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. However, your data file does not have any extension.\r\n\r\nNote that current supported data file extensions are: 'csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'.\r\n\r\nYou have more info on our docs: [How to share a dataset](https://huggingface.co/docs/datasets/share).",
"thanks for reply :)"
] | 1,648,192,322,000 | 1,649,059,501,000 | 1,648,217,771,000 | NONE | null | ## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM'
**Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM*
*I cannot see the dataset preview.*
```
Server Error
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true
```
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4013/timeline | null | null | null | false |
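As a hedged illustration of the point above: once the data file carries a recognized extension, the format can be inferred without a loading script. The file name below assumes the corpus has been renamed locally to `tr_article_2.txt`:
```python
from datasets import load_dataset

# Plain text with a .txt extension is picked up by the generic "text" builder.
ds = load_dataset("text", data_files={"train": "tr_article_2.txt"})
print(ds["train"][0])
```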
https://api.github.com/repos/huggingface/datasets/issues/4012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4012/comments | https://api.github.com/repos/huggingface/datasets/issues/4012/events | https://github.com/huggingface/datasets/pull/4012 | 1,180,350,083 | PR_kwDODunzps40_qgo | 4,012 | Rename wer to cer | {
"login": "pmgautam",
"id": 28428143,
"node_id": "MDQ6VXNlcjI4NDI4MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28428143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmgautam",
"html_url": "https://github.com/pmgautam",
"followers_url": "https://api.github.com/users/pmgautam/followers",
"following_url": "https://api.github.com/users/pmgautam/following{/other_user}",
"gists_url": "https://api.github.com/users/pmgautam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmgautam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmgautam/subscriptions",
"organizations_url": "https://api.github.com/users/pmgautam/orgs",
"repos_url": "https://api.github.com/users/pmgautam/repos",
"events_url": "https://api.github.com/users/pmgautam/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmgautam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,184,765,000 | 1,648,475,845,000 | 1,648,475,845,000 | CONTRIBUTOR | null | The `wer` variable was changed to `cer` in the README file.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4012/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4012",
"html_url": "https://github.com/huggingface/datasets/pull/4012",
"diff_url": "https://github.com/huggingface/datasets/pull/4012.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4012.patch",
"merged_at": 1648475845000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4011/comments | https://api.github.com/repos/huggingface/datasets/issues/4011/events | https://github.com/huggingface/datasets/pull/4011 | 1,179,885,965 | PR_kwDODunzps40-Ho0 | 4,011 | Fix SQuAD v2 metric docs on `references` format | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_4011). All of your documentation changes will be reflected on that endpoint."
] | 1,648,146,430,000 | 1,648,147,079,000 | null | CONTRIBUTOR | null | `references` is not a list of dictionaries but a dictionary that has a list in its values. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4011/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4011",
"html_url": "https://github.com/huggingface/datasets/pull/4011",
"diff_url": "https://github.com/huggingface/datasets/pull/4011.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4011.patch",
"merged_at": null
} | true |
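Reading the description above as referring to the `answers` field of each reference, a sketch of the format the squad_v2 metric accepts in practice (ids and offsets are made up): `answers` is a single dict whose values are lists, not a list of dicts.
```python
from datasets import load_metric

squad_v2 = load_metric("squad_v2")

predictions = [{"id": "q1", "prediction_text": "1976", "no_answer_probability": 0.0}]
references = [
    # 'answers' is one dict of lists, not a list of {'text': ..., 'answer_start': ...} dicts
    {"id": "q1", "answers": {"text": ["1976"], "answer_start": [97]}}
]

print(squad_v2.compute(predictions=predictions, references=references))
```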
https://api.github.com/repos/huggingface/datasets/issues/4010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4010/comments | https://api.github.com/repos/huggingface/datasets/issues/4010/events | https://github.com/huggingface/datasets/pull/4010 | 1,179,848,036 | PR_kwDODunzps409_QV | 4,010 | Fix None issue with Sequence of dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging since I'd like do do a patch release soon for this one"
] | 1,648,144,739,000 | 1,648,462,433,000 | 1,648,462,120,000 | MEMBER | null | `Features.encode_example` currently fails if it contains a sequence of dict like `Sequence({"subcolumn": Value("int32")})` and if `None` is passed instead of the dict.
```python
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example
return encode_nested_example(self, example)
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in encode_nested_example
return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in <dictcomp>
return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 998, in encode_nested_example
for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj):
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in <genexpr>
yield key, tuple(d[key] for d in dicts)
TypeError: 'NoneType' object is not subscriptable
```
I fixed this issue and updated the tests (this case was missing in the tests) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4010/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4010",
"html_url": "https://github.com/huggingface/datasets/pull/4010",
"diff_url": "https://github.com/huggingface/datasets/pull/4010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4010.patch",
"merged_at": 1648462120000
} | true |
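A minimal sketch of the failing case described above; the feature name is taken from the PR description and the post-fix expectation is an assumption:
```python
from datasets import Features, Sequence, Value

features = Features({"a": Sequence({"subcolumn": Value("int32")})})

# Encoding a regular example works.
print(features.encode_example({"a": {"subcolumn": [1, 2]}}))

# Passing None for the sequence-of-dict column used to raise
# "TypeError: 'NoneType' object is not subscriptable"; after the fix it should encode cleanly.
print(features.encode_example({"a": None}))
```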
https://api.github.com/repos/huggingface/datasets/issues/4009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4009/comments | https://api.github.com/repos/huggingface/datasets/issues/4009/events | https://github.com/huggingface/datasets/issues/4009 | 1,179,658,611 | I_kwDODunzps5GUClz | 4,009 | AMI load_dataset error: sndfile library not found | {
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)"
] | 1,648,134,818,000 | 1,648,136,798,000 | 1,648,135,049,000 | NONE | null | ## Describe the bug
Getting an error message when loading the AMI dataset.
## Steps to reproduce the bug
`python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
`
## Expected results
A clear and concise description of the expected results.
## Actual results
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
use_auth_token=use_auth_token,
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4009/timeline | null | null | null | false |
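The "sndfile library not found" error usually means the libsndfile system library is missing rather than anything AMI-specific; a quick check, assuming the `soundfile` binding is what the audio decoding relies on here, is:
```python
# If this import fails with "sndfile library not found", install the OS package
# (e.g. libsndfile1 on Debian/Ubuntu), reinstall the binding with `pip install soundfile`,
# and then retry load_dataset("ami", ...).
import soundfile as sf

print(sf.__libsndfile_version__)
```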
https://api.github.com/repos/huggingface/datasets/issues/4008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4008/comments | https://api.github.com/repos/huggingface/datasets/issues/4008/events | https://github.com/huggingface/datasets/pull/4008 | 1,179,591,068 | PR_kwDODunzps409Ixp | 4,008 | Support streaming daily_dialog dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yay! I love this dataset!"
] | 1,648,131,803,000 | 1,648,135,741,000 | 1,648,133,218,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4008/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4008",
"html_url": "https://github.com/huggingface/datasets/pull/4008",
"diff_url": "https://github.com/huggingface/datasets/pull/4008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4008.patch",
"merged_at": 1648133218000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4007/comments | https://api.github.com/repos/huggingface/datasets/issues/4007/events | https://github.com/huggingface/datasets/issues/4007 | 1,179,381,021 | I_kwDODunzps5GS-0d | 4,007 | set_format does not work with multi dimension tensor | {
"login": "phihung",
"id": 5902432,
"node_id": "MDQ6VXNlcjU5MDI0MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phihung",
"html_url": "https://github.com/phihung",
"followers_url": "https://api.github.com/users/phihung/followers",
"following_url": "https://api.github.com/users/phihung/following{/other_user}",
"gists_url": "https://api.github.com/users/phihung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phihung/subscriptions",
"organizations_url": "https://api.github.com/users/phihung/orgs",
"repos_url": "https://api.github.com/users/phihung/repos",
"events_url": "https://api.github.com/users/phihung/events{/privacy}",
"received_events_url": "https://api.github.com/users/phihung/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n",
"Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.\r\n```\r\ndataset = load_dataset(\"my_dataset.py\")\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndict_dataset_test = dataset[\"test\"].to_dict()\r\n...\r\ndataset_test = Dataset.from_dict(dict_dataset_test, features=Features(features))\r\n```\r\nHowever, converting a `Dataset` object to a dict takes quite a lot of time and memory... Is there a way to directly create an `Array2D` without having to transform the original `Dataset` to a dict?",
"Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example), features=Features(features))\r\n```\r\nOr you can use `cast` after `map` to do that:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndataset = dataset.cast(Features(features))\r\n```",
"Fantastic thank you @mariosasko\r\nThe first option you suggested is indeed way faster π "
] | 1,648,121,263,000 | 1,648,625,337,000 | 1,648,132,769,000 | NONE | null | ## Describe the bug
`set_format` only converts the last dimension of a multi-dimensional list into a tensor.
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result
ds = ds.with_format("torch")
print(ds[0])
```
## Expected results
```
{'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]}
```
## Actual results
```
{'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- datasets version: 2.0.0
- Platform: Mac OSX
- Python version: 3.8.12
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4007/timeline | null | null | null | false |
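Putting the suggested workaround together, a small sketch; the expected shape is an assumption based on the comment above:
```python
from datasets import Array2D, Dataset, Features

ds = Dataset.from_dict(
    {"A": [[[0.1, 0.2], [0.3, 0.4]]]},  # one 2x2 matrix
    features=Features({"A": Array2D(shape=(2, 2), dtype="float32")}),
)
ds = ds.with_format("torch")

print(ds[0]["A"])        # a single 2x2 tensor rather than a list of 1D tensors
print(ds[0]["A"].shape)  # expected: torch.Size([2, 2])
```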
https://api.github.com/repos/huggingface/datasets/issues/4006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4006/comments | https://api.github.com/repos/huggingface/datasets/issues/4006/events | https://github.com/huggingface/datasets/pull/4006 | 1,179,367,195 | PR_kwDODunzps408YnW | 4,006 | Use audio feature in ASR task template | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,120,522,000 | 1,648,142,369,000 | 1,648,140,482,000 | MEMBER | null | The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column.
I changed that and updated all the datasets as well as the tests.
The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero usage unfortunately (probably because users load the duplicate `multilingual_librispeech` directly instead), but it means we can update it.
(this makes me think that we should deprecate `multilingual_librispeech` and redirect users to `facebook/multilingual_librispeech`).
This PR is also useful for the AudioFolder in https://github.com/huggingface/datasets/pull/3963 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4006/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4006",
"html_url": "https://github.com/huggingface/datasets/pull/4006",
"diff_url": "https://github.com/huggingface/datasets/pull/4006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4006.patch",
"merged_at": 1648140482000
} | true |
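As a rough illustration of what the change means for users; the dataset name and the resulting column names below are assumptions, not taken from the PR:
```python
from datasets import load_dataset

ds = load_dataset("common_voice", "tr", split="train")

# prepare_for_task applies the dataset's AutomaticSpeechRecognition template;
# after this change the input is assumed to be the decoded "audio" column
# rather than a file-path column.
ds = ds.prepare_for_task("automatic-speech-recognition")
print(ds.column_names)  # assumed: ['audio', 'transcription']
```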
https://api.github.com/repos/huggingface/datasets/issues/4005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4005/comments | https://api.github.com/repos/huggingface/datasets/issues/4005/events | https://github.com/huggingface/datasets/issues/4005 | 1,179,365,663 | I_kwDODunzps5GS7Ef | 4,005 | Yelp not working | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.97MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nDownloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /home/slesage/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...\r\nDownloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.10k/1.10k [00:00<00:00, 1.39MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 676, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']\r\n\r\n>>> # with streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD, streaming=True)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.53MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 375, in _info\r\n await _file_info(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 736, in _file_info\r\n r.raise_for_status()\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/aiohttp/client_reqrep.py\", line 1000, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://doc-0g-bs-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/gklhpdq1arj8v15qrg7ces34a8c3413d/1648144575000/07511006523564980941/*/0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0?e=download')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1677, in load_dataset\r\n return builder_instance.as_streaming_dataset(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 906, in 
as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/yelp_review_full/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43/yelp_review_full.py\", line 102, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 800, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 778, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/py_utils.py\", line 306, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 783, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 372, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/core.py\", line 102, in __enter__\r\n f = self.fs.open(self.path, mode=mode)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/spec.py\", line 978, in open\r\n f = self._open(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 335, in _open\r\n size = size or self.info(path, **kwargs)[\"size\"]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 88, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 69, in sync\r\n raise result[0]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 388, in _info\r\n raise FileNotFoundError(url) from exc\r\nFileNotFoundError: https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0&confirm=t\r\n```\r\n\r\nAnd this is before even trying to access the rows with\r\n\r\n```python\r\n>>> rows = list(itertools.islice(dataset, 100))\r\n>>> rows = list(dataset.take(100))\r\n```",
"Yet another issue related to google drive not being nice. Most likely your IP has been banned from using their API programmatically. Do you know if we are allowed to host and redistribute the data ourselves ?",
"Hi,\r\n\r\nFacing the same issue while loading the dataset: \r\n\r\n`Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files`\r\n\r\nThanks",
"> Facing the same issue while loading the dataset:\r\n> \r\n> Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files\r\n\r\nThanks for reporting. I think this is the same issue. Feel free to try again later, once Google Drive stopped blocking you. You can retry by passing `download_mode=\"force_redownload\"` to `load_dataset`",
"I noticed that FastAI hosts the Yelp dataset at https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz (from their catalog [here](https://course.fast.ai/datasets))\r\n\r\nLet's update the yelp dataset script to download from there instead of Google Drive",
"I updated the link to not use Google Drive anymore, we will do a release early next week with the updated download url of the dataset :)"
] | 1,648,120,440,000 | 1,648,220,397,000 | 1,648,220,170,000 | MEMBER | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset? No
A seemingly identical copy of the dataset, https://huggingface.co/datasets/SetFit/yelp_review_full, works. The original one, https://huggingface.co/datasets/yelp_review_full, has > 20K downloads.
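For reference, the resolution described in the comments was to stop downloading from Google Drive and point the loading script at the FastAI-hosted copy instead. A minimal, illustrative sketch of that change (the constant name is an assumption, not the actual script contents):

```python
# Illustrative only: replace the Google Drive link with the FastAI mirror
# mentioned in the comments. The constant name is a placeholder.
_DOWNLOAD_URL = "https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz"
```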
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4005/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4004/comments | https://api.github.com/repos/huggingface/datasets/issues/4004/events | https://github.com/huggingface/datasets/pull/4004 | 1,179,320,795 | PR_kwDODunzps408Onj | 4,004 | ASSIN 2 dataset: replace broken Google Drive _URLS by links on github | {
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,118,259,000 | 1,648,476,106,000 | 1,648,475,799,000 | CONTRIBUTOR | null | Closes #4003 .
Fixes the checksum error. Replaces the Google Drive URLs with the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4004/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4004",
"html_url": "https://github.com/huggingface/datasets/pull/4004",
"diff_url": "https://github.com/huggingface/datasets/pull/4004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4004.patch",
"merged_at": 1648475799000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4003/comments | https://api.github.com/repos/huggingface/datasets/issues/4003/events | https://github.com/huggingface/datasets/issues/4003 | 1,179,286,877 | I_kwDODunzps5GSn1d | 4,003 | ASSIN2 dataset checksum bug | {
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,648,116,530,000 | 1,648,475,799,000 | 1,648,475,799,000 | CONTRIBUTOR | null | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952, #3942, #3941, etc.
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
[<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>()
----> 1 load_dataset('assin2')
4 frames
[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download']
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("assin2")
```
## Expected results
Load the dataset.
## Actual results
The dataset won't load.
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Google Colab
- Python version: 3.7.12
- PyArrow version: 6.0.1
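As a stopgap while the Google Drive link was misbehaving, a hedged workaround (the same retry the maintainers suggested for the related yelp issue) is to force a fresh download; the proper fix was replacing the URLs in #4004:

```python
# Hedged workaround sketch, not the real fix: retry with a forced re-download
# once Google Drive stops blocking the file. PR #4004 later replaced the
# Google Drive URLs with GitHub-hosted copies.
from datasets import load_dataset

dataset = load_dataset("assin2", download_mode="force_redownload")
```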
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4003/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4002/comments | https://api.github.com/repos/huggingface/datasets/issues/4002/events | https://github.com/huggingface/datasets/pull/4002 | 1,179,263,787 | PR_kwDODunzps408Cfp | 4,002 | Support streaming conll2012_ontonotesv5 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,115,396,000 | 1,648,119,221,000 | 1,648,118,927,000 | MEMBER | null | Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4002",
"html_url": "https://github.com/huggingface/datasets/pull/4002",
"diff_url": "https://github.com/huggingface/datasets/pull/4002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4002.patch",
"merged_at": 1648118927000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4001/comments | https://api.github.com/repos/huggingface/datasets/issues/4001/events | https://github.com/huggingface/datasets/issues/4001 | 1,179,231,418 | I_kwDODunzps5GSaS6 | 4,001 | How to use generate this multitask dataset for SQUAD? I am getting a value error. | {
"login": "gsk1692",
"id": 1963097,
"node_id": "MDQ6VXNlcjE5NjMwOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsk1692",
"html_url": "https://github.com/gsk1692",
"followers_url": "https://api.github.com/users/gsk1692/followers",
"following_url": "https://api.github.com/users/gsk1692/following{/other_user}",
"gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions",
"organizations_url": "https://api.github.com/users/gsk1692/orgs",
"repos_url": "https://api.github.com/users/gsk1692/repos",
"events_url": "https://api.github.com/users/gsk1692/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsk1692/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.",
"Thank You! Was able to solve with the help of this.",
"But I request you to please fix the same in the dataset hub explorer as well...",
"May I ask how to get this dataset?"
] | 1,648,113,711,000 | 1,648,288,101,000 | 1,648,265,743,000 | NONE | null | ## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
*short description of the issue*
I am trying to generate the multitask dataset for the SQuAD dataset. However, it gives the error in the dataset explorer as well as on my local machine.
I tried the command: `dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format')`
Error:
Status code: 400
Exception: TypeError
Message: argument of type 'Value' is not iterable
Kindly advise.
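For reference, the fix suggested in the comments was to replace the legacy `nlp.<obj>` references in the dataset script with `datasets.<obj>`. A hedged, purely illustrative before/after (the feature names are examples, not the actual script contents):

```python
# Illustrative sketch of the suggested fix: swap legacy `nlp` references for `datasets`.
# before (in the dataset script):
#   import nlp
#   features = nlp.Features({"question": nlp.Value("string")})
# after:
import datasets

features = datasets.Features({"question": datasets.Value("string")})
```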
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4001/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4000/comments | https://api.github.com/repos/huggingface/datasets/issues/4000/events | https://github.com/huggingface/datasets/issues/4000 | 1,178,844,616 | I_kwDODunzps5GQ73I | 4,000 | load_dataset error: sndfile library not found | {
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third-party dependency on `torchaudio`.\r\n\r\nYou have all the information in our docs: https://huggingface.co/docs/datasets/audio_process#installation",
"Thanks @albertvillanova . Unfortunately the error persists after installing ```datasets[audio]```. Can you direct towards a solution?\r\n\r\n```\r\npip3 install datasets[audio]\r\n```\r\n### log\r\nRequirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)\r\nRequirement already satisfied: numpy>=1.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.21.5)\r\nRequirement already satisfied: xxhash in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.0.0)\r\nRequirement already satisfied: fsspec[http]>=2021.05.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2022.2.0)\r\nRequirement already satisfied: dill in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.3.4)\r\nRequirement already satisfied: pandas in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.3.5)\r\nRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.4.0)\r\nRequirement already satisfied: packaging in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (21.3)\r\nRequirement already satisfied: multiprocess in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.70.12.2)\r\nRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (7.0.0)\r\nRequirement already satisfied: tqdm>=4.62.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.63.1)\r\nRequirement already satisfied: aiohttp in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.8.1)\r\nRequirement already satisfied: importlib-metadata in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.11.3)\r\nRequirement already satisfied: requests>=2.19.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2.27.1)\r\nRequirement already satisfied: librosa in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.9.1)\r\nRequirement already satisfied: pyyaml in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (4.1.1)\r\nRequirement already satisfied: filelock in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (3.6.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from packaging->datasets[audio]) (3.0.7)\r\nRequirement already satisfied: idna<4,>=2.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (3.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (1.26.9)\r\nRequirement already satisfied: attrs>=17.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
aiohttp->datasets[audio]) (21.4.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.3.0)\r\nRequirement already satisfied: aiosignal>=1.1.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.2.0)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.7.2)\r\nRequirement already satisfied: asynctest==0.13.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (0.13.0)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (6.0.2)\r\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (4.0.2)\r\nRequirement already satisfied: zipp>=0.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from importlib-metadata->datasets[audio]) (3.7.0)\r\nRequirement already satisfied: decorator>=4.0.10 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (5.1.1)\r\nRequirement already satisfied: soundfile>=0.10.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.55.1)\r\nRequirement already satisfied: pooch>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.6.0)\r\nRequirement already satisfied: resampy>=0.2.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.2.2)\r\nRequirement already satisfied: audioread>=2.1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.1.0)\r\nRequirement already satisfied: scipy>=1.2.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.7.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.0.2)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2022.1)\r\nRequirement already satisfied: setuptools in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (0.38.0)\r\nRequirement already satisfied: appdirs>=1.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa->datasets[audio]) (1.4.4)\r\nRequirement already satisfied: six>=1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets[audio]) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa->datasets[audio]) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
soundfile>=0.10.2->librosa->datasets[audio]) (1.15.0)\r\nRequirement already satisfied: pycparser in ./.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa->datasets[audio]) (2.21)\r\n\r\n### reload\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### log\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\nsndfile library not found\r\n\r\n### just to double-check as per your docs\r\n```\r\npip3 install librosa torchaudio\r\n```\r\n\r\n### logs\r\nRequirement already satisfied: librosa in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.9.1)\r\nRequirement already satisfied: torchaudio in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.11.0+cu113)\r\nRequirement already satisfied: audioread>=2.1.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.1.0)\r\nRequirement already satisfied: packaging>=20.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (21.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.0.2)\r\nRequirement already satisfied: scipy>=1.2.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.7.3)\r\nRequirement already satisfied: decorator>=4.0.10 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (5.1.1)\r\nRequirement already satisfied: resampy>=0.2.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.2.2)\r\nRequirement already satisfied: pooch>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.6.0)\r\nRequirement already satisfied: numpy>=1.17.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.21.5)\r\nRequirement already satisfied: soundfile>=0.10.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.55.1)\r\nRequirement already satisfied: torch==1.11.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torchaudio) (1.11.0+cu113)\r\nRequirement already satisfied: typing-extensions in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torch==1.11.0->torchaudio) (4.1.1)\r\nRequirement already satisfied: setuptools in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (0.38.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from packaging>=20.0->librosa) (3.0.7)\r\nRequirement already satisfied: requests>=2.19.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (2.27.1)\r\nRequirement already satisfied: appdirs>=1.3.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (1.4.4)\r\nRequirement already satisfied: six>=1.3 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from resampy>=0.2.2->librosa) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa) (1.15.0)\r\nRequirement already satisfied: pycparser in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa) (2.21)\r\nRequirement 
already satisfied: charset-normalizer~=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2.0.12)\r\nRequirement already satisfied: certifi>=2017.4.17 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2021.10.8)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (1.26.9)\r\nRequirement already satisfied: idna<4,>=2.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (3.3)\r\n\r\n### try loading again\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### same error\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nsndfile library not found\r\n",
"Hi @i-am-neo, thanks again for your detailed report.\r\n\r\nOur `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:\r\n```shell\r\npip install datasets[audio]\r\n```\r\n\r\nHowever, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends on a non-Python package called `sndfile`. \r\n\r\nOn Linux (which is your case), this must be installed manually using your operating system package manager, for example:\r\n```shell\r\nsudo apt-get install libsndfile1\r\n```\r\n\r\nPlease, let me know if this works and if so, I will update our docs with all this information.",
"@albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously."
] | 1,648,086,752,000 | 1,648,230,813,000 | 1,648,230,813,000 | NONE | null | ## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
## Actual results
Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...
AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1.
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 136/136 [00:00<00:00, 36004.88it/s]
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 136/136 [00:01<00:00, 79.10it/s]
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 18/18 [00:00<00:00, 25343.23it/s]
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 18/18 [00:00<00:00, 2874.78it/s]
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 16/16 [00:00<00:00, 27950.38it/s]
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 16/16 [00:00<00:00, 2892.25it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
use_auth_token=use_auth_token,
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4000/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3999/comments | https://api.github.com/repos/huggingface/datasets/issues/3999/events | https://github.com/huggingface/datasets/pull/3999 | 1,178,685,280 | PR_kwDODunzps406WN_ | 3,999 | Docs maintenance | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,070,853,000 | 1,648,659,705,000 | 1,648,659,398,000 | MEMBER | null | This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3999/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3999",
"html_url": "https://github.com/huggingface/datasets/pull/3999",
"diff_url": "https://github.com/huggingface/datasets/pull/3999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3999.patch",
"merged_at": 1648659398000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3998/comments | https://api.github.com/repos/huggingface/datasets/issues/3998/events | https://github.com/huggingface/datasets/pull/3998 | 1,178,631,986 | PR_kwDODunzps406KyA | 3,998 | Fix Audio.encode_example() when writing an array | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova do you think [this line](https://github.com/huggingface/datasets/pull/3998/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R67) is enough? that's why we missed this bug, we didn't check this case"
] | 1,648,067,533,000 | 1,648,563,704,000 | 1,648,563,373,000 | CONTRIBUTOR | null | Closes #3996 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3998",
"html_url": "https://github.com/huggingface/datasets/pull/3998",
"diff_url": "https://github.com/huggingface/datasets/pull/3998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3998.patch",
"merged_at": 1648563373000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3997/comments | https://api.github.com/repos/huggingface/datasets/issues/3997/events | https://github.com/huggingface/datasets/pull/3997 | 1,178,566,568 | PR_kwDODunzps4058xr | 3,997 | Sync Features dictionaries | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3997). All of your documentation changes will be reflected on that endpoint."
] | 1,648,063,431,000 | 1,648,064,060,000 | null | CONTRIBUTOR | null | This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731).
A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delitem__` (see the rough sketch after this list), but this PR doesn't implement it for the following reasons:
* it requires replacing all occurrences of `isinstance(obj, dict)` in `features.py` with `isinstance(obj, Mapping)`, which is five times slower than `isinstance(obj, dict)` on my machine
* it is a breaking change, i.e., `isinstance(Features(...), dict)` would return `False` after it
* IMO, it makes sense to be consistent in the user-facing API and subclass either `dict` or `UserDict`. The problem with the latter is that it can't be used for `DatasetDict` because `DatasetDict` exposes the `data` property, which is also used by `UserDict`, so this would result in a collision.
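For illustration only (this is not what this PR implements, and the names are indicative), the sync-on-mutation idea would look roughly like this:

```python
# Rough illustration of keeping the secondary dict in sync via overridden
# mutators; names are indicative only. Note that bulk methods such as
# dict.update() bypass these overrides on a plain dict subclass, which is one
# reason UserDict comes up in the discussion above.
class SyncedFeatures(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._column_requires_decoding = {
            name: getattr(feature, "decode", False) for name, feature in self.items()
        }

    def __setitem__(self, column_name, feature):
        super().__setitem__(column_name, feature)
        self._column_requires_decoding[column_name] = getattr(feature, "decode", False)

    def __delitem__(self, column_name):
        super().__delitem__(column_name)
        del self._column_requires_decoding[column_name]
```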
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3997",
"html_url": "https://github.com/huggingface/datasets/pull/3997",
"diff_url": "https://github.com/huggingface/datasets/pull/3997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3997.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3996/comments | https://api.github.com/repos/huggingface/datasets/issues/3996/events | https://github.com/huggingface/datasets/issues/3996 | 1,178,415,905 | I_kwDODunzps5GPTMh | 3,996 | Audio.encode_example() throws an error when writing example from array | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Good catch ! Yes I think passing `format=\"wav\"` is the right thing to do",
"Thanks @polinaeterna for reporting this issue.\r\n\r\nIn relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio. But yes, nice to give an alternative to non-torchaudio users (with a big warning on performance).",
"> I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio.\r\n\r\nYeah, I know, but as far as I understand, some users just categorically don't want to have torchaudio in their environment. Anyway, it's just a more or less random example, they can use any library they like following the same logic (I'm just not a big expert in decoding utils so if you can give me some presentation / resources about that I would really appreciate it π€)"
] | 1,648,055,507,000 | 1,648,563,373,000 | 1,648,563,373,000 | CONTRIBUTOR | null | ## Describe the bug
When trying to do `Audio().encode_example()` with a preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws an error:
`TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>`
## Steps to reproduce the bug
### Sample code to reproduce the bug
```python
# download sample file
!wget https://huggingface.co/datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3
arr, sr = librosa.load("common_voice_vi_21824030.mp3")
Audio().encode_example({
"path": "common_voice_vi_21824030.mp3",
"array": arr,
"sampling_rate":sr
})
```
## Expected results
An encoded example (`{"bytes": b'....', "path": 'path'}`)
## Actual results
```python
TypeError Traceback (most recent call last)
Input In [3], in <module>
1 arr, sr = librosa.load("common_voice_vi_21824030.mp3")
----> 3 Audio().encode_example({
4 "path": "common_voice_vi_21824030.mp3",
5 "array": arr,
6 "sampling_rate":sr
7 })
File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value)
73 elif isinstance(value, dict) and "array" in value:
74 buffer = BytesIO()
---> 75 sf.write(buffer, value["array"], value["sampling_rate"])
76 return {"bytes": buffer.getvalue(), "path": value.get("path")}
77 elif value.get("bytes") is not None or value.get("path") is not None:
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd)
312 else:
313 channels = data.shape[1]
--> 314 with SoundFile(file, 'w', samplerate, channels,
315 subtype, endian, format, closefd) as f:
316 f.write(data)
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
625 mode_int = _check_mode(mode)
626 self._mode = mode
--> 627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian)
1414 original_format = format
1415 if format is None:
-> 1416 format = _get_format_from_filename(file, mode)
1417 assert isinstance(format, (_unicode, str))
1418 else:
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode)
1455 pass
1456 if format.upper() not in _formats and 'r' not in mode:
-> 1457 raise TypeError("No format specified and unable to get format from "
1458 "file extension: {0!r}".format(file))
1459 return format
TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets master
- Platform: Ubuntu 20.04
- Python version: python 3.8.12
- PyArrow version: 6.0.1
## Solution
I guess we just need to add `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75) like this:
```python
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
```
BTW, I discovered this when trying to decode audio in MP3 format without torchaudio (this would be useful for TensorFlow users), like this:
```python
from datasets import load_dataset, Features, Audio
ds = load_dataset("common_voice", "vi", split="test")
ds = ds.remove_columns("audio")
ds = ds.select(range(3)) # 3 samples just for testing
def load_mp3_with_librosa(example):
arr, sr = librosa.load(example["path"])
example["audio"] = {
"path": example["path"],
"array": arr,
"sampling_rate": sr
}
return example
updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example),
features=Features(
{"audio": Audio(decode=False)}
))
```
@lhoestq @mariosasko @albertvillanova am I right in my logic? do we agree that we can set wav as the format? π€ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3996/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3996/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3995/comments | https://api.github.com/repos/huggingface/datasets/issues/3995/events | https://github.com/huggingface/datasets/pull/3995 | 1,178,232,623 | PR_kwDODunzps404054 | 3,995 | Close `PIL.Image` file handler in `Image.decode_example` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,047,108,000 | 1,648,059,892,000 | 1,648,059,567,000 | CONTRIBUTOR | null | Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error.
To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926.
Fix #3985
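For readers unfamiliar with the PIL detail involved, a rough sketch of the general pattern (not the exact diff in this PR):

```python
# Rough sketch, not the PR's exact code: eagerly loading the pixel data lets
# PIL release the file handle it opened for a path-based image, so decoding
# many examples does not hit "Too many open files".
import PIL.Image

def decode_image(path):
    image = PIL.Image.open(path)
    image.load()  # reads the data; PIL then closes its exclusive file pointer
    return image
```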
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3995",
"html_url": "https://github.com/huggingface/datasets/pull/3995",
"diff_url": "https://github.com/huggingface/datasets/pull/3995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3995.patch",
"merged_at": 1648059566000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3994/comments | https://api.github.com/repos/huggingface/datasets/issues/3994/events | https://github.com/huggingface/datasets/pull/3994 | 1,178,211,138 | PR_kwDODunzps404wWu | 3,994 | Change audio column from string path to Audio feature in ASR task | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,046,092,000 | 1,648,050,223,000 | 1,648,050,223,000 | CONTRIBUTOR | null | Will fix #3990 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3994/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3994",
"html_url": "https://github.com/huggingface/datasets/pull/3994",
"diff_url": "https://github.com/huggingface/datasets/pull/3994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3994.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3993/comments | https://api.github.com/repos/huggingface/datasets/issues/3993/events | https://github.com/huggingface/datasets/issues/3993 | 1,178,201,495 | I_kwDODunzps5GOe2X | 3,993 | Streaming dataset + interleave + DataLoader hangs with multiple workers | {
"login": "jpilaul",
"id": 614861,
"node_id": "MDQ6VXNlcjYxNDg2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/614861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpilaul",
"html_url": "https://github.com/jpilaul",
"followers_url": "https://api.github.com/users/jpilaul/followers",
"following_url": "https://api.github.com/users/jpilaul/following{/other_user}",
"gists_url": "https://api.github.com/users/jpilaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpilaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpilaul/subscriptions",
"organizations_url": "https://api.github.com/users/jpilaul/orgs",
"repos_url": "https://api.github.com/users/jpilaul/repos",
"events_url": "https://api.github.com/users/jpilaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpilaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Same thing occurs when streaming files loaded from disk.",
"Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :)",
"Hi, thanks for your reply. It seems related :)"
] | 1,648,045,649,000 | 1,648,562,585,000 | null | NONE | null | ## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` in streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader
en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True)
it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True)
de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True)
multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset])
multilingual_dataset = multilingual_dataset.with_format('torch')
next(iter(multilingual_dataset)) # works fairly fast
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4)
for batch in dataloader:
    print(len(batch)) # prints nothing after 30 min of waiting
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0)
for batch in dataloader:
    print(len(batch)) # prints right away
```
## Expected results
It should be able to iterate the dataset with multiple workers.
## Actual results
It prints results with `next(iter(multilingual_dataset))` and with `num_workers=0`, but it prints nothing with `num_workers=4` or any number above 0.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- `pytorch` version: 1.10.0+cu113
- Python version: 3.7
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3993/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3992/comments | https://api.github.com/repos/huggingface/datasets/issues/3992/events | https://github.com/huggingface/datasets/issues/3992 | 1,177,946,153 | I_kwDODunzps5GNggp | 3,992 | Image column is not decoded in map when using with with_transform | {
"login": "phihung",
"id": 5902432,
"node_id": "MDQ6VXNlcjU5MDI0MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phihung",
"html_url": "https://github.com/phihung",
"followers_url": "https://api.github.com/users/phihung/followers",
"following_url": "https://api.github.com/users/phihung/following{/other_user}",
"gists_url": "https://api.github.com/users/phihung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phihung/subscriptions",
"organizations_url": "https://api.github.com/users/phihung/orgs",
"repos_url": "https://api.github.com/users/phihung/repos",
"events_url": "https://api.github.com/users/phihung/events{/privacy}",
"received_events_url": "https://api.github.com/users/phihung/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with_transform` assign a non-`None` value to it) and the `input_columns` param is not specified (see https://github.com/huggingface/datasets/issues/3756). We will remove these limitations soon.\r\n\r\n\r\n\r\n"
] | 1,648,032,673,000 | 1,648,050,439,000 | null | NONE | null | ## Describe the bug
The Image column is not _decoded_ in **map** when used with `with_transform`.
## Steps to reproduce the bug
```python
from datasets import Image, Dataset
def add_C(batch):
batch["C"] = batch["A"]
return batch
ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.with_transform(lambda x: x) # <= This line causes the problem
ds = ds.map(add_C, batched=True)
print(ds[0])
```
## Expected results
```
{'C': <PIL.PngImagePlugin.PngImageFile>, ...}
```
## Actual results
```
{'C': {'bytes': None, 'path': 'image.png'}, ...}
```
If we remove the `with_transform` line, we get the expected result.
## Environment info
- `datasets` version: 2.0.0
- Platform: Mac OSX
- Python version: 3.8.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3992/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3991/comments | https://api.github.com/repos/huggingface/datasets/issues/3991/events | https://github.com/huggingface/datasets/issues/3991 | 1,177,362,901 | I_kwDODunzps5GLSHV | 3,991 | Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [] | 1,647,987,365,000 | 1,648,040,236,000 | null | NONE | null | ## Adding a Dataset
- **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)*
- **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.*
- **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)*
- **Motivation:** *Key dataset in the healthcare community*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
FYI @osanseviero @abidlabs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3991/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3990/comments | https://api.github.com/repos/huggingface/datasets/issues/3990/events | https://github.com/huggingface/datasets/issues/3990 | 1,176,976,247 | I_kwDODunzps5GJzt3 | 3,990 | Improve AutomaticSpeechRecognition task template | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"There is an open PR to do that: #3364. I just haven't had time to finish it... ",
"> There is an open PR to do that: #3364. I just haven't had time to finish it...\r\n\r\n㪠thanks..."
] | 1,647,963,668,000 | 1,648,055,560,000 | 1,648,055,560,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
[AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses path to audiofile as an audio column instead of a Audio feature itself (I guess it's because Audio feature didn't exist at the time this template was created).
**Describe the solution you'd like**
Change audio columns from string path to Audio feature.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3990/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3989/comments | https://api.github.com/repos/huggingface/datasets/issues/3989/events | https://github.com/huggingface/datasets/pull/3989 | 1,176,955,078 | PR_kwDODunzps400l1S | 3,989 | Remove old wikipedia leftovers | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This makes me think we shouldn't advise the use of load_dataset in dataset scripts, since it doesn't guarantee that the cache will work as expected (the cache directory is not set correctly, and the required disk space for downloaded files is not recorded)\r\n\r\n@lhoestq, do you think it could be a good idea to add a comment in this script WARNING that using load_dataset in a script is not good practice and that people should avoid using that script as a template to create other scripts? ",
"good idea ! :)"
] | 1,647,962,746,000 | 1,648,740,926,000 | 1,648,740,616,000 | MEMBER | null | After updating Wikipedia dataset, remove old wikipedia leftovers from doc.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3989",
"html_url": "https://github.com/huggingface/datasets/pull/3989",
"diff_url": "https://github.com/huggingface/datasets/pull/3989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3989.patch",
"merged_at": 1648740616000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3988/comments | https://api.github.com/repos/huggingface/datasets/issues/3988/events | https://github.com/huggingface/datasets/pull/3988 | 1,176,858,540 | PR_kwDODunzps400RGb | 3,988 | More consistent references in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks good, thanks for working on this!"
] | 1,647,958,721,000 | 1,647,968,792,000 | 1,647,967,844,000 | CONTRIBUTOR | null | Aligns the internal references with style discussed in https://github.com/huggingface/datasets/pull/3980.
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3988/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3988/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3988",
"html_url": "https://github.com/huggingface/datasets/pull/3988",
"diff_url": "https://github.com/huggingface/datasets/pull/3988.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3988.patch",
"merged_at": 1647967843000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3987/comments | https://api.github.com/repos/huggingface/datasets/issues/3987/events | https://github.com/huggingface/datasets/pull/3987 | 1,176,481,659 | PR_kwDODunzps40zAxF | 3,987 | Fix Faiss custom_index device | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,940,284,000 | 1,648,124,339,000 | 1,648,124,052,000 | MEMBER | null | Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored.
This PR fixes this by raising a ValueError if both arguments are passed.
Alternatively, the `custom_index` could be transferred to the target `device`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3987",
"html_url": "https://github.com/huggingface/datasets/pull/3987",
"diff_url": "https://github.com/huggingface/datasets/pull/3987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3987.patch",
"merged_at": 1648124052000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3986/comments | https://api.github.com/repos/huggingface/datasets/issues/3986/events | https://github.com/huggingface/datasets/issues/3986 | 1,176,429,565 | I_kwDODunzps5GHuP9 | 3,986 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface) | {
"login": "kelvinAI",
"id": 10686779,
"node_id": "MDQ6VXNlcjEwNjg2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kelvinAI",
"html_url": "https://github.com/kelvinAI",
"followers_url": "https://api.github.com/users/kelvinAI/followers",
"following_url": "https://api.github.com/users/kelvinAI/following{/other_user}",
"gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions",
"organizations_url": "https://api.github.com/users/kelvinAI/orgs",
"repos_url": "https://api.github.com/users/kelvinAI/repos",
"events_url": "https://api.github.com/users/kelvinAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/kelvinAI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n"
] | 1,647,937,401,000 | 1,648,698,691,000 | null | NONE | null | ## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script)
** Update: Transformer modules faces the same issue as well during loading
## A clear and concise description of what the bug is.
Issue:
- Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory
- No error code, had to terminate the process
- There are some files created in the cache directory:
```
custom_cache_dir
| -- modules
| -- __init__.py
| -- datasets_modules
| -- __init__.py
| -- datasets
| -- __init__.py
| -- script.py (Dataset loading script)
| -- script.lock
```
There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk.
## Steps to reproduce the bug
What I've tried:
- Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703)
- Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html)
- Modifying cache_dir param during runtime
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache')
```
- Disabling dataset cache
```python
>>> from datasets import set_caching_enabled
>>> set_caching_enabled(False)
```
## Expected results
Datasets should load / cache as usual, with the only exception that the cache directory is different.
## Actual results
Any of the actions taken above to change the cache directory result in loading indefinitely, without terminating.
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3986/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3985/comments | https://api.github.com/repos/huggingface/datasets/issues/3985/events | https://github.com/huggingface/datasets/issues/3985 | 1,175,982,937 | I_kwDODunzps5GGBNZ | 3,985 | [image feature] Too many files open error when image feature is returned as a path | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,647,899,645,000 | 1,648,059,567,000 | 1,648,059,567,000 | MEMBER | null | ## Describe the bug
PR in context: #3967. If I load the dataset in this PR (TextVQA) and do a simple list comprehension on the dataset, I get a `Too many open files` error. This is happening due to the way we load the image feature when a str path is returned from `_generate_examples`. Specifically, at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open the file handle to the image but never close it. This, in my understanding, is causing the issue.
## Steps to reproduce the bug
Pull the PR locally and run the following code
```python
from datasets import load_dataset
dataset = load_dataset("./datasets/textvqa")["train"]
data = [item for item in dataset]
# Error happens
```
## Expected results
List comprehension should work smoothly
## Actual results
`Too many open files error`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.10.0
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3985/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3984/comments | https://api.github.com/repos/huggingface/datasets/issues/3984/events | https://github.com/huggingface/datasets/issues/3984 | 1,175,822,117 | I_kwDODunzps5GFZ8l | 3,984 | Local and automatic tests fail | {
"login": "MarkusSagen",
"id": 20767068,
"node_id": "MDQ6VXNlcjIwNzY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkusSagen",
"html_url": "https://github.com/MarkusSagen",
"followers_url": "https://api.github.com/users/MarkusSagen/followers",
"following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions",
"organizations_url": "https://api.github.com/users/MarkusSagen/orgs",
"repos_url": "https://api.github.com/users/MarkusSagen/repos",
"events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkusSagen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with\r\n```\r\npip install -e .[tests]\r\npip install -r additional-tests-requirements.txt --no-deps\r\n```\r\n\r\nIn particular, you probably need to `sacrebleu`. It looks like it wasn't able to instantiate `sacrebleu.TER` properly."
] | 1,647,889,657,000 | 1,648,473,525,000 | null | NONE | null | ## Describe the bug
Running the tests from CircleCI on a PR, or locally, fails even with no changes. The tests seem to fail on `test_metric_common.py`.
## Steps to reproduce the bug
```shell
git clone https://github.com/huggingface/datasets.git
cd datasets
```
```shell
python -m pip install -e .
pytest
```
## Expected results
All tests passing
## Actual results
```
tests/test_metric_common.py:91:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run
exec(compile(example.source, filename, "single",
<doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module>
???
../datasets/src/datasets/metric.py:430: in compute
output = self._compute(**inputs, **compute_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references)
>>> print(results)
{'score': 0.0, 'num_edits': 0, 'ref_length': 6.5}
""", stored examples: 0)
predictions = ['hello there general kenobi', 'foo bar foobar']
references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']]
normalized = False, no_punct = False, asian_support = False, case_sensitive = False
def _compute(
self,
predictions,
references,
normalized: bool = False,
no_punct: bool = False,
asian_support: bool = False,
case_sensitive: bool = False,
):
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
raise ValueError("Sacrebleu requires the same number of references for each prediction")
transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
> sb_ter = TER(normalized, no_punct, asian_support, case_sensitive)
E TypeError: __init__() takes 2 positional arguments but 5 were given
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError
------------------------------ Captured stdout call -------------------------------
Trying:
predictions = ["hello there general kenobi", "foo bar foobar"]
Expecting nothing
ok
Trying:
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
Expecting nothing
ok
Trying:
ter = datasets.load_metric("ter")
Expecting nothing
ok
Trying:
results = ter.compute(predictions=predictions, references=references)
Expecting nothing
================================ warnings summary =================================
../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
from imp import load_source
../datasets/src/datasets/commands/test.py:35
/home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py)
class TestCommand(BaseDatasetsCLICommand):
tests/commands/test_test.py:33
/home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: tests/commands/test_test.py)
class TestCommandArgs:
tests/test_arrow_dataset.py: 760 warnings
tests/test_formatting.py: 60 warnings
tests/test_search.py: 31 warnings
tests/features/test_array_xd.py: 117 warnings
/home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
(isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
tests/test_arrow_dataset.py: 154 warnings
tests/features/test_array_xd.py: 1 warning
/home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
tests/test_arrow_dataset.py: 60 warnings
/home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
elif np.issubdtype(values.dtype, np.str):
tests/test_arrow_dataset.py: 138 warnings
tests/test_formatting.py: 21 warnings
/home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
data_struct.dtype == np.object
tests/test_arrow_dataset.py: 240 warnings
tests/test_formatting.py: 20 warnings
/home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
tests/test_arrow_dataset.py: 12 warnings
tests/test_search.py: 2 warnings
tests/features/test_array_xd.py: 6 warnings
tests/features/test_image.py: 4 warnings
/home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
[0] + [len(arr) for arr in l_arr], dtype=np.object
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~
_CITATION = """\
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \=
_CITATION = """\
tests/test_filesystem.py: 105 warnings
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly
warn(
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
/home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
lax._check_user_dtype_supported(dtype, "array")
tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
if obj.zone == 'local':
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_audio
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
dtype=np.complex,
tests/features/test_array_xd.py::test_array_xd_with_none
/home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================= short test summary info =============================
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type...
```
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33
- Python version: 3.8.5
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3984/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3983/comments | https://api.github.com/repos/huggingface/datasets/issues/3983/events | https://github.com/huggingface/datasets/issues/3983 | 1,175,759,412 | I_kwDODunzps5GFKo0 | 3,983 | Infinitely attempting lock | {
"login": "jyrr",
"id": 11869652,
"node_id": "MDQ6VXNlcjExODY5NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jyrr",
"html_url": "https://github.com/jyrr",
"followers_url": "https://api.github.com/users/jyrr/followers",
"following_url": "https://api.github.com/users/jyrr/following{/other_user}",
"gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jyrr/subscriptions",
"organizations_url": "https://api.github.com/users/jyrr/orgs",
"repos_url": "https://api.github.com/users/jyrr/repos",
"events_url": "https://api.github.com/users/jyrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jyrr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of `py-filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `py-filelock` documentation that you can try:\r\n```python\r\nfrom filelock import Timeout, FileLock\r\n\r\nlock = FileLock(\"high_ground.txt.lock\")\r\nwith lock:\r\n with open(\"high_ground.txt\", \"a\") as f:\r\n f.write(\"You were the chosen one.\")\r\n```"
] | 1,647,886,317,000 | 1,648,473,177,000 | null | NONE | null | I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`.
Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS).
```
%sh
python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /dbfs/transformers/tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--log_level debug \
--cache_dir /dbfs/transformers/cache
```
All goes well until acquiring a lock --
```
03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
```
and so on.
I imagine this has to do with DBFS -- is there a way to tackle this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3983/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3982/comments | https://api.github.com/repos/huggingface/datasets/issues/3982/events | https://github.com/huggingface/datasets/pull/3982 | 1,175,478,099 | PR_kwDODunzps40vrR_ | 3,982 | Exclude Google Drive tests of the CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I was thinking exactly the same: running unit tests that request continuously a third-party API is not a good idea."
] | 1,647,873,256,000 | 1,648,744,682,000 | 1,647,874,295,000 | MEMBER | null | These tests make the CI spam the Google Drive API, so the CI now gets banned by Google Drive very often.
I think we can just skip these tests from the CI for now.
In the future we could have a CI job that runs only once a day or once a week for such cases.
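For reference, this kind of exclusion is often wired up with a pytest marker that the CI deselects; a rough sketch (the `gdrive` marker name is hypothetical, not necessarily what this PR does):

```python
import pytest

# Hypothetical marker for tests that hit the Google Drive API.
gdrive = pytest.mark.gdrive


@gdrive
def test_streaming_from_google_drive_url():
    ...


# The CI job would then deselect these tests:
#   pytest -m "not gdrive" tests/
```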
cc @albertvillanova @mariosasko @severo
Close #3415
![image](https://user-images.githubusercontent.com/42851186/159283608-fdeca1ac-b57f-4fa3-bf09-6fa5361c494f.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3982/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3982",
"html_url": "https://github.com/huggingface/datasets/pull/3982",
"diff_url": "https://github.com/huggingface/datasets/pull/3982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3982.patch",
"merged_at": 1647874295000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3981/comments | https://api.github.com/repos/huggingface/datasets/issues/3981/events | https://github.com/huggingface/datasets/pull/3981 | 1,175,423,517 | PR_kwDODunzps40vfra | 3,981 | Add TER metric card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,870,876,000 | 1,648,562,231,000 | 1,648,561,900,000 | CONTRIBUTOR | null | Add TER metric card
This card is still missing content for the following sections:
- **Limitations & Biases**
- **Values from Papers**
If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3981",
"html_url": "https://github.com/huggingface/datasets/pull/3981",
"diff_url": "https://github.com/huggingface/datasets/pull/3981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3981.patch",
"merged_at": 1648561900000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3980/comments | https://api.github.com/repos/huggingface/datasets/issues/3980/events | https://github.com/huggingface/datasets/pull/3980 | 1,175,412,905 | PR_kwDODunzps40vdcH | 3,980 | Add tip on how to speed up loading with ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding that tip! π \r\n\r\nFor the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,`cast_column`) instead of the full path which can be a bit lengthy for some functions like `datasets.IterableDataset.remove_columns` (and if we like this idea, we can align the rest of the docs on it). ",
"> For the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,cast_column) instead of the full path which can be a bit lengthy for some functions like datasets.IterableDataset.remove_columns (and if we like this idea, we can align the rest of the docs on it).\r\n\r\nThat's also OK, as long as we are consistent.\r\n\r\n@lhoestq @albertvillanova @polinaeterna Which one of these two styles do you prefer?",
"Agree on hiding `datasets` name. Not sure about hiding class name as it's anyway not visible for users if they use `Dataset.cast_column` or `IterableDataset.cast_column` when working with their datasets. But I agree that the most important thing is to be consistent :)",
"Good points! :)\r\n\r\nI think it'll be good to show the class name since some functions have different parameters. For example, if users click on `IterableDataset.map` and then `Dataset.map`, they'll see different parameters and have to figure out why (which isn't too difficult I guess lol). But showing the class name avoids any confusion upfront. "
] | 1,647,870,358,000 | 1,647,956,385,000 | 1,647,956,096,000 | CONTRIBUTOR | null | This PR does two things:
* adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960))
* replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (to align it with the Audio Processing doc)
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3980",
"html_url": "https://github.com/huggingface/datasets/pull/3980",
"diff_url": "https://github.com/huggingface/datasets/pull/3980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3980.patch",
"merged_at": 1647956096000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3979/comments | https://api.github.com/repos/huggingface/datasets/issues/3979/events | https://github.com/huggingface/datasets/pull/3979 | 1,175,258,969 | PR_kwDODunzps40u8NY | 3,979 | Fix google drive streaming for small files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually the CI fails because of this\r\n![image](https://user-images.githubusercontent.com/42851186/159281771-78e611b1-6b04-4a87-8324-b6ba2d8c6a6a.png)\r\n\r\nIt looks like we can't have a proper way to test google drive in the CI right now. Though it seems to work locally if you're not banned. I think I'll just disable those tests for now",
"this fix will not be included?",
"No we can't do anything except stop using google drive when possible"
] | 1,647,862,726,000 | 1,648,141,151,000 | 1,647,872,758,000 | MEMBER | null | Google Drive made another change recently, following #3787 and #3843.
In particular, Google Drive now returns 403 for GET requests with `confirm=t` when a file doesn't have a virus warning message. I fixed this by passing `confirm=t` if and only if there is one (i.e. when the status code for the HEAD request is 200). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3979/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3979",
"html_url": "https://github.com/huggingface/datasets/pull/3979",
"diff_url": "https://github.com/huggingface/datasets/pull/3979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3979.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3978/comments | https://api.github.com/repos/huggingface/datasets/issues/3978/events | https://github.com/huggingface/datasets/issues/3978 | 1,175,226,456 | I_kwDODunzps5GDIhY | 3,978 | I can't view HFcallback dataset for ASR Space | {
"login": "kingabzpro",
"id": 36753484,
"node_id": "MDQ6VXNlcjM2NzUzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingabzpro",
"html_url": "https://github.com/kingabzpro",
"followers_url": "https://api.github.com/users/kingabzpro/followers",
"following_url": "https://api.github.com/users/kingabzpro/following{/other_user}",
"gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions",
"organizations_url": "https://api.github.com/users/kingabzpro/orgs",
"repos_url": "https://api.github.com/users/kingabzpro/repos",
"events_url": "https://api.github.com/users/kingabzpro/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingabzpro/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture dβeΜcran 2022-03-24 aΜ 17 36 20\" src=\"https://user-images.githubusercontent.com/1676121/159966006-57dcf8f7-b65f-4200-ac8c-66859318a8bb.png\">\r\n",
"The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.\r\n\r\nWe're working on supporting audio datasets with a specific structure in #3963 ",
"Got it."
] | 1,647,860,869,000 | 1,649,079,278,000 | null | NONE | null | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3978/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3977/comments | https://api.github.com/repos/huggingface/datasets/issues/3977/events | https://github.com/huggingface/datasets/issues/3977 | 1,175,049,927 | I_kwDODunzps5GCdbH | 3,977 | Adapt `docs/README.md` for datasets | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. "
] | 1,647,851,209,000 | 1,647,852,855,000 | null | CONTRIBUTOR | null | ## Describe the bug
Currently `docs/README.md` is a direct copy from `transformers`; we should probably adapt this file for `datasets`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3977/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3977/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3976/comments | https://api.github.com/repos/huggingface/datasets/issues/3976/events | https://github.com/huggingface/datasets/pull/3976 | 1,175,043,780 | PR_kwDODunzps40uOY6 | 3,976 | Fix main classes reference in docs | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3976). All of your documentation changes will be reflected on that endpoint.",
"Not sure why some section titles end with `[[datasets.xxx]]`, like this: https://huggingface.co/docs/datasets/pr_3976/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]]"
] | 1,647,850,786,000 | 1,647,853,892,000 | null | CONTRIBUTOR | null | Currently the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`; this PR fixes the issue by wrapping the code examples on this page in markdown code blocks.
There are other examples in the datasets library with this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3976",
"html_url": "https://github.com/huggingface/datasets/pull/3976",
"diff_url": "https://github.com/huggingface/datasets/pull/3976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3976.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3975/comments | https://api.github.com/repos/huggingface/datasets/issues/3975/events | https://github.com/huggingface/datasets/pull/3975 | 1,174,678,942 | PR_kwDODunzps40tKdS | 3,975 | Update many missing tags to dataset README's | {
"login": "MarkusSagen",
"id": 20767068,
"node_id": "MDQ6VXNlcjIwNzY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkusSagen",
"html_url": "https://github.com/MarkusSagen",
"followers_url": "https://api.github.com/users/MarkusSagen/followers",
"following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions",
"organizations_url": "https://api.github.com/users/MarkusSagen/orgs",
"repos_url": "https://api.github.com/users/MarkusSagen/repos",
"events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkusSagen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,808,947,000 | 1,647,887,992,000 | 1,647,887,992,000 | NONE | null | I've started to go through the datasets available and noticed that there are 127 datasets that do not have all the tags, so I started filling them in, starting with some of the most common and QA datasets.
Not 100% certain that the task_id is correct for SuperGLUE
If anyone is browsing the issues and would like to help make Hugging Face datasets even more feature complete and awesome, feel free to use this tool I wrote to find the missing tags in the [datacards](https://github.com/Hugging-Face-Supporter/datacards) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3975",
"html_url": "https://github.com/huggingface/datasets/pull/3975",
"diff_url": "https://github.com/huggingface/datasets/pull/3975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3975.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3974/comments | https://api.github.com/repos/huggingface/datasets/issues/3974/events | https://github.com/huggingface/datasets/pull/3974 | 1,174,485,044 | PR_kwDODunzps40ssrA | 3,974 | Add XFUN dataset | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3974). All of your documentation changes will be reflected on that endpoint.",
"Not sure how to generate dummy data.\r\n\r\nThe downloaded file structure is \r\n\r\n- document file paths\r\n - (a json file containing all documents info, document images folder)\r\n - (a json file containing all documents info, document images folder)\r\n - ...",
"Hey @mariosasko, thanks for the review. I'm not sure how to suggest these changes to the owner @ranpox, and I did spend some time to write the model card and hope to get it on the official repo. Is that possible?"
] | 1,647,768,294,000 | 1,648,967,421,000 | null | CONTRIBUTOR | null | This PR adds XFUN dataset.
Home page and repository: https://github.com/doc-analysis/XFUND
Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3974/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3974",
"html_url": "https://github.com/huggingface/datasets/pull/3974",
"diff_url": "https://github.com/huggingface/datasets/pull/3974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3974.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3973/comments | https://api.github.com/repos/huggingface/datasets/issues/3973/events | https://github.com/huggingface/datasets/issues/3973 | 1,174,455,431 | I_kwDODunzps5GAMSH | 3,973 | ConnectionError and SSLError | {
"login": "yanyu2015",
"id": 11142054,
"node_id": "MDQ6VXNlcjExMTQyMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11142054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanyu2015",
"html_url": "https://github.com/yanyu2015",
"followers_url": "https://api.github.com/users/yanyu2015/followers",
"following_url": "https://api.github.com/users/yanyu2015/following{/other_user}",
"gists_url": "https://api.github.com/users/yanyu2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanyu2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanyu2015/subscriptions",
"organizations_url": "https://api.github.com/users/yanyu2015/orgs",
"repos_url": "https://api.github.com/users/yanyu2015/repos",
"events_url": "https://api.github.com/users/yanyu2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanyu2015/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```",
"it works,but another error occurs.\r\n```\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError(\"HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))\")))\r\n```\r\nI can access `https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt` and `https://aws.amazon.com/cn/s3/` directly, so why it reports a SSLError, should I need tomodify the host fileοΌ",
"Could it be an issue with your python environment or your version of OpenSSL ?",
"you are so wise!\r\nit report [ConnectionError] in python 3.9.7\r\nand works well in python 3.8.12\r\n\r\nI need you help again: how can I specify the path for download files?\r\nthe data is too large and my C hardware is not enough",
"Cool ! And you can specify the path for download files with to the `cache_dir` parameter:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory')",
"It takes me some days to download data completely, Despise sometimes it occurs again, change py version is feasible way to avoid this ConnectionEror.\r\nparameter `cache_dir` works well, thanks for your kindness again!"
] | 1,647,758,737,000 | 1,648,628,012,000 | 1,648,628,012,000 | NONE | null | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module>
----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1658
1659 # Create a dataset builder
-> 1660 builder_instance = load_dataset_builder(
1661 path=path,
1662 name=name,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1484 download_config = download_config.copy() if download_config else DownloadConfig()
1485 download_config.use_auth_token = use_auth_token
-> 1486 dataset_module = dataset_module_factory(
1487 path,
1488 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1237 ) from None
-> 1238 raise e1 from None
1239 else:
1240 raise FileNotFoundError(
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now
1174 # TODO(QL): use a Hub dataset module factory instead of GitHub
-> 1175 return GithubDatasetModuleFactory(
1176 path,
1177 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self)
531 revision = self.revision
532 try:
--> 533 local_path = self.download_loading_script(revision)
534 except FileNotFoundError:
535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision)
511 if download_config.download_desc is None:
512 download_config.download_desc = "Downloading builder script"
--> 513 return cached_path(file_path, download_config=download_config)
514
515 def download_dataset_infos_file(self, revision: Optional[str]) -> str:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
232 if is_remote_url(url_or_filename):
233 # URL, so get it from the cache (downloading if necessary)
--> 234 output_path = get_from_cache(
235 url_or_filename,
236 cache_dir=cache_dir,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
581 if head_error is not None:
--> 582 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
583 elif response is not None:
584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))")))
```
It may be caused by an SSLError (possibly due to the network environment in China?) because it works well on Google Colab.
So how can I download this dataset manually?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3973/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3972/comments | https://api.github.com/repos/huggingface/datasets/issues/3972/events | https://github.com/huggingface/datasets/pull/3972 | 1,174,402,033 | PR_kwDODunzps40sdVu | 3,972 | Adding Roman Urdu Hate Speech dataset | {
"login": "bp-high",
"id": 53102161,
"node_id": "MDQ6VXNlcjUzMTAyMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/53102161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bp-high",
"html_url": "https://github.com/bp-high",
"followers_url": "https://api.github.com/users/bp-high/followers",
"following_url": "https://api.github.com/users/bp-high/following{/other_user}",
"gists_url": "https://api.github.com/users/bp-high/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bp-high/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bp-high/subscriptions",
"organizations_url": "https://api.github.com/users/bp-high/orgs",
"repos_url": "https://api.github.com/users/bp-high/repos",
"events_url": "https://api.github.com/users/bp-high/events{/privacy}",
"received_events_url": "https://api.github.com/users/bp-high/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq can you review when you have some time? Also were the previous CI fails due to the Google Drive tests which were excluded by #3982 ?",
"> were the previous CI fails due to the Google Drive tests which were excluded by https://github.com/huggingface/datasets/pull/3982 ?\r\n\r\nYes exactly, merging `master` into your branch fixed the CI ;)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,735,566,000 | 1,648,223,779,000 | 1,648,223,480,000 | CONTRIBUTOR | null | This Pull request will add the Roman Urdu Hate speech Dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3972/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3972",
"html_url": "https://github.com/huggingface/datasets/pull/3972",
"diff_url": "https://github.com/huggingface/datasets/pull/3972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3972.patch",
"merged_at": 1648223480000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3971/comments | https://api.github.com/repos/huggingface/datasets/issues/3971/events | https://github.com/huggingface/datasets/pull/3971 | 1,174,329,442 | PR_kwDODunzps40sS4W | 3,971 | Applied index-filters on scores in search.py. | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,647,715,422,000 | 1,648,010,865,000 | null | NONE | null | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3971",
"html_url": "https://github.com/huggingface/datasets/pull/3971",
"diff_url": "https://github.com/huggingface/datasets/pull/3971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3971.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3970/comments | https://api.github.com/repos/huggingface/datasets/issues/3970/events | https://github.com/huggingface/datasets/pull/3970 | 1,174,327,367 | PR_kwDODunzps40sSfx | 3,970 | Apply index-filters on scores in get_nearest_examples and get_nearest⦠| {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,714,751,000 | 1,647,715,092,000 | 1,647,715,092,000 | NONE | null | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3970",
"html_url": "https://github.com/huggingface/datasets/pull/3970",
"diff_url": "https://github.com/huggingface/datasets/pull/3970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3970.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3969/comments | https://api.github.com/repos/huggingface/datasets/issues/3969/events | https://github.com/huggingface/datasets/issues/3969 | 1,174,273,824 | I_kwDODunzps5F_f8g | 3,969 | Cannot preview cnn_dailymail dataset | {
"login": "hasan-besh",
"id": 75482871,
"node_id": "MDQ6VXNlcjc1NDgyODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/75482871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasan-besh",
"html_url": "https://github.com/hasan-besh",
"followers_url": "https://api.github.com/users/hasan-besh/followers",
"following_url": "https://api.github.com/users/hasan-besh/following{/other_user}",
"gists_url": "https://api.github.com/users/hasan-besh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasan-besh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasan-besh/subscriptions",
"organizations_url": "https://api.github.com/users/hasan-besh/orgs",
"repos_url": "https://api.github.com/users/hasan-besh/repos",
"events_url": "https://api.github.com/users/hasan-besh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasan-besh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ",
"Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` doesn't help. But yes indeed the cache must be refreshed.\r\n\r\nThe CNN Dailymail dataste is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found another most reliable host for the data, we will keep running into issues from time to time.\r\n\r\nAt Hugging Face we're not allowed to host the CNN Dailymail data by ourselves AFAIK",
"Yes @lhoestq, I didn't explain myself well: my previous message was addressed to @severo. ",
"I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive",
"Sounds good. I was looking for another host of this dataset but couldn't find any (yet)"
] | 1,647,698,937,000 | 1,648,649,615,000 | null | NONE | null | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3969/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3968/comments | https://api.github.com/repos/huggingface/datasets/issues/3968/events | https://github.com/huggingface/datasets/issues/3968 | 1,174,193,962 | I_kwDODunzps5F_Mcq | 3,968 | Cannot preview 'indonesian-nlp/eli5_id' dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cahya-wirawan, thanks for reporting.\r\n\r\nYour dataset is working OK in streaming mode:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"indonesian-nlp/eli5_id\", split=\"train\", streaming=True)\r\n ...: item = next(iter(ds))\r\n ...: item\r\nUsing custom data configuration indonesian-nlp--eli5_id-9fe728a7e760fb7b\r\n\r\nOut[1]: \r\n{'q_id': '1oy5tc',\r\n 'title': 'dalam sepak bola apa gunanya menyia-nyiakan dua permainan pertama dengan terburu-buru - di tengah - bukan permainan terburu-buru biasa saya mendapatkannya',\r\n 'selftext': '',\r\n 'document': '',\r\n 'subreddit': 'explainlikeimfive',\r\n 'answers': {'a_id': ['ccwtgnz', 'ccwtmho', 'ccwt946', 'ccwvj0u'],\r\n 'text': ['Jaga pertahanan tetap jujur, rasakan operan terburu-buru, buka permainan yang lewat. Pelanggaran yang terlalu satu dimensi akan gagal. Dan mereka yang bergegas ke tengah kadang-kadang dapat dibuka lebar-lebar untuk ukuran yard yang besar.',\r\n 'Jika Anda melempar bola sepanjang waktu, maka pertahanan akan beradaptasi untuk selalu menutupi umpan. Dengan melakukan permainan lari sederhana sesekali, Anda memaksa pertahanan untuk tetap dekat dan menjaga dari lari. Terkadang, pelanggaran dapat membuat pertahanan lengah dengan berpura-pura berlari dan membebaskan penerima mereka. Selain itu, Anda tidak perlu mendapatkan yard besar di setiap permainan. Terkadang, paling baik mendapatkan beberapa yard sekaligus. Selama Anda mendapatkan yang pertama, Anda dalam kondisi yang baik.',\r\n 'Dalam kebanyakan kasus, O-Line seharusnya membuat lubang untuk dilalui kembali. Jika Anda menjalankan terlalu banyak permainan ke luar / melempar, pertahanan akan mengejar. Juga, 2 permainan 5 yard memberi Anda satu set down baru.',\r\n 'Saya Anda tidak suka jenis drama itu, tonton CFL. Kami hanya mendapatkan 3 down sehingga Anda tidak bisa menyia-nyiakannya. Lebih banyak lagi yang lewat.'],\r\n 'score': [3, 2, 2, 2]},\r\n 'title_urls': {'url': []},\r\n 'selftext_urls': {'url': []},\r\n 'answers_urls': {'url': []}}\r\n```\r\nTherefore, it should be properly rendered in the previewer. Let me ping @severo to have a look at it.",
"Thanks @albertvillanova for checking it. Btw, I have another dataset indonesian-nlp/lfqa_id which has the same issue. However, this dataset is still private, is it the reason why the preview doesn't work?",
"Yes, preview is not supported on private datasets yet. We are working on that though...",
"Thanks for the confirmation ",
"Fixed. Thanks for your feedback."
] | 1,647,672,849,000 | 1,648,139,664,000 | 1,648,139,664,000 | CONTRIBUTOR | null | ## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3968/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3967/comments | https://api.github.com/repos/huggingface/datasets/issues/3967/events | https://github.com/huggingface/datasets/pull/3967 | 1,174,107,128 | PR_kwDODunzps40rpny | 3,967 | [feat] Add TextVQA dataset | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3967). All of your documentation changes will be reflected on that endpoint."
] | 1,647,646,179,000 | 1,648,649,233,000 | null | MEMBER | null | This would be the first classification-based vision-and-language dataset in the datasets library.
Currently, the dataset downloads everything you need beforehand. See the [paper](https://arxiv.org/abs/1904.08920) for more details.
Test Plan:
- Ran the full and the dummy data test locally | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3967",
"html_url": "https://github.com/huggingface/datasets/pull/3967",
"diff_url": "https://github.com/huggingface/datasets/pull/3967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3967.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3966/comments | https://api.github.com/repos/huggingface/datasets/issues/3966/events | https://github.com/huggingface/datasets/pull/3966 | 1,173,883,084 | PR_kwDODunzps40rBNE | 3,966 | Create metric card for BERTScore | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,627,716,000 | 1,647,956,128,000 | 1,647,955,856,000 | CONTRIBUTOR | null | Proposing a metric card for BERTScore | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3966",
"html_url": "https://github.com/huggingface/datasets/pull/3966",
"diff_url": "https://github.com/huggingface/datasets/pull/3966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3966.patch",
"merged_at": 1647955856000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3965/comments | https://api.github.com/repos/huggingface/datasets/issues/3965/events | https://github.com/huggingface/datasets/issues/3965 | 1,173,708,739 | I_kwDODunzps5F9V_D | 3,965 | TypeError: Couldn't cast array of type for JSONLines dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,647,616,673,000 | 1,647,624,398,000 | null | MEMBER | null | ## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd
# returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl'
data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")
# throws TypeError: Couldn't cast array of type
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas - note this take a while as the file is >2GB
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```
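A possible workaround sketch (my own, untested; it assumes, per the comment above, that the unusable `null` dtype only comes from the `milestone` and `performed_via_github_app` columns) would be to build the dataset from the pandas DataFrame after dropping those columns:
```python
# Hedged workaround sketch: since pandas can already parse the file, construct the
# Dataset from the DataFrame with the problematic columns dropped. Other columns
# may still hit casting issues.
from datasets import Dataset

df = pd.read_json(data_files, orient="records", lines=True)
dset = Dataset.from_pandas(df.drop(columns=["milestone", "performed_via_github_app"]))
```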
## Expected results
I can load any line-separated JSON file, similar to pandas.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split
writer.write_table(table)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast
return cast_table_to_features(table, Features.from_arrow_schema(schema))
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]>
to
null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3965/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3965/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3964/comments | https://api.github.com/repos/huggingface/datasets/issues/3964/events | https://github.com/huggingface/datasets/issues/3964 | 1,173,564,993 | I_kwDODunzps5F8y5B | 3,964 | Add default Audio Loader | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,647,608,335,000 | 1,647,610,379,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
Writing a custom dataset loading script might be a bit challenging for users.
**Describe the solution you'd like**
Add default Audio loader (analogous to ImageFolder) for small datasets with standard directory structure.
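A rough sketch of what such a loader's usage could look like (the `audiofolder` name and its arguments are assumptions here, mirroring the existing `imagefolder` API rather than describing an existing loader):
```python
from datasets import load_dataset

# Hypothetical call for a built-in audio loader, by analogy with `imagefolder`;
# neither the "audiofolder" name nor the directory layout is an existing API here.
ds = load_dataset("audiofolder", data_dir="/path/to/folder")
print(ds["train"][0])
```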
**Describe alternatives you've considered**
Create a custom loading script? That's what users are doing now.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3964/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3963/comments | https://api.github.com/repos/huggingface/datasets/issues/3963/events | https://github.com/huggingface/datasets/pull/3963 | 1,173,492,562 | PR_kwDODunzps40puyZ | 3,963 | Add Audio Folder | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3963). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge `master` into this branch to fix the CI errors related to Google Drive :)\r\n\r\nI think we can just remove the test that is based on dummy data, or make it have the `sampling_rate` parameter hardcoded in the test",
"IMO it's important to keep this loader aligned with `imagefolder`. I'm aware that the current `imagefolder` API is limiting because only labels can be inferred from the directory structure, which means it can only be used for classification and self-supervised pretraining. However, to make the loader more generic, we plan to support [metadata files](https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) (will work on that this week), and in the audio case, these files can store transcripts.\r\n\r\nStreaming TAR archives (`iter_archive`) is not supported by any of the loaders currently, so we can add that in a separate PR for all of them (to keep this PR simple).\r\n\r\nWDYT?",
"> Streaming TAR archives (iter_archive) is not supported by any of the loaders currently, so we can add that in a separate PR for all of them (to keep this PR simple).\r\n\r\nYes definitely, we can see that later\r\n\r\n> to make the loader more generic, we plan to support [metadata files](https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) (will work on that this week), and in the audio case, these files can store transcripts.\r\n\r\nCould you share an example of what the structure would look like in this case ?\r\n\r\nNote that for audio we ultimately should be able to load several splits at once (common voice, librispeech, etc. all have splits), unlike the current imagefolder implementation that puts everything in `train` (EDIT: I mean, when we pass `data_dir`). If we want consistency then we would need the same for imagefolder.",
"> I think we can just remove the test that is based on dummy data, or make it have the sampling_rate parameter hardcoded in the test\r\n\r\nNot sure what to do with `test_builder_class` and `test_load_dataset_offline`, I don't really want to drop these tests completely but do you think it's a good idea to hardcode builder loading like this: π€\r\n```\r\nif dataset_name == \"audiofolder\":\r\n builder = builder_cls(name=name, cache_dir=tmp_cache_dir, sampling_rate=16_000)\r\nelse:\r\n builder = builder_cls(name=name, cache_dir=tmp_cache_dir)\r\n```\r\n@mariosasko totally agree on that APIs should be aligned, do you think we should implement metadata support first? Or maybe we can merge this PR with explicit single transcript file and add full metadata support further.\r\n\r\nSplits support is definitely a required feature too, I think we can implement it in the future PR too. \r\n",
"btw i've found a workaround for splits generation :D\r\n\r\n```\r\nfrom datasets.data_files import DataFilesDict\r\n\r\nds = load_dataset(\r\n \"audiofolder\",\r\n data_files=DataFilesDict(\r\n {\r\n \"train\":\"../audiofolder/AudioTestSplits/train.zip\",\r\n \"test\": \"../audiofolder/AudioTestSplits/test.zip\"\r\n }\r\n ),\r\n sampling_rate=16_000\r\n)\r\n```",
"> Not sure what to do with test_builder_class and test_load_dataset_offline, I don't really want to drop these tests completely but do you think it's a good idea to hardcode builder loading like this: π€\r\n\r\nYes it's fine. If you you're not a fan of having such parameters directly at the core of the code you can declare a global variable `PACKAGED_MODULES_TEST_KWARGS = {\"audiofolder\": {\"sampling_rate\": 16_000}}` and do\r\n```python\r\nbuilder_kwargs = PACKAGED_MODULES_TEST_KWARGS.get(name, {})\r\nbuilder = builder_cls(name=name, cache_dir=tmp_cache_dir, **builder_kwargs)\r\n```\r\n\r\n> btw i've found a workaround for splits generation :D\r\n\r\nYes that works :) Note that you don't have to use `DataFilesDict` and you can pass a python dict directly (`DataFilesDict` is for internal usage only)",
"@lhoestq @mariosasko please take a look at the code and feel free to add your comments and discuss the potential issues\r\n \r\nafter we are satisfied with the code, I'll write the documentation ",
"@lhoestq it appeared that this PR already exists... https://github.com/huggingface/datasets/pull/3364",
"> The current problem with this loader is that it supports the ASR task by default, which could be surprising for the users thinking that this is the Image Folder counterpart for audio. To avoid this, we should support the audio classification task by default instead (we can add a template for it in this PR), where the label column is inferred from the directory structure.\r\n\r\nRight indeed, good catch. It's better to keep polishing the API rather than pushing fast something that can be confusing for users. Let's go for maximum alignment between the two then @polinaeterna ?",
"@mariosasko sorry, I didn't understand from your previous message that by aligning with the ImageFolder you mean inferring labels from directories names. Sure, that's not a problem, I can add the corresponding code. Do you also mean that in this version we should get rid of transcription file and feature and add it in the future when the metadata support https://github.com/huggingface/datasets/pull/4069 will be merged? \r\nMy understanding was that support for ASR task is more crucial than audio classification as it's more \"common\", but I would ask @anton-l and @patrickvonplaten about this. Anyway, it's not a problem to implement the classification task first, and the ASR one later. ",
"> Do you also mean that in this version we should get rid of transcription file and feature and add it in the future when the metadata support https://github.com/huggingface/datasets/pull/4069 will be merged?\r\n\r\nWe can wait for the linked PR to be merged first and then add the changes to this PR to have support for ASR from the get-go.",
"Don't follow 100% here, but as @polinaeterna said I think ASR is much more common than audio classification. Also, do you guys think a lot of users will use both the audio and image folder functionality ? Is it very important to have audio and image aligned here? Note that in Transformers while all models follow a common API, audio and vision models can be very different with respect to pre- and post-processing",
"> I think ASR is much more common than audio classification\r\n\r\nI agree, the main focus is ASR\r\n\r\n> do you guys think a lot of users will use both the audio and image folder functionality ?\r\n\r\nYup I think so, people don't just use public academic datasets right ? `imagefolder` is almost used 1k times a week, and it's just the beginning.\r\n\r\n> Is it very important to have audio and image aligned here?\r\n\r\nIf we can get some consistency for free, let's take it ^^ This way it will be easy for users to go from one modality to another, and documentation will be simpler.\r\n\r\n> Note that in Transformers while all models follow a common API, audio and vision models can be very different with respect to pre- and post-processing\r\n\r\nThat make total sense. Here this is mainly about raw data loading (before preprocessing) so we just need to make something generic, no matter what task the data is used for. Even though actually we know that ASR will be the main usage for now :p\r\n\r\nLet me know if it's clearer now or if you have other questions !"
] | 1,647,603,609,000 | 1,649,340,880,000 | null | CONTRIBUTOR | null | Would resolve #3964
AudioFolder loads a .txt file with transcriptions and builds a dataset, as a single train split, from all audio files in the provided directory that have a transcription (independently of the directory structure).
Can be loaded via:
```python
# for local dirs
dataset = load_dataset("audiofolder", data_dir="/path/to/folder", transcripts_filename="transcripts.txt")
```
```python
# for local and remote zip archives
dataset = load_dataset("audiofolder", data_files="path/to/archive/archive.zip", transcripts_filename="transcripts.txt")
```
The default transcriptions filename is `transcripts.txt`. It should have the following structure:
```
audio_id_1 transcription text 1
audio_id_2 transcription text 2
```
The separator is `\t`!
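As an illustration only (this is not part of the loader code), a line of such a file could be split like this:
```python
# Illustrative only: splitting one transcripts.txt line on the tab separator.
line = "audio_id_1\ttranscription text 1"
audio_id, transcription = line.split("\t", 1)
```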
---
Sorry for the first old commits from another branch, I don't know how that happened... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3963/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3963/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3963",
"html_url": "https://github.com/huggingface/datasets/pull/3963",
"diff_url": "https://github.com/huggingface/datasets/pull/3963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3963.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3962/comments | https://api.github.com/repos/huggingface/datasets/issues/3962/events | https://github.com/huggingface/datasets/pull/3962 | 1,173,482,291 | PR_kwDODunzps40psq2 | 3,962 | Fix flatten of Sequence feature type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,602,862,000 | 1,647,873,647,000 | 1,647,873,372,000 | MEMBER | null | The `Sequence` features type is not correctly flattened if it contains a dictionary. This PR fixes this, and I added a test case for this.
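For context, a minimal sketch of the behavior this targets (the example is mine, not taken from the PR): flattening a dataset whose feature is a `Sequence` of a dict should expose the inner fields as top-level columns.
```python
from datasets import Dataset, Features, Sequence, Value

# SQuAD-style feature: a Sequence wrapping a dict of fields.
features = Features({"answers": Sequence({"text": Value("string"), "answer_start": Value("int32")})})
ds = Dataset.from_dict({"answers": [{"text": ["foo"], "answer_start": [0]}]}, features=features)

# After the fix, flatten() should yield the columns "answers.text" and "answers.answer_start".
print(ds.flatten().column_names)
```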
Close https://github.com/huggingface/datasets/issues/3795 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3962",
"html_url": "https://github.com/huggingface/datasets/pull/3962",
"diff_url": "https://github.com/huggingface/datasets/pull/3962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3962.patch",
"merged_at": 1647873372000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3961/comments | https://api.github.com/repos/huggingface/datasets/issues/3961/events | https://github.com/huggingface/datasets/issues/3961 | 1,173,223,086 | I_kwDODunzps5F7fau | 3,961 | Scores from Index at extra positions are not filtered out | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi! Yes, that makes sense! Would you like to submit a PR to fix this?",
"Created PR https://github.com/huggingface/datasets/pull/3971"
] | 1,647,584,003,000 | 1,647,715,669,000 | null | NONE | null | If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too.
Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
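A minimal sketch of the suggested filtering (my own illustration, not the library code):
```python
import numpy as np

# Toy values standing in for the arrays returned by an index search where the
# index held fewer records than the requested k: FAISS pads the missing slots with -1.
scores = np.array([0.9, 0.5, -1.0])
indices = np.array([3, 1, -1])

keep = indices >= 0
filtered_scores = scores[keep]    # scores for real records only
filtered_indices = indices[keep]  # -1 padding removed, mirroring the existing sample filtering
```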
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3961/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3960/comments | https://api.github.com/repos/huggingface/datasets/issues/3960/events | https://github.com/huggingface/datasets/issues/3960 | 1,173,148,884 | I_kwDODunzps5F7NTU | 3,960 | Load local dataset error | {
"login": "TXacs",
"id": 60869411,
"node_id": "MDQ6VXNlcjYwODY5NDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TXacs",
"html_url": "https://github.com/TXacs",
"followers_url": "https://api.github.com/users/TXacs/followers",
"following_url": "https://api.github.com/users/TXacs/following{/other_user}",
"gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TXacs/subscriptions",
"organizations_url": "https://api.github.com/users/TXacs/orgs",
"repos_url": "https://api.github.com/users/TXacs/repos",
"events_url": "https://api.github.com/users/TXacs/events{/privacy}",
"received_events_url": "https://api.github.com/users/TXacs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n```\r\n\r\n\r\nLet us know if that resolves the issue.",
"> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n> ```\r\n> \r\n> Let us know if that resolves the issue.\r\n\r\nSorry, replied late.\r\nThanks a lot! It's worked for me. But it seems much slower than before, and now gets stuck.....\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\nResolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 437283.97it/s]\r\nResolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 89094.29it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nDownloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...\r\nDownloading data files #0: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82289.56obj/s]\r\nDownloading data files #1: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 73559.11obj/s]\r\nDownloading data files #2: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81600.46obj/s]\r\nDownloading data files #3: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79691.56obj/s]\r\nDownloading data files #4: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82341.37obj/s]\r\nDownloading data files #5: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 75784.46obj/s]\r\nDownloading data files #6: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81466.18obj/s]\r\nDownloading data files #7: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82320.27obj/s]\r\nDownloading data files #8: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 78094.00obj/s]\r\nDownloading data files #9: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84057.59obj/s]\r\nDownloading data files #10: 
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 83082.31obj/s]\r\nDownloading data files #11: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79944.21obj/s]\r\nDownloading data files #12: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84569.77obj/s]\r\nDownloading data files #13: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84949.63obj/s]\r\nDownloading data files #14: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 80666.53obj/s]\r\nDownloading data files #15: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80072/80072 [00:01<00:00, 76723.20obj/s]\r\n^[[Bloading data files #8: 94%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 75061/80073 [00:00<00:00, 82609.89obj/s]\r\nDownloading data files #9: 85%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 68120/80073 [00:00<00:00, 83868.54obj/s]\r\nDownloading data files #9: 96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 76784/80073 [00:00<00:00, 84722.34obj/s]\r\nDownloading data files #10: 75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 59995/80073 [00:00<00:00, 84148.19obj/s]\r\nDownloading data files #10: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77412/80073 [00:00<00:00, 85724.53obj/s]\r\nDownloading data files #11: 71%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57032/80073 [00:00<00:00, 79930.58obj/s]\r\nDownloading data files #11: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 73277/80073 [00:00<00:00, 78091.27obj/s]\r\nDownloading data files #12: 86%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 69125/80073 [00:00<00:00, 84723.02obj/s]\r\nDownloading data files #12: 97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77803/80073 [00:00<00:00, 85351.59obj/s]\r\nDownloading data files #13: 75%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 60356/80073 [00:00<00:00, 84833.35obj/s]\r\nDownloading data files #13: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77368/80073 [00:00<00:00, 84475.10obj/s]\r\nDownloading data files #14: 72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57751/80073 [00:00<00:00, 80727.33obj/s]\r\nDownloading data files #14: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 74022/80073 [00:00<00:00, 78703.16obj/s]\r\nDownloading data files #15: 
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 62724/80072 [00:00<00:00, 78387.33obj/s]\r\nDownloading data files #15: 99%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 78933/80072 [00:01<00:00, 79353.63obj/s]\r\n```",
"Wait a long time, it completed. I don't know why it's so slow...",
"You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nThanksοΌIt's worked well.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nI find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.\r\nFirst loading, it costs about 20 min in my servers.\r\n```\r\nreal\t19m23.023s\r\nuser\t21m18.360s\r\nsys\t7m59.080s\r\n```\r\n\r\nSecond reusing, it costs about 15 min in my servers.\r\n```\r\nreal\t15m20.735s\r\nuser\t12m22.979s\r\nsys\t5m46.960s\r\n```\r\n\r\nI think it's too much slow, is there other method to make it faster?",
"And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n```python\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n labels = torch.tensor([example[\"labels\"] for example in examples])\r\n return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n```\r\nHow to know the keys of example?",
"Loading the image files slowly, is it because the multiple processes load files at the same time?",
"Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs. \r\n\r\n> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> \r\n> ```python\r\n> def collate_fn(examples):\r\n> pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> ```\r\n> \r\n> How to know the keys of example?\r\n\r\nWhat do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\n",
"> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.\r\n> \r\n> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> > ```python\r\n> > def collate_fn(examples):\r\n> > pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> > labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> > return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > How to know the keys of example?\r\n> \r\n> What do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\nThanks for your reply!\r\n\r\n1. I did not record the second output, so I run it again. \r\n```\r\n(merak) txacs@master:/dat/txacs/test$ time python test.py \r\nResolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 469497.89it/s]\r\nResolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 70123.73it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nReusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)\r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:10<00:00, 5.37s/it]\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 1281167\r\n })\r\n validation: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 50000\r\n })\r\n})\r\n\r\nreal\t10m10.413s\r\nuser\t9m33.195s\r\nsys\t2m47.528s\r\n```\r\nAlthough it cost less time than the last, but still slowly.\r\n\r\n2. Sorry, forgive my poor statement. I solved it, updating to new script 'run_image_classification.py'.",
"Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"Λ`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.",
"> Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"Λ`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.\r\n\r\nSounds good! The main position, which costs long time, is from program starting to `\"Resolving data files\"`. I hope you can solve it early, thanks!"
] | 1,647,574,369,000 | 1,648,691,974,000 | null | NONE | null | When I used datasets==1.11.0, everything was all right. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3960/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3959/comments | https://api.github.com/repos/huggingface/datasets/issues/3959/events | https://github.com/huggingface/datasets/issues/3959 | 1,172,872,695 | I_kwDODunzps5F6J33 | 3,959 | Medium-sized dataset conversion from pandas causes a crash | {
"login": "Antymon",
"id": 641005,
"node_id": "MDQ6VXNlcjY0MTAwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/641005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Antymon",
"html_url": "https://github.com/Antymon",
"followers_url": "https://api.github.com/users/Antymon/followers",
"following_url": "https://api.github.com/users/Antymon/following{/other_user}",
"gists_url": "https://api.github.com/users/Antymon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Antymon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Antymon/subscriptions",
"organizations_url": "https://api.github.com/users/Antymon/orgs",
"repos_url": "https://api.github.com/users/Antymon/repos",
"events_url": "https://api.github.com/users/Antymon/events{/privacy}",
"received_events_url": "https://api.github.com/users/Antymon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ?"
] | 1,647,548,435,000 | 1,648,475,763,000 | null | NONE | null | Hi, I am suffering from the following issue:
## Describe the bug
Converting a pandas DataFrame of a certain size to an Arrow dataset deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=datasets.Dataset.from_pandas(d)
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas
table = InMemoryTable.from_pandas(
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas
return cls(pa.Table.from_pandas(*args, **kwargs))
File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas
File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)
```
## Steps to reproduce the bug
I have a dataset made by replicating a single example that mocks a dict representation of a publication.
I copy this example 140k times and create a pandas DataFrame.
I call 'Dataset.from_pandas' on it and it crashes.
```python
# Sample code to reproduce the bug
import copy
import datasets
import pandas
# serialized dict is quite long to be realistic representation of a publication content
paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', '111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': 
['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': ['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', 
'01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', '01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': 
['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': ['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', 
'1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', '11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}")
d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100))
arrow=datasets.Dataset.from_pandas(d)
```
## Expected results
The dataset should be converted without error.
## Actual results
Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets==1.18.4 pandas==1.3.5
- Platform: macOS 11.6 or CentOS Linux 7 (Core)
- Python version: Python 3.9.7
- PyArrow version: pyarrow==3.0.0
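A possible mitigation sketch (my assumption, not a verified fix for the underlying Arrow slicing issue): building the table from smaller pandas slices keeps each Arrow chunk small, which may sidestep the invalid struct slice. `chunk_size` is an arbitrary illustrative value; `paper_as_dict` is the record defined in the reproduction above.
```python
import copy

import pandas
import datasets

# Build the dataset from smaller batches and concatenate the pieces,
# instead of converting all 140,100 records in one go.
records = [copy.deepcopy(paper_as_dict) for _ in range(140_100)]
chunk_size = 10_000  # arbitrary batch size
parts = [
    datasets.Dataset.from_pandas(pandas.DataFrame.from_records(records[i : i + chunk_size]))
    for i in range(0, len(records), chunk_size)
]
arrow = datasets.concatenate_datasets(parts)
```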
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3959/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3958/comments | https://api.github.com/repos/huggingface/datasets/issues/3958/events | https://github.com/huggingface/datasets/pull/3958 | 1,172,657,981 | PR_kwDODunzps40nQU2 | 3,958 | Update Wikipedia metadata | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3958). All of your documentation changes will be reflected on that endpoint.",
"Once this last PR validated, I can take care of the integration of all the wikipedia update branch into master, @lhoestq. "
] | 1,647,539,405,000 | 1,647,865,608,000 | 1,647,865,607,000 | MEMBER | null | This PR updates:
- dataset card
- metadata JSON | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3958/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3958",
"html_url": "https://github.com/huggingface/datasets/pull/3958",
"diff_url": "https://github.com/huggingface/datasets/pull/3958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3958.patch",
"merged_at": 1647865607000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3957/comments | https://api.github.com/repos/huggingface/datasets/issues/3957/events | https://github.com/huggingface/datasets/pull/3957 | 1,172,401,455 | PR_kwDODunzps40magW | 3,957 | Fix xtreme s metrics | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry for the commit history mess, but will be squashed anyways so should be fine",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,524,344,000 | 1,647,611,179,000 | 1,647,610,936,000 | MEMBER | null | We in fact do need BABEL in xtreme-s | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3957/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3957",
"html_url": "https://github.com/huggingface/datasets/pull/3957",
"diff_url": "https://github.com/huggingface/datasets/pull/3957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3957.patch",
"merged_at": 1647610936000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3956/comments | https://api.github.com/repos/huggingface/datasets/issues/3956/events | https://github.com/huggingface/datasets/issues/3956 | 1,172,272,327 | I_kwDODunzps5F33TH | 3,956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"login": "amirj",
"id": 1645137,
"node_id": "MDQ6VXNlcjE2NDUxMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amirj",
"html_url": "https://github.com/amirj",
"followers_url": "https://api.github.com/users/amirj/followers",
"following_url": "https://api.github.com/users/amirj/following{/other_user}",
"gists_url": "https://api.github.com/users/amirj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amirj/subscriptions",
"organizations_url": "https://api.github.com/users/amirj/orgs",
"repos_url": "https://api.github.com/users/amirj/repos",
"events_url": "https://api.github.com/users/amirj/events{/privacy}",
"received_events_url": "https://api.github.com/users/amirj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @amirj, thanks for reporting.\r\n\r\nAt first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.\r\n\r\nFeel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made.",
"@albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:\r\n\r\n```\r\nfrom elasticsearch import Elasticsearch\r\nes_client = Elasticsearch(\"http://localhost:9200\")\r\ndataset.add_elasticsearch_index(column=\"e1\", es_client=es_client, es_index_name=\"e1_index\")\r\n```",
"Hi @amirj, \r\n\r\nI really think it is a version incompatibility issue between your Elasticsearch client and server:\r\n- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'\r\n- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`\r\n\r\nMoreover:\r\n- Looking at your stack trace, I deduce you are using Elasticsearch client **\"8\"** major version:\r\n - the Elasticsearch file \"elasticsearch/_sync/client/utils.py\" was created in version \"8.0.0a1\": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4\r\n - you can check your Elasticsearch client version by running this Python code:\r\n ```python\r\n import elasticsearch\r\n print(elasticsearch.__version__)\r\n ```\r\n\r\n- However, in the *Environment info*, you informed that the major version of your Eleasticsearch cluster server is **\"7\"** (\"7.10.2-SNAPSHOT\")\r\n\r\nCould you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists?",
"I'm closing this issue, @amirj.\r\n\r\nFeel free to re-open it if the problem persists. \r\n\r\n"
] | 1,647,517,393,000 | 1,648,454,503,000 | 1,648,454,401,000 | NONE | null | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raised the following error. The new Elasticsearch version is probably not compatible, though the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
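For comparison, the reporter's follow-up comment above reports that handing `datasets` a pre-built client works; a sketch of that variant (the index name is chosen arbitrarily here), where the client is given a full URL including the scheme:
```python
from elasticsearch import Elasticsearch
from datasets import load_dataset

squad = load_dataset("squad", split="validation")
# Build the client explicitly with a full URL (scheme included) and pass it through.
es_client = Elasticsearch("http://localhost:9200")
squad.add_elasticsearch_index("context", es_client=es_client, es_index_name="squad_context")
```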
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
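A minimal check for the client/server version mismatch suggested in the comments above (a sketch only; it assumes the cluster answers on localhost:9200 and uses `requests` just to read the server banner):
```python
import elasticsearch
import requests

print(elasticsearch.__version__)  # client version, e.g. (8, 1, 0)
server_info = requests.get("http://localhost:9200").json()
print(server_info["version"]["number"])  # server version, e.g. "7.10.2-SNAPSHOT"
# Per the Elasticsearch docs, the client major version should not exceed the server's.
```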
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- ElaticSearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3956/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3955/comments | https://api.github.com/repos/huggingface/datasets/issues/3955/events | https://github.com/huggingface/datasets/pull/3955 | 1,172,246,647 | PR_kwDODunzps40l5kG | 3,955 | Remove unncessary 'pylint disable' message in ReadMe | {
"login": "Datta0",
"id": 39181234,
"node_id": "MDQ6VXNlcjM5MTgxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39181234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Datta0",
"html_url": "https://github.com/Datta0",
"followers_url": "https://api.github.com/users/Datta0/followers",
"following_url": "https://api.github.com/users/Datta0/following{/other_user}",
"gists_url": "https://api.github.com/users/Datta0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Datta0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Datta0/subscriptions",
"organizations_url": "https://api.github.com/users/Datta0/orgs",
"repos_url": "https://api.github.com/users/Datta0/repos",
"events_url": "https://api.github.com/users/Datta0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Datta0/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,647,515,815,000 | 1,647,515,815,000 | null | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3955",
"html_url": "https://github.com/huggingface/datasets/pull/3955",
"diff_url": "https://github.com/huggingface/datasets/pull/3955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3955.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3954/comments | https://api.github.com/repos/huggingface/datasets/issues/3954/events | https://github.com/huggingface/datasets/issues/3954 | 1,172,141,664 | I_kwDODunzps5F3XZg | 3,954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | {
"login": "MatanBenChorin",
"id": 49593805,
"node_id": "MDQ6VXNlcjQ5NTkzODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/49593805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatanBenChorin",
"html_url": "https://github.com/MatanBenChorin",
"followers_url": "https://api.github.com/users/MatanBenChorin/followers",
"following_url": "https://api.github.com/users/MatanBenChorin/following{/other_user}",
"gists_url": "https://api.github.com/users/MatanBenChorin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatanBenChorin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatanBenChorin/subscriptions",
"organizations_url": "https://api.github.com/users/MatanBenChorin/orgs",
"repos_url": "https://api.github.com/users/MatanBenChorin/repos",
"events_url": "https://api.github.com/users/MatanBenChorin/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatanBenChorin/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.",
"Hi, \r\nThank you",
"Thanks for reporting. We are looking at it and will give updates here.",
"I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:\r\n\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: NameError\r\nMessage: name 'HebrewSquad' is not defined\r\n```",
"The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> hf_token = \"hf_...\" # <- required because the dataset is gated\r\n>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)\r\n...\r\nNameError: name 'HebrewSquad' is not defined\r\n```",
"Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)\r\n\r\nHere is the fix @MatanBenChorin :\r\n\r\n```diff\r\n- HebrewSquad(\r\n+ HebrewSquadConfig(\r\n```"
] | 1,647,509,891,000 | 1,649,341,433,000 | null | NONE | null | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset? Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3954/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3953/comments | https://api.github.com/repos/huggingface/datasets/issues/3953/events | https://github.com/huggingface/datasets/issues/3953 | 1,172,123,736 | I_kwDODunzps5F3TBY | 3,953 | Add ImageNet Sketch | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | {
"login": "choprahetarth",
"id": 34271010,
"node_id": "MDQ6VXNlcjM0MjcxMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/34271010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/choprahetarth",
"html_url": "https://github.com/choprahetarth",
"followers_url": "https://api.github.com/users/choprahetarth/followers",
"following_url": "https://api.github.com/users/choprahetarth/following{/other_user}",
"gists_url": "https://api.github.com/users/choprahetarth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/choprahetarth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/choprahetarth/subscriptions",
"organizations_url": "https://api.github.com/users/choprahetarth/orgs",
"repos_url": "https://api.github.com/users/choprahetarth/repos",
"events_url": "https://api.github.com/users/choprahetarth/events{/privacy}",
"received_events_url": "https://api.github.com/users/choprahetarth/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "choprahetarth",
"id": 34271010,
"node_id": "MDQ6VXNlcjM0MjcxMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/34271010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/choprahetarth",
"html_url": "https://github.com/choprahetarth",
"followers_url": "https://api.github.com/users/choprahetarth/followers",
"following_url": "https://api.github.com/users/choprahetarth/following{/other_user}",
"gists_url": "https://api.github.com/users/choprahetarth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/choprahetarth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/choprahetarth/subscriptions",
"organizations_url": "https://api.github.com/users/choprahetarth/orgs",
"repos_url": "https://api.github.com/users/choprahetarth/repos",
"events_url": "https://api.github.com/users/choprahetarth/events{/privacy}",
"received_events_url": "https://api.github.com/users/choprahetarth/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Can you assign this task to me? @nreimers @mariosasko ",
"Hi! Sure! Let us know if you need any pointers."
] | 1,647,508,831,000 | 1,647,611,694,000 | null | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** ImageNet Sketch
- **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images that matches the ImageNet classification validation set in categories and scale.
- **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549)
- **Data:** https://github.com/HaohanWang/ImageNet-Sketch
- **Motivation:** Allows for evaluating the robustness of vision models.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3953/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3952/comments | https://api.github.com/repos/huggingface/datasets/issues/3952/events | https://github.com/huggingface/datasets/issues/3952 | 1,171,895,531 | I_kwDODunzps5F2bTr | 3,952 | Checksum error for glue sst2, stsb, rte etc datasets | {
"login": "ravindra-ut",
"id": 22090962,
"node_id": "MDQ6VXNlcjIyMDkwOTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/22090962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravindra-ut",
"html_url": "https://github.com/ravindra-ut",
"followers_url": "https://api.github.com/users/ravindra-ut/followers",
"following_url": "https://api.github.com/users/ravindra-ut/following{/other_user}",
"gists_url": "https://api.github.com/users/ravindra-ut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravindra-ut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravindra-ut/subscriptions",
"organizations_url": "https://api.github.com/users/ravindra-ut/orgs",
"repos_url": "https://api.github.com/users/ravindra-ut/repos",
"events_url": "https://api.github.com/users/ravindra-ut/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravindra-ut/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi, @ravindra-ut.\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"glue\", \"sst2\")\r\nDownloading builder script: 28.8kB [00:00, 11.6MB/s] \r\nDownloading metadata: 28.7kB [00:00, 12.9MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 7.44M/7.44M [00:01<00:00, 5.82MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 895.96it/s]\r\n\r\nIn [3]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n``` \r\n\r\nMoreover, I see in your traceback that your error was for an URL at https://firebasestorage.googleapis.com\r\nHowever, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229\r\n\r\nCould you please try to update `datasets`\r\n```shell\r\npip install -U datasets\r\n```\r\nand then force redownload\r\n```python\r\nds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\n```\r\nto update the cache?\r\n\r\nPlease, feel free to reopen this issue if the problem persists."
] | 1,647,488,747,000 | 1,647,501,015,000 | 1,647,501,014,000 | NONE | null | ## Describe the bug
Checksum error for glue sst2, stsb, rte etc datasets
## Steps to reproduce the bug
```python
>>> nlp.load_dataset('glue', 'sst2')
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 73.0/73.0 [00:00<00:00, 18.2kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Expected results
dataset load should succeed without checksum error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Environment info
- `datasets` version: '1.18.3'
- Platform: Mac OS
- Python version: Python 3.8.9
- PyArrow version: '7.0.0'
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3952/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3951/comments | https://api.github.com/repos/huggingface/datasets/issues/3951/events | https://github.com/huggingface/datasets/issues/3951 | 1,171,568,814 | I_kwDODunzps5F1Liu | 3,951 | Forked streaming datasets try to `open` data urls rather than use network | {
"login": "dlwh",
"id": 9633,
"node_id": "MDQ6VXNlcjk2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlwh",
"html_url": "https://github.com/dlwh",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"repos_url": "https://api.github.com/users/dlwh/repos",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to the forked process (so it fails to open remote files and start streaming data from them). In particular `open` is supposed to be mocked by our `xopen` function that is an extended open that supports remote files. Let me try to fix this"
] | 1,647,465,662,000 | 1,648,472,470,000 | null | NONE | null | ## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked worker processes try to `open` the data URLs as local files rather than streaming them over the network.
## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
import torch.utils.data
# work around #3950
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
pass
def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset:
return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling)
if __name__ == '__main__':
freeze_support()
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
ds = _ensure_format(ds)
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
I'd expect the dataset to load the url correctly and produce examples.
## Actual results
```
warnings.warn(
***** Running training *****
Num examples = 8000
Num Epochs = 9223372036854775807
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 1000
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__
for key, example in self._iter():
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter
yield from ex_iterable
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples
with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15.
0%| | 0/1000 [00:02<?, ?it/s]
```
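For reference, a rough illustration (my own sketch, not the library's actual mechanism) of the difference between the plain `open` the worker ends up calling and the extended, URL-aware open (the `xopen` helper mentioned in the comment above) that streaming is supposed to inject:
```python
import gzip
import fsspec

url = "https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz"

# open(url, "rb") fails in the forked worker because the URL is treated as a local path.
# An fsspec-style open resolves it over the network instead:
with fsspec.open(url, "rb") as f:
    with gzip.open(f, "rt", encoding="utf-8") as g:
        print(g.readline())
```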
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3951/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3950/comments | https://api.github.com/repos/huggingface/datasets/issues/3950/events | https://github.com/huggingface/datasets/issues/3950 | 1,171,560,585 | I_kwDODunzps5F1JiJ | 3,950 | Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1 | {
"login": "dlwh",
"id": 9633,
"node_id": "MDQ6VXNlcjk2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlwh",
"html_url": "https://github.com/dlwh",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"repos_url": "https://api.github.com/users/dlwh/repos",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also taking a look at your second issue, which is more technical"
] | 1,647,465,251,000 | 1,649,076,320,000 | null | NONE | null | ## Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
## Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error.
## Actual results
```
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
return self._get_iterator()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__
w.start()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset'
0%| | 0/1000 [00:00<?, ?it/s]
```
This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset`. (Note that you have to call `with_format("torch")`, or you get an exception because the dataset has no `len`.) However, any lambdas etc. used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together).
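For illustration, a minimal sketch of the "module-level class" idea — the names below are made up for this sketch and are not the actual `datasets` internals:
```python
import torch

class TorchIterableDatasetWrapper(torch.utils.data.IterableDataset):
    """Defined at module level so pickle can resolve it by its qualified name."""

    def __init__(self, hf_iterable):
        self.hf_iterable = hf_iterable

    def __iter__(self):
        yield from self.hf_iterable

def to_torch(hf_iterable):
    # Returning an instance of a module-level class keeps the result picklable,
    # unlike a class defined inside this function.
    return TorchIterableDatasetWrapper(hf_iterable)
```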
Note that if you bypass this crash you get another crash. (I'll file a separate bug).
## Environment info
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3950/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3949/comments | https://api.github.com/repos/huggingface/datasets/issues/3949/events | https://github.com/huggingface/datasets/pull/3949 | 1,171,467,981 | PR_kwDODunzps40jia- | 3,949 | Remove GLEU metric | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3949). All of your documentation changes will be reflected on that endpoint."
] | 1,647,459,331,000 | 1,647,526,056,000 | null | CONTRIBUTOR | null | Remove the GLEU metric as it is not actually implemented. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3949/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3949",
"html_url": "https://github.com/huggingface/datasets/pull/3949",
"diff_url": "https://github.com/huggingface/datasets/pull/3949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3949.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3948/comments | https://api.github.com/repos/huggingface/datasets/issues/3948/events | https://github.com/huggingface/datasets/pull/3948 | 1,171,460,560 | PR_kwDODunzps40jg1F | 3,948 | Google BLEU Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"A few things that aren't clear for me:\r\n- \"Because it performs better on individual sentence pairs as compared to BLEU, Google BLEU has also been used in RL experiments.\" -- why is this the case? why would that make it more usable for RL? (also, you should put \"Reinforcement Learning\" explicitly, not just the acronym)\r\n- (Minor issue) -- I put inputs before the first example code, I think that's clearer somehow\r\n\r\nOtherwise, it looks great, good job @emibaylor !\r\n"
] | 1,647,458,837,000 | 1,647,878,666,000 | 1,647,878,665,000 | CONTRIBUTOR | null | Add metric card for Google BLEU (GLEU) metric
One thing I noticed while writing this up is that, while this metric was made specifically to be better than BLEU at the sentence level instead of the corpus level, the current implementation only allows the calculation of the corpus-level statistic. I think changing this would be a good thing to put on the to-do list for the future. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3948/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3948",
"html_url": "https://github.com/huggingface/datasets/pull/3948",
"diff_url": "https://github.com/huggingface/datasets/pull/3948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3948.patch",
"merged_at": 1647878665000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3947/comments | https://api.github.com/repos/huggingface/datasets/issues/3947/events | https://github.com/huggingface/datasets/pull/3947 | 1,171,452,854 | PR_kwDODunzps40jfLq | 3,947 | BLEU metric card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some thoughts:\r\n- For values, e.g. \"Defaults to False\", I would put False in code: `False`. Same for : \"Defaults to `4`.\"\r\n- I would put the following remark in \"Limitations\": \r\n> \"BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.\"\r\n\r\n- Add some values from the original BLEU paper (https://aclanthology.org/P02-1040.pdf)"
] | 1,647,458,407,000 | 1,648,565,990,000 | 1,648,565,654,000 | CONTRIBUTOR | null | Add BLEU metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3947",
"html_url": "https://github.com/huggingface/datasets/pull/3947",
"diff_url": "https://github.com/huggingface/datasets/pull/3947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3947.patch",
"merged_at": 1648565653000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3946/comments | https://api.github.com/repos/huggingface/datasets/issues/3946/events | https://github.com/huggingface/datasets/pull/3946 | 1,171,239,287 | PR_kwDODunzps40i1L3 | 3,946 | Add newline to text dataset builder for controlling universal newlines mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3946). All of your documentation changes will be reflected on that endpoint.",
"The failing CI test has nothing to do with this PR."
] | 1,647,447,071,000 | 1,647,958,817,000 | null | MEMBER | null | Fix #3804. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3946",
"html_url": "https://github.com/huggingface/datasets/pull/3946",
"diff_url": "https://github.com/huggingface/datasets/pull/3946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3946.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3945/comments | https://api.github.com/repos/huggingface/datasets/issues/3945/events | https://github.com/huggingface/datasets/pull/3945 | 1,171,222,257 | PR_kwDODunzps40ixmc | 3,945 | Fix comet metric | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Finally I'm done updating the dependencies ^^'\r\n\r\ncc @sashavor can you review my changes in the metric card please ?",
"Looks good to me! Just fixed a tiny typo :wink: ",
"Thanks !"
] | 1,647,446,207,000 | 1,647,961,812,000 | 1,647,961,530,000 | MEMBER | null | The COMET metric has been broken for a while since big breaking changes happened. We did not catch them in the CI because the slow test mocks the download_model function that was changed.
This PR fixes the metric, updates the download_model mock and updates the doctest. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3945",
"html_url": "https://github.com/huggingface/datasets/pull/3945",
"diff_url": "https://github.com/huggingface/datasets/pull/3945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3945.patch",
"merged_at": 1647961530000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3944/comments | https://api.github.com/repos/huggingface/datasets/issues/3944/events | https://github.com/huggingface/datasets/pull/3944 | 1,171,209,510 | PR_kwDODunzps40iu4n | 3,944 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,445,586,000 | 1,647,539,454,000 | 1,647,539,225,000 | CONTRIBUTOR | null | Proposing COMET metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3944/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3944",
"html_url": "https://github.com/huggingface/datasets/pull/3944",
"diff_url": "https://github.com/huggingface/datasets/pull/3944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3944.patch",
"merged_at": 1647539225000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3943/comments | https://api.github.com/repos/huggingface/datasets/issues/3943/events | https://github.com/huggingface/datasets/pull/3943 | 1,171,185,070 | PR_kwDODunzps40ipnu | 3,943 | [Doc] Don't use v for version tags on GitHub | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3943). All of your documentation changes will be reflected on that endpoint."
] | 1,647,444,510,000 | 1,647,517,586,000 | 1,647,517,585,000 | MEMBER | null | This removes the `v` automatically used by `doc-builder` for versions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3943/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3943",
"html_url": "https://github.com/huggingface/datasets/pull/3943",
"diff_url": "https://github.com/huggingface/datasets/pull/3943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3943.patch",
"merged_at": 1647517585000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3942/comments | https://api.github.com/repos/huggingface/datasets/issues/3942/events | https://github.com/huggingface/datasets/issues/3942 | 1,171,177,122 | I_kwDODunzps5Fzr6i | 3,942 | reddit_tifu dataset: Checksums didn't match for dataset source files | {
"login": "XingxingZhang",
"id": 8507585,
"node_id": "MDQ6VXNlcjg1MDc1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XingxingZhang",
"html_url": "https://github.com/XingxingZhang",
"followers_url": "https://api.github.com/users/XingxingZhang/followers",
"following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions",
"organizations_url": "https://api.github.com/users/XingxingZhang/orgs",
"repos_url": "https://api.github.com/users/XingxingZhang/repos",
"events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/XingxingZhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773",
"thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset(\"...\", download_mode=\"force_redownload\")` fixed \r\n the bug.\r\n\r\nusing the following as you suggested in another thread can also fixed the bug\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n",
"The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.\r\n\r\nYou can now install from PyPI, as usual:\r\n```shell\r\npip install -U datasets\r\n```\r\n"
] | 1,647,444,210,000 | 1,647,446,263,000 | 1,647,445,165,000 | NONE | null | ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3942/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3941/comments | https://api.github.com/repos/huggingface/datasets/issues/3941/events | https://github.com/huggingface/datasets/issues/3941 | 1,171,132,709 | I_kwDODunzps5FzhEl | 3,941 | billsum dataset: Checksums didn't match for dataset source files: | {
"login": "XingxingZhang",
"id": 8507585,
"node_id": "MDQ6VXNlcjg1MDc1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XingxingZhang",
"html_url": "https://github.com/XingxingZhang",
"followers_url": "https://api.github.com/users/XingxingZhang/followers",
"following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions",
"organizations_url": "https://api.github.com/users/XingxingZhang/orgs",
"repos_url": "https://api.github.com/users/XingxingZhang/repos",
"events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/XingxingZhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @XingxingZhang, thanks for reporting.\r\n\r\nThis was due to a change in Google Drive service:\r\n- #3786 \r\n\r\nWe have already fixed it:\r\n- #3787\r\n\r\nYou should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```",
"thanks @albertvillanova "
] | 1,647,442,328,000 | 1,647,446,228,000 | 1,647,445,604,000 | NONE | null | ## Describe the bug
When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files"
```
File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx']
```
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
load_dataset('billsum')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3941/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3940/comments | https://api.github.com/repos/huggingface/datasets/issues/3940/events | https://github.com/huggingface/datasets/pull/3940 | 1,171,106,853 | PR_kwDODunzps40iYxr | 3,940 | Create CoVAL metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,441,109,000 | 1,647,625,079,000 | 1,647,624,914,000 | CONTRIBUTOR | null | Initial CoVAL metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3940",
"html_url": "https://github.com/huggingface/datasets/pull/3940",
"diff_url": "https://github.com/huggingface/datasets/pull/3940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3940.patch",
"merged_at": 1647624914000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3939/comments | https://api.github.com/repos/huggingface/datasets/issues/3939/events | https://github.com/huggingface/datasets/issues/3939 | 1,170,882,331 | I_kwDODunzps5Fyj8b | 3,939 | Source links broken | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/",
"@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).\r\n\r\nTherefore, we have to do one of 2 options below:\r\n1. Make necessary changes on doc-builder side\r\nOR\r\n2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)\r\n\r\nI'll let you decide @albertvillanova @lhoestq @sgugger ",
"I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-)",
"For me it is OK to conform to the rest of libraries and tag/release with a preceding \"v\", rather than adding an extra argument to the doc builder just for `datasets`.\r\n\r\nLet me know if it is also OK for you @lhoestq. ",
"https://github.com/huggingface/doc-build/commit/f41c1e8ff900724213af4c75d287d8b61ecf6141\r\n\r\nhotfix so that `datasets` docs source button works correctly on hf.co/docs/datasets",
"We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.\r\n\r\nI think we could just have a parameter for the documentation - and having different URLs schemes for the source links that the users don't even see (they simply click on a button) is probably fine",
"This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).\r\n\r\nNote that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release)",
"Thanks for fixing @sgugger."
] | 1,647,429,467,000 | 1,647,664,892,000 | 1,647,664,892,000 | CONTRIBUTOR | null | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3939/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3938/comments | https://api.github.com/repos/huggingface/datasets/issues/3938/events | https://github.com/huggingface/datasets/pull/3938 | 1,170,875,417 | PR_kwDODunzps40hnjM | 3,938 | Avoid info log messages from transformers in FrugalScore metric | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3938). All of your documentation changes will be reflected on that endpoint."
] | 1,647,429,089,000 | 1,647,506,245,000 | 1,647,506,244,000 | MEMBER | null | Fix #3928. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3938/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3938",
"html_url": "https://github.com/huggingface/datasets/pull/3938",
"diff_url": "https://github.com/huggingface/datasets/pull/3938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3938.patch",
"merged_at": 1647506244000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3937/comments | https://api.github.com/repos/huggingface/datasets/issues/3937/events | https://github.com/huggingface/datasets/issues/3937 | 1,170,832,006 | I_kwDODunzps5FyXqG | 3,937 | Missing languages in lvwerra/github-code dataset | {
"login": "Eytan-S",
"id": 38702500,
"node_id": "MDQ6VXNlcjM4NzAyNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/38702500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eytan-S",
"html_url": "https://github.com/Eytan-S",
"followers_url": "https://api.github.com/users/Eytan-S/followers",
"following_url": "https://api.github.com/users/Eytan-S/following{/other_user}",
"gists_url": "https://api.github.com/users/Eytan-S/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eytan-S/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eytan-S/subscriptions",
"organizations_url": "https://api.github.com/users/Eytan-S/orgs",
"repos_url": "https://api.github.com/users/Eytan-S/repos",
"events_url": "https://api.github.com/users/Eytan-S/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eytan-S/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ",
"That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.\r\n\r\nThanks for reporting this @Eytan-S!",
"Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:\r\n```Python\r\n{'Assembly': 82847,\r\n 'Batchfile': 236755,\r\n 'C': 14127969,\r\n 'C#': 6793439,\r\n 'C++': 7368473,\r\n 'CMake': 175076,\r\n 'CSS': 1733625,\r\n 'Dockerfile': 331966,\r\n 'FORTRAN': 141963,\r\n 'GO': 2259363,\r\n 'Haskell': 340521,\r\n 'HTML': 11165464,\r\n 'Java': 19515696,\r\n 'JavaScript': 11829024,\r\n 'Julia': 58177,\r\n 'Lua': 576279,\r\n 'Makefile': 679338,\r\n 'Markdown': 8454049,\r\n 'PHP': 11181930,\r\n 'Perl': 497490,\r\n 'PowerShell': 136827,\r\n 'Python': 7203553,\r\n 'Ruby': 4479767,\r\n 'Rust': 321765,\r\n 'SQL': 655657,\r\n 'Scala': 0,\r\n 'Shell': 1382786,\r\n 'TypeScript': 0,\r\n 'TeX': 250764,\r\n 'Visual Basic': 155371}\r\n ```",
"@Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:\r\n\r\n| | Language |File Count| Size (GB)|\r\n|---:|:-------------|---------:|-------:|\r\n| 0 | Java | 19548190 | 107.7 |\r\n| 1 | C | 14143113 | 183.83 |\r\n| 2 | JavaScript | 11839883 | 87.82 |\r\n| 3 | HTML | 11178557 | 118.12 |\r\n| 4 | PHP | 11177610 | 61.41 |\r\n| 5 | Markdown | 8464626 | 23.09 |\r\n| 6 | C++ | 7380520 | 87.73 |\r\n| 7 | Python | 7226626 | 52.03 |\r\n| 8 | C# | 6811652 | 36.83 |\r\n| 9 | Ruby | 4473331 | 10.95 |\r\n| 10 | GO | 2265436 | 19.28 |\r\n| 11 | TypeScript | 1940406 | 24.59 |\r\n| 12 | CSS | 1734406 | 22.67 |\r\n| 13 | Shell | 1385648 | 3.01 |\r\n| 14 | Scala | 835755 | 3.87 |\r\n| 15 | Makefile | 679430 | 2.92 |\r\n| 16 | SQL | 656671 | 5.67 |\r\n| 17 | Lua | 578554 | 2.81 |\r\n| 18 | Perl | 497949 | 4.7 |\r\n| 19 | Dockerfile | 366505 | 0.71 |\r\n| 20 | Haskell | 340623 | 1.85 |\r\n| 21 | Rust | 322431 | 2.68 |\r\n| 22 | TeX | 251015 | 2.15 |\r\n| 23 | Batchfile | 236945 | 0.7 |\r\n| 24 | CMake | 175282 | 0.54 |\r\n| 25 | Visual Basic | 155652 | 1.91 |\r\n| 26 | FORTRAN | 142038 | 1.62 |\r\n| 27 | PowerShell | 136846 | 0.69 |\r\n| 28 | Assembly | 82905 | 0.78 |\r\n| 29 | Julia | 58317 | 0.29 |",
"Thanks @lvwerra. "
] | 1,647,426,723,000 | 1,647,932,963,000 | 1,647,874,247,000 | NONE | null | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3937/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3936/comments | https://api.github.com/repos/huggingface/datasets/issues/3936/events | https://github.com/huggingface/datasets/pull/3936 | 1,170,713,473 | PR_kwDODunzps40hE-P | 3,936 | Fix Wikipedia version and re-add tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint."
] | 1,647,420,484,000 | 1,647,450,247,000 | 1,647,450,245,000 | MEMBER | null | To keep backward compatibility when loading using "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with updated date "20220301":
- de
- en
- fr
- frr
- it
- simple
These pre-processed data can be accessed, e.g.:
```python
ds = load_dataset("wikipedia", "20220301.frr", split="train")
```
The next step will be to offer the pre-processed data for many other languages, but when loading using "wikimedia/wikipedia": https://huggingface.co/datasets/wikimedia/wikipedia | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3936/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3936",
"html_url": "https://github.com/huggingface/datasets/pull/3936",
"diff_url": "https://github.com/huggingface/datasets/pull/3936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3936.patch",
"merged_at": 1647450245000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3934/comments | https://api.github.com/repos/huggingface/datasets/issues/3934/events | https://github.com/huggingface/datasets/pull/3934 | 1,170,292,492 | PR_kwDODunzps40ftiC | 3,934 | Create MAUVE metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,380,167,000 | 1,647,625,094,000 | 1,647,624,853,000 | CONTRIBUTOR | null | Proposing a MAUVE metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3934",
"html_url": "https://github.com/huggingface/datasets/pull/3934",
"diff_url": "https://github.com/huggingface/datasets/pull/3934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3934.patch",
"merged_at": 1647624853000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3933/comments | https://api.github.com/repos/huggingface/datasets/issues/3933/events | https://github.com/huggingface/datasets/pull/3933 | 1,170,253,605 | PR_kwDODunzps40flNM | 3,933 | Update README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,377,525,000 | 1,647,539,484,000 | 1,647,539,257,000 | CONTRIBUTOR | null | Fixing missing triple quote | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3933",
"html_url": "https://github.com/huggingface/datasets/pull/3933",
"diff_url": "https://github.com/huggingface/datasets/pull/3933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3933.patch",
"merged_at": 1647539257000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3932/comments | https://api.github.com/repos/huggingface/datasets/issues/3932/events | https://github.com/huggingface/datasets/pull/3932 | 1,170,221,773 | PR_kwDODunzps40fd0T | 3,932 | Create SARI metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,376,643,000 | 1,647,625,021,000 | 1,647,624,775,000 | CONTRIBUTOR | null | SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: ) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3932",
"html_url": "https://github.com/huggingface/datasets/pull/3932",
"diff_url": "https://github.com/huggingface/datasets/pull/3932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3932.patch",
"merged_at": 1647624775000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3931/comments | https://api.github.com/repos/huggingface/datasets/issues/3931/events | https://github.com/huggingface/datasets/pull/3931 | 1,170,097,208 | PR_kwDODunzps40fBjx | 3,931 | Add align_labels_with_mapping docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,372,297,000 | 1,647,620,911,000 | 1,647,620,673,000 | MEMBER | null | This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko π ).
For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3931",
"html_url": "https://github.com/huggingface/datasets/pull/3931",
"diff_url": "https://github.com/huggingface/datasets/pull/3931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3931.patch",
"merged_at": 1647620673000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3930/comments | https://api.github.com/repos/huggingface/datasets/issues/3930/events | https://github.com/huggingface/datasets/pull/3930 | 1,170,087,793 | PR_kwDODunzps40e_fb | 3,930 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,371,819,000 | 1,649,085,795,000 | 1,649,085,448,000 | CONTRIBUTOR | null | Creating a README for IndicGLUE
cc @mcmillanmajora for fact checking in terms of languages (also, are there any limitations of the dataset or eval metric that I'm not aware of?) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3930/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3930",
"html_url": "https://github.com/huggingface/datasets/pull/3930",
"diff_url": "https://github.com/huggingface/datasets/pull/3930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3930.patch",
"merged_at": 1649085448000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3929/comments | https://api.github.com/repos/huggingface/datasets/issues/3929/events | https://github.com/huggingface/datasets/issues/3929 | 1,170,066,235 | I_kwDODunzps5Fvcs7 | 3,929 | Load a local dataset twice | {
"login": "caush",
"id": 28349961,
"node_id": "MDQ6VXNlcjI4MzQ5OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28349961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caush",
"html_url": "https://github.com/caush",
"followers_url": "https://api.github.com/users/caush/followers",
"following_url": "https://api.github.com/users/caush/following{/other_user}",
"gists_url": "https://api.github.com/users/caush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caush/subscriptions",
"organizations_url": "https://api.github.com/users/caush/orgs",
"repos_url": "https://api.github.com/users/caush/repos",
"events_url": "https://api.github.com/users/caush/events{/privacy}",
"received_events_url": "https://api.github.com/users/caush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_dataset(\"csv\", data_dir=\"data\")\r\n```\r\n\r\nAlternatively, you may also use:\r\n```python\r\ndataset = load_dataset(\"data\")"
] | 1,647,370,766,000 | 1,647,424,509,000 | 1,647,424,446,000 | NONE | null | ## Describe the bug
Loading a local "dataset" composed of two CSV files returns each row twice.
## Steps to reproduce the bug
Put the two attached files in a directory named "Data".
Then in Python:
import datasets as ds
ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'})
## Expected results
Should give something like the following (because each file contains only one data row):
Title, clicks
Truc et astuce, 123
Machin, 12
## Actual results
Gives
Title, clicks
Truc et astuce, 123
Machin, 12
Truc et astuce, 123
Machin, 12
## Environment info
[file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv)
[file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv)
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3929/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3928/comments | https://api.github.com/repos/huggingface/datasets/issues/3928/events | https://github.com/huggingface/datasets/issues/3928 | 1,170,017,132 | I_kwDODunzps5FvQts | 3,928 | Frugal score deprecations | {
"login": "Ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ierezell",
"html_url": "https://github.com/Ierezell",
"followers_url": "https://api.github.com/users/Ierezell/followers",
"following_url": "https://api.github.com/users/Ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/Ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/Ierezell/orgs",
"repos_url": "https://api.github.com/users/Ierezell/repos",
"events_url": "https://api.github.com/users/Ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ierezell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. "
] | 1,647,367,842,000 | 1,647,506,244,000 | 1,647,506,244,000 | NONE | null | ## Describe the bug
The frugal score returns a really verbose output with warnings that can be easily changed.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"])
```
## Expected results
A clear and concise description of the expected results.
```
{'scores': [0.9946]}
```
## Actual results
Specify the actual results or traceback.
```
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 864.09ba/s]
Using amp half precision backend
The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 1
Batch size = 64
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 4644.85it/s]
{'scores': [0.9946]}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
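A possible user-side workaround (an assumption on my part, not something verified in this report) could be to lower the logging verbosity of the underlying libraries before calling `compute`:
```python
# Hypothetical workaround: reduce transformers/datasets log verbosity before computing the metric
from transformers.utils import logging as hf_logging
from datasets.utils import logging as ds_logging

hf_logging.set_verbosity_error()
ds_logging.set_verbosity_error()
```
This would not hide the progress bars, so suppressing the logs inside the metric itself (as suggested in the comment above) remains the cleaner fix.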
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3928/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3927/comments | https://api.github.com/repos/huggingface/datasets/issues/3927/events | https://github.com/huggingface/datasets/pull/3927 | 1,170,016,465 | PR_kwDODunzps40ewN2 | 3,927 | Update main readme | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"What do you think @albertvillanova ?"
] | 1,647,367,799,000 | 1,648,548,827,000 | 1,648,548,500,000 | MEMBER | null | The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3927/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3927",
"html_url": "https://github.com/huggingface/datasets/pull/3927",
"diff_url": "https://github.com/huggingface/datasets/pull/3927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3927.patch",
"merged_at": 1648548500000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3926/comments | https://api.github.com/repos/huggingface/datasets/issues/3926/events | https://github.com/huggingface/datasets/pull/3926 | 1,169,945,052 | PR_kwDODunzps40ehVP | 3,926 | Doc maintenance | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3926). All of your documentation changes will be reflected on that endpoint."
] | 1,647,363,646,000 | 1,647,372,435,000 | 1,647,372,432,000 | MEMBER | null | This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3926/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3926",
"html_url": "https://github.com/huggingface/datasets/pull/3926",
"diff_url": "https://github.com/huggingface/datasets/pull/3926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3926.patch",
"merged_at": 1647372432000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3925/comments | https://api.github.com/repos/huggingface/datasets/issues/3925/events | https://github.com/huggingface/datasets/pull/3925 | 1,169,913,769 | PR_kwDODunzps40eaq8 | 3,925 | Fix main_classes docs index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm it's still not good \r\n![image](https://user-images.githubusercontent.com/42851186/158429361-e19ce25b-c259-4ded-8473-075deafdbb96.png)\r\n\r\nany idea what could cause this ?",
"Ok fixed :)"
] | 1,647,362,026,000 | 1,647,956,951,000 | 1,647,956,644,000 | MEMBER | null | Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types
![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
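For reference, a minimal sketch of the feature types whose docstring examples are referenced above (names and languages are illustrative only):
```python
from datasets import Features
from datasets.features import Translation, TranslationVariableLanguages

features = Features({
    "fixed_pair": Translation(languages=["en", "fr"]),            # same language pair for every example
    "variable_pairs": TranslationVariableLanguages(languages=["en", "fr", "de"]),  # pairs may vary per example
})
```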
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3925",
"html_url": "https://github.com/huggingface/datasets/pull/3925",
"diff_url": "https://github.com/huggingface/datasets/pull/3925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3925.patch",
"merged_at": 1647956644000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3924/comments | https://api.github.com/repos/huggingface/datasets/issues/3924/events | https://github.com/huggingface/datasets/pull/3924 | 1,169,805,813 | PR_kwDODunzps40eED5 | 3,924 | Document cases for github datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.",
"Yay!"
] | 1,647,357,010,000 | 1,649,183,595,000 | 1,647,358,883,000 | MEMBER | null | In general we recommend adding the new dataset under a username or organization in the Hugging Face Hub at [hf.co/datasets](hf.co/datasets), but users can still add a dataset on github in some cases.
I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on github:
- when you need the dataset to be reviewed
- when you need long-term maintenance from the HF team
- when there's no clear org name / namespace that you can put the dataset under | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3924",
"html_url": "https://github.com/huggingface/datasets/pull/3924",
"diff_url": "https://github.com/huggingface/datasets/pull/3924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3924.patch",
"merged_at": 1647358883000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3923/comments | https://api.github.com/repos/huggingface/datasets/issues/3923/events | https://github.com/huggingface/datasets/pull/3923 | 1,169,773,869 | PR_kwDODunzps40d9YU | 3,923 | Add methods to IterableDatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3923). All of your documentation changes will be reflected on that endpoint."
] | 1,647,355,563,000 | 1,647,362,708,000 | 1,647,362,706,000 | MEMBER | null | Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862, I added several methods to IterableDatasetDict (a usage sketch follows the list):
- map
- filter
- shuffle
- with_format
- cast
- cast_column
- remove_columns
- rename_column
- rename_columns
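A minimal usage sketch (the dataset and column names below are illustrative, not taken from this PR):
```python
from datasets import load_dataset

# load_dataset with streaming=True returns an IterableDatasetDict (one IterableDataset per split)
dsets = load_dataset("imdb", streaming=True)
dsets = dsets.map(lambda ex: {"text": ex["text"].lower(), "label": ex["label"]})
dsets = dsets.filter(lambda ex: len(ex["text"]) > 0)
dsets = dsets.shuffle(seed=42, buffer_size=1000)
dsets = dsets.rename_column("label", "labels")
dsets = dsets.with_format("torch")
```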
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3923/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3923",
"html_url": "https://github.com/huggingface/datasets/pull/3923",
"diff_url": "https://github.com/huggingface/datasets/pull/3923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3923.patch",
"merged_at": 1647362706000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3922/comments | https://api.github.com/repos/huggingface/datasets/issues/3922/events | https://github.com/huggingface/datasets/pull/3922 | 1,169,761,293 | PR_kwDODunzps40d6vm | 3,922 | Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_3922). All of your documentation changes will be reflected on that endpoint.",
"Unrelated CI test failure. This PR can be merged."
] | 1,647,354,988,000 | 1,647,360,424,000 | 1,647,360,423,000 | MEMBER | null | Fix #2957 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3922",
"html_url": "https://github.com/huggingface/datasets/pull/3922",
"diff_url": "https://github.com/huggingface/datasets/pull/3922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3922.patch",
"merged_at": 1647360422000
} | true |