url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[ns]) | updated_at (timestamp[ns]) | closed_at (timestamp[ns]) | author_association (string) | active_lock_reason (float64) | draft (float64) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (float64) | state_reason (string) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7143/comments | https://api.github.com/repos/huggingface/datasets/issues/7143/events | https://github.com/huggingface/datasets/pull/7143 | 2,512,327,211 | PR_kwDODunzps56xCm6 | 7,143 | Modify add_column() to optionally accept a pyarrow schema as param | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar"
} | [] | open | false | null | [] | null | [
"Requesting review @lhoestq \r\nI will also update the docs if this looks good.",
"Cool ! maybe you can rename the argument `feature` and with type `FeatureType` ? This way it would work the same way as `.cast_column()` ?",
"@lhoestq Since there is no way to get a `pyarrow.Schema` from a `FeatureType`, I had to go via `Features`. How does this look?",
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7143). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq done!"
] | 2024-09-08T10:56:57 | 2024-09-08T11:10:17 | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7143.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7143",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7143.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7143"
} | [Open Issue](https://github.com/huggingface/datasets/issues/7142)
**Before (Add + Cast)**:
```
from datasets import load_dataset, Value
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
ds = ds.add_column("new_col", lst)
# Assigns int64 to new_col by default
print(ds.features)
ds = ds.cast_column("new_col", Value(dtype="uint16", id=None))
print(ds.features)
```
**Before (Numpy Workaround)**:
```
from datasets import load_dataset
import numpy as np
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
ds = ds.add_column("new_col", np.array(lst, dtype=np.uint16))
print(ds.features)
```
**After**:
```
from datasets import load_dataset
import pyarrow as pa
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
schema = pa.schema([("new_col", pa.uint16())])
ds = ds.add_column("new_col", lst, pyarrow_schema=schema)
print(ds.features)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7143/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7142/comments | https://api.github.com/repos/huggingface/datasets/issues/7142/events | https://github.com/huggingface/datasets/issues/7142 | 2,512,244,938 | I_kwDODunzps6VvdDK | 7,142 | Specifying datatype when adding a column to a dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"#self-assign"
] | 2024-09-08T07:34:24 | 2024-09-08T07:35:26 | null | NONE | null | null | null | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function.
IMO this functionality should be natively supported.
https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674
### Your contribution
I can submit a PR for this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7142/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7141/comments | https://api.github.com/repos/huggingface/datasets/issues/7141/events | https://github.com/huggingface/datasets/issues/7141 | 2,510,797,653 | I_kwDODunzps6Vp7tV | 7,141 | Older datasets throwing safety errors with 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4",
"events_url": "https://api.github.com/users/alvations/events{/privacy}",
"followers_url": "https://api.github.com/users/alvations/followers",
"following_url": "https://api.github.com/users/alvations/following{/other_user}",
"gists_url": "https://api.github.com/users/alvations/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvations",
"id": 1050316,
"login": "alvations",
"node_id": "MDQ6VXNlcjEwNTAzMTY=",
"organizations_url": "https://api.github.com/users/alvations/orgs",
"received_events_url": "https://api.github.com/users/alvations/received_events",
"repos_url": "https://api.github.com/users/alvations/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvations/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvations"
} | [] | closed | false | null | [] | null | [
"I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval",
"Me too, didn't have this issue few hours ago.",
"same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?\r\n",
"Not a good idea, but commenting out the whole security block at `/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py` is a temporary workaround:\r\n\r\n```\r\n #security = kwargs.pop(\"security\", None)\r\n #if security is not None:\r\n # security = BlobSecurityInfo(\r\n # safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\n # )\r\n #self.security = security\r\n```\r\n",
"Uploading a dataset to Huggingface also results in the following error in the Dataset Preview:\r\n```\r\nThe full dataset viewer is not available (click to read why). Only showing a preview of the rows.\r\n'safe'\r\nError code: UnexpectedError\r\nNeed help to make the dataset viewer work? Make sure to review [how to configure the dataset viewer](link1), and [open a discussion](link2) for direct support.\r\n```\r\nI used jsonl format for the dataset in this case. Same exact dataset worked previously.",
"Same issue here. Even reverting to older version of `datasets` (e.g., `2.19.0`) results in same error:\r\n\r\n```python\r\n>>> datasets.load_dataset('allenai/ai2_arc', 'ARC-Easy')\r\n\r\nFile \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3048, in <listcomp>\r\n RepoFile(**path_info) if path_info[\"type\"] == \"file\" else RepoFolder(**path_info)\r\n File \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 534, in __init__\r\n safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\nKeyError: 'safe'\r\n```",
"i just had this issue a few minutes ago, crawled the internet and found nothing. came here to open an issue and found this. it is really frustrating. anyone found a fix?",
"hi, me and my team have the same problem",
"Yeah, this just suddenly appeared without client-side code changes, within the last hours.\r\n\r\nHere's a patch to fix the issue temporarily:\r\n```python\r\nimport huggingface_hub\r\ndef patched_repofolder_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.tree_id = kwargs.pop(\"oid\")\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n\r\n\r\ndef patched_repo_file_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.size = kwargs.pop(\"size\")\r\n self.blob_id = kwargs.pop(\"oid\")\r\n lfs = kwargs.pop(\"lfs\", None)\r\n if lfs is not None:\r\n lfs = huggingface_hub.hf_api.BlobLfsInfo(size=lfs[\"size\"], sha256=lfs[\"oid\"], pointer_size=lfs[\"pointerSize\"])\r\n self.lfs = lfs\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n self.security = None\r\n\r\n # backwards compatibility\r\n self.rfilename = self.path\r\n self.lastCommit = self.last_commit\r\n\r\n\r\nhuggingface_hub.hf_api.RepoFile.__init__ = patched_repo_file_init\r\nhuggingface_hub.hf_api.RepoFolder.__init__ = patched_repofolder_init\r\n```\r\n",
"Also discussed here:\r\nhttps://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/1",
"i'm thinking this should be a server issue, i mean no client code was changed on my end. so weird!",
"As far as I can tell, this seems to be happening with **all** datasets that use RepoFolder (probably represents most datasets on huggingface, right?)",
"> Here is a temporary fix for the problem: https://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/12?u=mlscientist\r\n\r\nthis doesn't seem to work!",
"In case you are using Colab or similar, remember to restart your session after modyfing the hf_api.py file",
"No need to modify the file directly, just monkey-patch.\r\n\r\nI'm now more sure that the error appears because the backend expects the api code to look like it does on `main`. If `RepoFile` and `RepoFolder` look about like they look on main, they work again.\r\n\r\nIf not fixed like above, a secondary error that will appear is \r\n```\r\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n \"tree_id\": path_info.tree_id,\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'RepoFolder' object has no attribute 'tree_id'\r\n```\r\n",
"We've reverted the deployment, please let us know if the issue still persists!",
"thanks @muellerzr!"
] | 2024-09-06T16:26:30 | 2024-09-06T21:14:14 | 2024-09-06T19:09:29 | NONE | null | null | null | ### Describe the bug
The dataset loading was throwing some safety errors for this popular dataset `wmt14`.
[in]:
```
import datasets
# train_data = datasets.load_dataset("wmt14", "de-en", split="train")
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
```
[out]:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>()
2
3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train")
----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train")
5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
12 frames
[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs)
636 if security is not None:
637 security = BlobSecurityInfo(
--> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"]
639 )
640 self.security = security
KeyError: 'safe'
```
### Steps to reproduce the bug
See above.
### Expected behavior
Dataset properly loaded.
### Environment info
version: 2.21.0 | {
"+1": 26,
"-1": 0,
"confused": 2,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 28,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7141/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7139/comments | https://api.github.com/repos/huggingface/datasets/issues/7139/events | https://github.com/huggingface/datasets/issues/7139 | 2,508,078,858 | I_kwDODunzps6Vfj8K | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/105094708?v=4",
"events_url": "https://api.github.com/users/fscdc/events{/privacy}",
"followers_url": "https://api.github.com/users/fscdc/followers",
"following_url": "https://api.github.com/users/fscdc/following{/other_user}",
"gists_url": "https://api.github.com/users/fscdc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fscdc",
"id": 105094708,
"login": "fscdc",
"node_id": "U_kgDOBkOeNA",
"organizations_url": "https://api.github.com/users/fscdc/orgs",
"received_events_url": "https://api.github.com/users/fscdc/received_events",
"repos_url": "https://api.github.com/users/fscdc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fscdc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fscdc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fscdc"
} | [] | open | false | null | [] | null | [] | 2024-09-05T15:12:22 | 2024-09-05T15:12:22 | null | NONE | null | null | null | ### Describe the bug
```python
import os

from datasets import load_dataset
from torchvision.transforms import CenterCrop, Compose, RandomHorizontalFlip, RandomResizedCrop, Resize, ToTensor


def get_dataset(data_path, train_folder="train", val_folder="val"):
    traindir = os.path.join(data_path, train_folder)
    valdir = os.path.join(data_path, val_folder)

    def transform_val_examples(examples):
        transform = Compose([
            Resize(256),
            CenterCrop(224),
            ToTensor(),
        ])
        examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
        return examples

    def transform_train_examples(examples):
        transform = Compose([
            RandomResizedCrop(224),
            RandomHorizontalFlip(),
            ToTensor(),
        ])
        examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
        return examples

    # @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)
    # train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
    # test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)
    train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
    test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)
    print(train_set["label"])

    train_set.set_transform(transform_train_examples)
    test_set.set_transform(transform_val_examples)

    return train_set, test_set
```
Running the code above, the output of the print is a list of None values:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
### Steps to reproduce the bug
1. just ran the code
2. see the print
### Expected behavior
I do not know how to fix this; can anyone provide help? It is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7139/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7138/comments | https://api.github.com/repos/huggingface/datasets/issues/7138/events | https://github.com/huggingface/datasets/issues/7138 | 2,507,738,308 | I_kwDODunzps6VeQzE | 7,138 | Cache only changed columns? | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`."
] | 2024-09-05T12:56:47 | 2024-09-05T13:56:13 | null | CONTRIBUTOR | null | null | null | ### Feature request
Cache only the actual changes to the dataset i.e. changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again.
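A rough sketch of the column-wise workaround mentioned in the comments (the column names `image` and `meta` are made up for illustration): map only the small column that changes and glue it back with `concatenate_datasets(..., axis=1)`, so the large columns are never rewritten:
```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"image": ["img0", "img1"], "meta": [0, 1]})

# Map (and therefore cache) only the cheap column...
small = ds.remove_columns(["image"]).map(lambda ex: {"meta": ex["meta"] + 1})

# ...then re-attach it column-wise to the untouched heavy columns.
ds = concatenate_datasets([ds.remove_columns(["meta"]), small], axis=1)
print(ds[0])  # {'image': 'img0', 'meta': 1}
```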
### Your contribution
Is this even viable in the current architecture of the package?
I quickly looked into it and it seems it would require significant changes.
I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7138/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7137/comments | https://api.github.com/repos/huggingface/datasets/issues/7137/events | https://github.com/huggingface/datasets/issues/7137 | 2,506,851,048 | I_kwDODunzps6Va4Lo | 7,137 | dataset_info sequence format unexpected behavior in README.md YAML | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph"
} | [] | open | false | null | [] | null | [
"The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: string\r\n\r\n\r\n# data\r\n{\"answers\": {\"text\": \"ADDRESS\", \"label\": \"abc\"}}\r\n```"
] | 2024-09-05T06:06:06 | 2024-09-05T06:10:56 | null | NONE | null | null | null | ### Describe the bug
When working on the `dataset_info` YAML, I find that my data column with format `list[dict[str, str]]` cannot be encoded correctly.
My data looks like
```
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
```
My `dataset_info` in README.md is:
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
**Error log**:
```
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from list<item: struct<text: string, label: string>> to struct using function cast_struct
```
## Potential Reason
After some analysis, it turns out that my yaml config is requiring `dict[str, list[str]]` instead of `list[dict[str, str]]`. It would work if I change my data to
```
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
The following two different `dataset_info` configurations are actually equivalent.
```
dataset_info:
- config_name: default
features:
- name: answers
dtype:
- name: text
sequence: string
- name: label
sequence: string
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
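A short Python sketch (my own illustration, not from the report) of the same behavior at the `Features` level, where a `Sequence` wrapping a dict is documented to be converted into a dict of lists, while a plain Python list of a dict keeps list-of-dict rows:
```python
from datasets import Features, Sequence, Value

# Sequence of a struct is stored as a struct of lists (the behavior hit above)
seq = Features({"answers": Sequence({"text": Value("string"), "label": Value("string")})})

# A plain Python list of a struct keeps rows shaped like [{"text": ..., "label": ...}]
lst = Features({"answers": [{"text": Value("string"), "label": Value("string")}]})

print(seq["answers"])  # Sequence(feature={'text': ..., 'label': ...}, length=-1)
print(lst["answers"])  # [{'text': Value(dtype='string', ...), 'label': ...}]
```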
### Steps to reproduce the bug
```
# README.md
---
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "test.jsonl"
---
# test.jsonl
# expected but not working
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
# unexpected but working
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
### Expected behavior
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
Should work on following data format:
```
{"answers":[{"text":"ADDRESS", "label": "abc"}]}
```
### Environment info
- `datasets` version: 2.21.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7137/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7136/comments | https://api.github.com/repos/huggingface/datasets/issues/7136/events | https://github.com/huggingface/datasets/pull/7136 | 2,506,115,857 | PR_kwDODunzps56b9R- | 7,136 | Do not consume unnecessary memory during sharding | {
"avatar_url": "https://avatars.githubusercontent.com/u/12694897?v=4",
"events_url": "https://api.github.com/users/janEbert/events{/privacy}",
"followers_url": "https://api.github.com/users/janEbert/followers",
"following_url": "https://api.github.com/users/janEbert/following{/other_user}",
"gists_url": "https://api.github.com/users/janEbert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/janEbert",
"id": 12694897,
"login": "janEbert",
"node_id": "MDQ6VXNlcjEyNjk0ODk3",
"organizations_url": "https://api.github.com/users/janEbert/orgs",
"received_events_url": "https://api.github.com/users/janEbert/received_events",
"repos_url": "https://api.github.com/users/janEbert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/janEbert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janEbert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/janEbert"
} | [] | open | false | null | [] | null | [] | 2024-09-04T19:26:06 | 2024-09-04T19:28:23 | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7136",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7136"
} | When sharding `IterableDataset`s, a temporary list is created and then indexed. With standard `islice` functionality there is no need to build a temporary list of a potentially very large step/world size, so we avoid it.
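A minimal sketch of the idea (my own illustration with made-up values, not the actual patch): `itertools.islice` can pick every `world_size`-th shard starting at `rank` without materializing an intermediate list:
```python
from itertools import islice

shards = list(range(10))  # stand-in for a dataset's shard indices
rank, world_size = 1, 4

# islice advances the iterator lazily instead of building a step-sized list first
chosen = list(islice(shards, rank, None, world_size))
print(chosen)  # [1, 5, 9]
```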
```shell
pytest tests/test_distributed.py -k iterable
```
Runs successfully. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7135/comments | https://api.github.com/repos/huggingface/datasets/issues/7135/events | https://github.com/huggingface/datasets/issues/7135 | 2,503,318,328 | I_kwDODunzps6VNZs4 | 7,135 | Bug: Type Mismatch in Dataset Mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/45327989?v=4",
"events_url": "https://api.github.com/users/marko1616/events{/privacy}",
"followers_url": "https://api.github.com/users/marko1616/followers",
"following_url": "https://api.github.com/users/marko1616/following{/other_user}",
"gists_url": "https://api.github.com/users/marko1616/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marko1616",
"id": 45327989,
"login": "marko1616",
"node_id": "MDQ6VXNlcjQ1MzI3OTg5",
"organizations_url": "https://api.github.com/users/marko1616/orgs",
"received_events_url": "https://api.github.com/users/marko1616/received_events",
"repos_url": "https://api.github.com/users/marko1616/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marko1616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marko1616/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marko1616"
} | [] | open | false | null | [] | null | [
"By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dict(data)\r\n\r\n# Mapping function to convert label to string\r\ndef add_one(example):\r\n example['label'] += 1\r\n return example\r\n\r\n# Applying the mapping function\r\ndataset = dataset.map(add_one)\r\n\r\n# Iterating over the dataset to show results\r\nfor item in dataset:\r\n print(item)\r\n print(type(item['label']))\r\n```",
"Hello, thanks for submitting an issue.\r\n\r\nFWIU, the issue is that `datasets` tries to limit casting [ref](https://github.com/huggingface/datasets/blob/ca58154bba185c1916ca5eea4e33b27258642044/src/datasets/arrow_writer.py#L526) and as such will try to convert your strings back to int to preserve the `Features`. \r\n\r\nA quick solution would be to use `dataset.cast` or to supply `features` when calling `dataset.map`.\r\n\r\n\r\n```python\r\n# using Dataset.cast\r\ndataset = dataset.cast_column('label', Value('string'))\r\n\r\n# Alternative, supply features\r\ndataset = dataset.map(add_one, features=Features({**dataset.features, 'label': Value('string')}))\r\n```",
"LGTM! Thanks for the review.\r\n\r\nJust to clarify, is this intended behavior, or is it something that might be addressed in a future update?\r\nI'll leave this issue open until it's fixed if this is not the intended behavior."
] | 2024-09-03T16:37:01 | 2024-09-05T14:09:05 | null | NONE | null | null | null | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string.
## Reproduction Code
Below is a Python script that demonstrates the problem:
```python
from datasets import Dataset
# Original data
data = {
'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],
'label': [0, 1, 0, 1, 1, 0]
}
# Creating a Dataset object
dataset = Dataset.from_dict(data)
# Mapping function to convert label to string
def add_one(example):
example['label'] = str(example['label'])
return example
# Applying the mapping function
dataset = dataset.map(add_one)
# Iterating over the dataset to show results
for item in dataset:
print(item)
print(type(item['label']))
```
## Expected Output
After applying the mapping function, the expected output should have the `label` field as strings:
```plaintext
{'text': 'Hello', 'label': '0'}
<class 'str'>
{'text': 'world', 'label': '1'}
<class 'str'>
{'text': 'this', 'label': '0'}
<class 'str'>
{'text': 'is', 'label': '1'}
<class 'str'>
{'text': 'a', 'label': '1'}
<class 'str'>
{'text': 'test', 'label': '0'}
<class 'str'>
```
## Actual Output
The actual output still shows the `label` field values as integers:
```plaintext
{'text': 'Hello', 'label': 0}
<class 'int'>
{'text': 'world', 'label': 1}
<class 'int'>
{'text': 'this', 'label': 0}
<class 'int'>
{'text': 'is', 'label': 1}
<class 'int'>
{'text': 'a', 'label': 1}
<class 'int'>
{'text': 'test', 'label': 0}
<class 'int'>
```
## Why necessary
In the case of image processing, we often need to convert a PIL image to a tensor under the same column name.
Thanks to every dev who reviews this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7135/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7134/comments | https://api.github.com/repos/huggingface/datasets/issues/7134/events | https://github.com/huggingface/datasets/issues/7134 | 2,499,484,041 | I_kwDODunzps6U-xmJ | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | {
"avatar_url": "https://avatars.githubusercontent.com/u/46371349?v=4",
"events_url": "https://api.github.com/users/navidmafi/events{/privacy}",
"followers_url": "https://api.github.com/users/navidmafi/followers",
"following_url": "https://api.github.com/users/navidmafi/following{/other_user}",
"gists_url": "https://api.github.com/users/navidmafi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/navidmafi",
"id": 46371349,
"login": "navidmafi",
"node_id": "MDQ6VXNlcjQ2MzcxMzQ5",
"organizations_url": "https://api.github.com/users/navidmafi/orgs",
"received_events_url": "https://api.github.com/users/navidmafi/received_events",
"repos_url": "https://api.github.com/users/navidmafi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/navidmafi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navidmafi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/navidmafi"
} | [] | open | false | null | [] | null | [] | 2024-09-01T13:55:41 | 2024-09-02T10:34:53 | null | NONE | null | null | null | ### Describe the bug
Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS.
I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM.
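A possible alternative worth noting (my suggestion, not from the report; `ds` refers to the dataset loaded in the reproduction below): apply the conversion lazily with `with_transform`, so the (H, W, 1) arrays are produced on access instead of being written back through `map`:
```python
import numpy as np

def to_gray(batch):
    # convert each PIL image to an (H, W, 1) uint8 array on the fly
    batch["image"] = [np.expand_dims(np.asarray(img.convert("L")), -1) for img in batch["image"]]
    return batch

ds_gray = ds.with_transform(to_gray)  # nothing is materialized or cached
```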
### Steps to reproduce the bug
Below is a minimal example using two methods to get the desired output. Both of which don't work
```py
import tensorflow as tf
import datasets
import numpy as np
ds = datasets.load_dataset("project-sloth/captcha-images")
to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)}
ds_gray = ds.map(to_gray_pillow)
# Alternatively
ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow")
to_gray_tf = lambda sample: {'image': tf.expand_dims(tf.image.rgb_to_grayscale(sample['image']), axis=-1)}
ds_gray = ds.map(to_gray_tf)
```
### Expected behavior
I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape.
### Environment info
datasets 2.21.0
python tested with both 3.11 and 3.12
host os : linux | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7134/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7133/comments | https://api.github.com/repos/huggingface/datasets/issues/7133/events | https://github.com/huggingface/datasets/pull/7133 | 2,496,474,495 | PR_kwDODunzps557zng | 7,133 | remove filecheck to enable symlinks | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI is failing, looks like it breaks imagefolder loading.\r\n\r\nI just checked fsspec internals and maybe instead we can detect symlink by checking `islink` and `size` to make sure it's a file\r\n```python\r\nif info[\"type\"] == \"file\" or (info.get(\"islink\") and info[\"size\"])\r\n```\r\n",
"hmm actually `size` doesn't seem to filter symlinked directories, we need another way",
"Does fsspec perhaps allow resolving symlinks? Something like https://docs.python.org/3/library/pathlib.html#pathlib.Path.resolve",
"there is `info[\"destination\"]` in case of a symlink, so maybe\r\n\r\n\r\n```python\r\nif info[\"type\"] == \"file\" or (info.get(\"islink\") and info.get(\"destination\") and os.path.isfile(info[\"destination\"]))\r\n```"
] | 2024-08-30T07:36:56 | 2024-09-04T12:46:56 | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7133.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7133",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7133.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7133"
} | Enables streaming from local symlinks #7083
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7133/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7132/comments | https://api.github.com/repos/huggingface/datasets/issues/7132/events | https://github.com/huggingface/datasets/pull/7132 | 2,494,510,464 | PR_kwDODunzps551k1C | 7,132 | Fix data file module inference | {
"avatar_url": "https://avatars.githubusercontent.com/u/1714412?v=4",
"events_url": "https://api.github.com/users/HennerM/events{/privacy}",
"followers_url": "https://api.github.com/users/HennerM/followers",
"following_url": "https://api.github.com/users/HennerM/following{/other_user}",
"gists_url": "https://api.github.com/users/HennerM/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HennerM",
"id": 1714412,
"login": "HennerM",
"node_id": "MDQ6VXNlcjE3MTQ0MTI=",
"organizations_url": "https://api.github.com/users/HennerM/orgs",
"received_events_url": "https://api.github.com/users/HennerM/received_events",
"repos_url": "https://api.github.com/users/HennerM/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HennerM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HennerM/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HennerM"
} | [] | open | false | null | [] | null | [
"Hi ! datasets saved using `save_to_disk` should be loaded with `load_from_disk` ;)",
"It is convienient to just pass in a path to a local dataset or one from the hub and use the same function to load it. Is it not possible to get this fix merged in to allow this? ",
"We can modify `save_to_disk` to write the dataset in a structure supported by the Hub in this case, it's kind of a legacy function anyway"
] | 2024-08-29T13:48:16 | 2024-09-02T19:52:13 | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7132.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7132",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7132.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7132"
} | I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train is bigger and ended up in 10 shards, whereas the test split only resulted in 1 split.
Now when trying to load the dataset, an error is raised that not all splits have the same data format:
> ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}
This is not expected because both splits are saved as arrow files.
I did some debugging and found that this is the case because the list of data_files includes a `state.json` file.
Now this means for train split I get 10 ".arrow" and 1 ".json" file. Since datasets picks based on the most common extension this is correctly inferred as "arrow". In the test split, there is 1 .arrow and 1 .json file. Given the function description:
> It picks the module based on the most common file extension.
In case of a draw ".parquet" is the favorite, and then alphabetical order.
This is not quite true though, because in a tie the extensions are actually based on reverse-alphabetical order:
```python
for (ext, _), _ in sorted(extensions_counter.items(), key=sort_key, reverse=True):
```
Which thus leads to the module wrongly inferred as "json", whereas it should be "arrow", matching the train split.
I first thought about adding "state.json" in the list of excluded files for the inference: https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L513. However, I think from digging into the code it looks like the right thing to do is to exclude it in the list of `data_files` to start with, because it is more of a metadata than a data file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7132/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7129/comments | https://api.github.com/repos/huggingface/datasets/issues/7129/events | https://github.com/huggingface/datasets/issues/7129 | 2,491,942,650 | I_kwDODunzps6UiAb6 | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | {
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego"
} | [] | open | false | null | [] | null | [] | 2024-08-28T12:27:48 | 2024-08-28T12:27:48 | null | NONE | null | null | null | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which expects to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it generates the following
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
I would like to work on this issue if this is something needed.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7129/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7128/comments | https://api.github.com/repos/huggingface/datasets/issues/7128/events | https://github.com/huggingface/datasets/issues/7128 | 2,490,274,775 | I_kwDODunzps6UbpPX | 7,128 | Filter Large Dataset Entry by Entry | {
"avatar_url": "https://avatars.githubusercontent.com/u/36057290?v=4",
"events_url": "https://api.github.com/users/QiyaoWei/events{/privacy}",
"followers_url": "https://api.github.com/users/QiyaoWei/followers",
"following_url": "https://api.github.com/users/QiyaoWei/following{/other_user}",
"gists_url": "https://api.github.com/users/QiyaoWei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QiyaoWei",
"id": 36057290,
"login": "QiyaoWei",
"node_id": "MDQ6VXNlcjM2MDU3Mjkw",
"organizations_url": "https://api.github.com/users/QiyaoWei/orgs",
"received_events_url": "https://api.github.com/users/QiyaoWei/received_events",
"repos_url": "https://api.github.com/users/QiyaoWei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QiyaoWei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiyaoWei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QiyaoWei"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2024-08-27T20:31:09 | 2024-08-27T20:31:09 | null | NONE | null | null | null | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
"really-large-dataset",
streaming=True
)
# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)
# Define a function to filter the data
def filter_function(table):
if some_condition:
return True
else:
return False
# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be magnitudes faster than working on the original. I would love to hear if the problem setup + solution makes sense to people, and if anyone has suggestions!
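For what it's worth, a hedged sketch of the same idea using the lazy operations built into streaming datasets (`IterableDataset.filter` and `.take`); the dataset name is the placeholder from above and the split name is an assumption:
```python
from datasets import load_dataset

dataset = load_dataset("really-large-dataset", split="train", streaming=True)

# filter() and take() are lazy on IterableDataset, so only the "good" tables
# are yielded while streaming, without loading everything into memory.
filtered_dataset = dataset.filter(filter_function).take(10000)

for example in filtered_dataset:
    ...  # work on the filtered stream
```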
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7128/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7127/comments | https://api.github.com/repos/huggingface/datasets/issues/7127/events | https://github.com/huggingface/datasets/issues/7127 | 2,486,524,966 | I_kwDODunzps6UNVwm | 7,127 | Caching shuffles by np.random.Generator results in unintiutive behavior | {
"avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4",
"events_url": "https://api.github.com/users/el-hult/events{/privacy}",
"followers_url": "https://api.github.com/users/el-hult/followers",
"following_url": "https://api.github.com/users/el-hult/following{/other_user}",
"gists_url": "https://api.github.com/users/el-hult/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/el-hult",
"id": 11832922,
"login": "el-hult",
"node_id": "MDQ6VXNlcjExODMyOTIy",
"organizations_url": "https://api.github.com/users/el-hult/orgs",
"received_events_url": "https://api.github.com/users/el-hult/received_events",
"repos_url": "https://api.github.com/users/el-hult/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/el-hult/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/el-hult/subscriptions",
"type": "User",
"url": "https://api.github.com/users/el-hult"
} | [] | open | false | null | [] | null | [
"I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/arrow_dataset.py#L4306-L4316\r\n\r\nbecause the shuffle happens after checking the cache, the rng state won't advance if the cache is used. This is VERY confusing. Also not documented.\r\n\r\nMy proposal is that you remove the API for using a Generator, and only keep the seed-based API since that is functional and cache-compatible."
] | 2024-08-26T10:29:48 | 2024-08-26T10:35:57 | null | NONE | null | null | null | ### Describe the bug
Create a dataset. Save it to disk. Load it from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates are different, since the supplied np.random.Generator has progressed between the shuffles.
Load the dataset from disk again. Shuffle and iterate. See the same result as before. Shuffle and iterate, and this time it does not have the same shuffling as in the previous run.
The motivation is I have a deep learning loop with
```
for epoch in range(10):
    for batch in dataset.shuffle(generator=generator).iter(batch_size=32):
        .... # do stuff
```
where I want a new shuffling at every epoch. Instead I get the same shuffling.
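A workaround sketch (my own suggestion, not from the report): make the shuffle differ per epoch either by passing a per-epoch seed or by skipping the cache lookup:
```python
for epoch in range(10):
    shuffled = dataset.shuffle(seed=epoch)  # different fingerprint every epoch
    # or: dataset.shuffle(generator=generator, load_from_cache_file=False)
    for batch in shuffled.iter(batch_size=32):
        ...  # do stuff
```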
### Steps to reproduce the bug
Run the code below two times.
```python
import datasets
import numpy as np
generator = np.random.default_rng(0)
ds = datasets.Dataset.from_dict(mapping={"X":range(1000)})
ds.save_to_disk("tmp")
print("First loop: ", end="")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
print("Second loop: ", end="")
ds = datasets.Dataset.load_from_disk("tmp")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
```
The output is:
```
$ python main.py
Saving the dataset (1/1 shards): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 495019.95 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840,
$ python main.py
Saving the dataset (1/1 shards): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 22243.40 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741,
```
The second loop, on the second run, only spits out "741, 741, 741...." which is *not* the desired output
### Expected behavior
I want the dataset to shuffle at every epoch since I provide it with a generator for shuffling.
### Environment info
Datasets version 2.21.0
Ubuntu linux. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7127/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7126/comments | https://api.github.com/repos/huggingface/datasets/issues/7126/events | https://github.com/huggingface/datasets/pull/7126 | 2,485,939,495 | PR_kwDODunzps55Y-Ws | 7,126 | Disable implicit token in CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003428 / 0.011008 (-0.007580) | 0.062673 / 0.038508 (0.024164) | 0.030111 / 0.023109 (0.007002) | 0.238017 / 0.275898 (-0.037881) | 0.262655 / 0.323480 (-0.060825) | 0.003015 / 0.007986 (-0.004971) | 0.002664 / 0.004328 (-0.001665) | 0.050010 / 0.004250 (0.045759) | 0.045620 / 0.037052 (0.008567) | 0.251800 / 0.258489 (-0.006689) | 0.278829 / 0.293841 (-0.015011) | 0.029838 / 0.128546 (-0.098709) | 0.011703 / 0.075646 (-0.063943) | 0.204503 / 0.419271 (-0.214768) | 0.036173 / 0.043533 (-0.007359) | 0.242850 / 0.255139 (-0.012289) | 0.263811 / 0.283200 (-0.019389) | 0.019027 / 0.141683 (-0.122656) | 1.168028 / 1.452155 (-0.284126) | 1.208975 / 1.492716 (-0.283742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091309 / 0.018006 (0.073303) | 0.299583 / 0.000490 (0.299093) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018451 / 0.037411 (-0.018960) | 0.062516 / 0.014526 (0.047991) | 0.073983 / 0.176557 (-0.102573) | 0.120952 / 0.737135 (-0.616184) | 0.075275 / 0.296338 (-0.221063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286870 / 0.215209 (0.071661) | 2.810498 / 2.077655 (0.732843) | 1.490028 / 1.504120 (-0.014092) | 1.362249 / 1.541195 (-0.178946) | 1.368939 / 
1.468490 (-0.099551) | 0.736643 / 4.584777 (-3.848134) | 2.414237 / 3.745712 (-1.331475) | 2.898911 / 5.269862 (-2.370951) | 1.840630 / 4.565676 (-2.725047) | 0.077872 / 0.424275 (-0.346403) | 0.005087 / 0.007607 (-0.002520) | 0.337054 / 0.226044 (0.111009) | 3.390734 / 2.268929 (1.121806) | 1.844174 / 55.444624 (-53.600451) | 1.532741 / 6.876477 (-5.343736) | 1.551650 / 2.142072 (-0.590422) | 0.778642 / 4.805227 (-4.026585) | 0.131899 / 6.500664 (-6.368765) | 0.041801 / 0.075469 (-0.033668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958362 / 1.841788 (-0.883425) | 11.323330 / 8.074308 (3.249022) | 9.396199 / 10.191392 (-0.795193) | 0.131154 / 0.680424 (-0.549270) | 0.014705 / 0.534201 (-0.519496) | 0.302424 / 0.579283 (-0.276859) | 0.261870 / 0.434364 (-0.172494) | 0.340788 / 0.540337 (-0.199550) | 0.433360 / 1.386936 (-0.953576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005571 / 0.011353 (-0.005782) | 0.003388 / 0.011008 (-0.007621) | 0.050366 / 0.038508 (0.011858) | 0.032633 / 0.023109 (0.009524) | 0.261847 / 0.275898 (-0.014051) | 0.292197 / 0.323480 (-0.031283) | 0.005070 / 0.007986 (-0.002916) | 0.002753 / 0.004328 (-0.001575) | 0.048613 / 0.004250 (0.044363) | 0.040272 / 0.037052 (0.003219) | 0.275441 / 0.258489 (0.016952) | 0.309175 / 0.293841 (0.015334) | 0.032403 / 0.128546 (-0.096143) | 0.011734 / 0.075646 (-0.063912) | 0.059532 / 0.419271 (-0.359740) | 0.033886 / 0.043533 (-0.009647) | 0.263453 / 0.255139 (0.008314) | 0.281997 / 0.283200 (-0.001203) | 0.018522 / 0.141683 (-0.123161) | 1.150364 / 1.452155 (-0.301791) | 1.204090 / 1.492716 (-0.288627) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093129 / 0.018006 (0.075123) | 0.303691 / 0.000490 (0.303201) | 0.000231 / 0.000200 (0.000031) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022084 / 0.037411 (-0.015327) | 0.076354 / 0.014526 (0.061828) | 0.087710 / 0.176557 (-0.088847) | 0.128907 / 0.737135 (-0.608228) | 0.088603 / 0.296338 (-0.207735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301161 / 0.215209 (0.085952) | 2.954780 / 2.077655 (0.877125) | 1.601366 / 1.504120 (0.097246) | 1.477225 / 1.541195 (-0.063970) | 1.482355 / 1.468490 (0.013865) | 0.722461 / 4.584777 (-3.862315) | 0.981439 / 3.745712 (-2.764273) | 2.927006 / 5.269862 (-2.342856) | 1.884444 / 4.565676 (-2.681233) | 0.079044 / 0.424275 (-0.345231) | 0.005530 / 0.007607 (-0.002077) | 0.347082 / 0.226044 (0.121037) | 3.491984 / 2.268929 (1.223056) | 1.944317 / 55.444624 (-53.500307) | 1.645792 / 6.876477 (-5.230685) | 1.649506 / 2.142072 (-0.492567) | 0.800822 / 4.805227 (-4.004405) | 0.133936 / 6.500664 (-6.366729) | 0.041198 / 0.075469 (-0.034271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029764 / 1.841788 (-0.812024) | 11.928840 / 8.074308 (3.854532) | 10.021390 / 10.191392 (-0.170002) | 0.141608 / 0.680424 (-0.538816) | 0.014921 / 0.534201 (-0.519280) | 0.302050 / 0.579283 (-0.277233) | 0.124151 / 0.434364 (-0.310213) | 0.347143 / 0.540337 (-0.193195) | 0.467649 / 1.386936 (-0.919287) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4c87a6bf57b3aa094c28895c5b89b91b3509c58 \"CML watermark\")\n"
] | 2024-08-26T05:29:46 | 2024-08-26T06:05:01 | 2024-08-26T05:59:15 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7126",
"merged_at": "2024-08-26T05:59:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7126"
} | Disable implicit token in CI.
This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in:
- #7124 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7126/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7125/comments | https://api.github.com/repos/huggingface/datasets/issues/7125/events | https://github.com/huggingface/datasets/pull/7125 | 2,485,912,246 | PR_kwDODunzps55Y4TM | 7,125 | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005741 / 0.011353 (-0.005612) | 0.004011 / 0.011008 (-0.006998) | 0.063962 / 0.038508 (0.025454) | 0.031512 / 0.023109 (0.008403) | 0.242249 / 0.275898 (-0.033649) | 0.269601 / 0.323480 (-0.053879) | 0.004502 / 0.007986 (-0.003483) | 0.002835 / 0.004328 (-0.001494) | 0.049878 / 0.004250 (0.045628) | 0.048012 / 0.037052 (0.010959) | 0.250454 / 0.258489 (-0.008035) | 0.283266 / 0.293841 (-0.010575) | 0.030752 / 0.128546 (-0.097794) | 0.012655 / 0.075646 (-0.062991) | 0.211043 / 0.419271 (-0.208229) | 0.037165 / 0.043533 (-0.006367) | 0.246815 / 0.255139 (-0.008324) | 0.264306 / 0.283200 (-0.018893) | 0.018343 / 0.141683 (-0.123340) | 1.140452 / 1.452155 (-0.311702) | 1.214849 / 1.492716 (-0.277867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098048 / 0.018006 (0.080042) | 0.292201 / 0.000490 (0.291712) | 0.000217 / 0.000200 (0.000017) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018732 / 0.037411 (-0.018679) | 0.062887 / 0.014526 (0.048361) | 0.074353 / 0.176557 (-0.102204) | 0.120794 / 0.737135 (-0.616341) | 0.077066 / 0.296338 (-0.219272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276335 / 0.215209 (0.061126) | 2.722905 / 2.077655 (0.645250) | 1.423080 / 1.504120 (-0.081040) | 1.305443 / 1.541195 (-0.235752) | 1.342142 / 
1.468490 (-0.126348) | 0.741899 / 4.584777 (-3.842878) | 2.407567 / 3.745712 (-1.338145) | 3.070263 / 5.269862 (-2.199599) | 1.935732 / 4.565676 (-2.629944) | 0.081371 / 0.424275 (-0.342904) | 0.005207 / 0.007607 (-0.002401) | 0.328988 / 0.226044 (0.102943) | 3.240771 / 2.268929 (0.971842) | 1.801028 / 55.444624 (-53.643597) | 1.490593 / 6.876477 (-5.385884) | 1.521317 / 2.142072 (-0.620756) | 0.794051 / 4.805227 (-4.011176) | 0.136398 / 6.500664 (-6.364266) | 0.042902 / 0.075469 (-0.032567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974186 / 1.841788 (-0.867602) | 12.280011 / 8.074308 (4.205703) | 9.453389 / 10.191392 (-0.738003) | 0.132627 / 0.680424 (-0.547797) | 0.014608 / 0.534201 (-0.519593) | 0.309298 / 0.579283 (-0.269985) | 0.275911 / 0.434364 (-0.158452) | 0.348261 / 0.540337 (-0.192077) | 0.439031 / 1.386936 (-0.947905) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006248 / 0.011353 (-0.005105) | 0.004369 / 0.011008 (-0.006639) | 0.050588 / 0.038508 (0.012080) | 0.032880 / 0.023109 (0.009771) | 0.268979 / 0.275898 (-0.006919) | 0.294714 / 0.323480 (-0.028766) | 0.004518 / 0.007986 (-0.003467) | 0.002995 / 0.004328 (-0.001333) | 0.048776 / 0.004250 (0.044525) | 0.041696 / 0.037052 (0.004644) | 0.283413 / 0.258489 (0.024924) | 0.322137 / 0.293841 (0.028296) | 0.032809 / 0.128546 (-0.095737) | 0.012559 / 0.075646 (-0.063087) | 0.060456 / 0.419271 (-0.358815) | 0.034564 / 0.043533 (-0.008968) | 0.267263 / 0.255139 (0.012124) | 0.292633 / 0.283200 (0.009434) | 0.019011 / 0.141683 (-0.122672) | 1.199820 / 1.452155 (-0.252335) | 1.251829 / 1.492716 (-0.240887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097615 / 0.018006 (0.079609) | 0.313764 / 0.000490 (0.313274) | 0.000220 / 0.000200 (0.000020) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.089301 / 0.014526 (0.074775) | 0.092964 / 0.176557 (-0.083592) | 0.131724 / 0.737135 (-0.605412) | 0.094792 / 0.296338 (-0.201546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305119 / 0.215209 (0.089910) | 2.932192 / 2.077655 (0.854537) | 1.610573 / 1.504120 (0.106453) | 1.487502 / 1.541195 (-0.053693) | 1.533300 / 1.468490 (0.064810) | 0.717223 / 4.584777 (-3.867554) | 0.964402 / 3.745712 (-2.781310) | 3.111398 / 5.269862 (-2.158464) | 1.957942 / 4.565676 (-2.607734) | 0.079160 / 0.424275 (-0.345116) | 0.005639 / 0.007607 (-0.001968) | 0.358971 / 0.226044 (0.132927) | 3.564401 / 2.268929 (1.295472) | 2.043079 / 55.444624 (-53.401546) | 1.742681 / 6.876477 (-5.133795) | 1.784758 / 2.142072 (-0.357314) | 0.798508 / 4.805227 (-4.006719) | 0.133905 / 6.500664 (-6.366759) | 0.043008 / 0.075469 (-0.032461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031715 / 1.841788 (-0.810073) | 13.374312 / 8.074308 (5.300004) | 10.789098 / 10.191392 (0.597706) | 0.133663 / 0.680424 (-0.546761) | 0.016692 / 0.534201 (-0.517509) | 0.304716 / 0.579283 (-0.274567) | 0.129074 / 0.434364 (-0.305290) | 0.346440 / 0.540337 (-0.193897) | 0.464593 / 1.386936 (-0.922343) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#880a52cea337032d39e90e6f0dcc55198a75a285 \"CML watermark\")\n"
] | 2024-08-26T05:09:35 | 2024-08-26T05:33:15 | 2024-08-26T05:27:09 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7125",
"merged_at": "2024-08-26T05:27:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7125"
} | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7125/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7124/comments | https://api.github.com/repos/huggingface/datasets/issues/7124/events | https://github.com/huggingface/datasets/pull/7124 | 2,485,890,442 | PR_kwDODunzps55YzWr | 7,124 | Test get_dataset_config_info with non-existing/gated/private dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005339 / 0.011353 (-0.006014) | 0.003640 / 0.011008 (-0.007368) | 0.064012 / 0.038508 (0.025504) | 0.030424 / 0.023109 (0.007314) | 0.239966 / 0.275898 (-0.035932) | 0.264361 / 0.323480 (-0.059119) | 0.004247 / 0.007986 (-0.003739) | 0.002847 / 0.004328 (-0.001481) | 0.049640 / 0.004250 (0.045390) | 0.044903 / 0.037052 (0.007851) | 0.250174 / 0.258489 (-0.008315) | 0.281423 / 0.293841 (-0.012418) | 0.029419 / 0.128546 (-0.099127) | 0.012221 / 0.075646 (-0.063426) | 0.205907 / 0.419271 (-0.213365) | 0.036654 / 0.043533 (-0.006878) | 0.245805 / 0.255139 (-0.009334) | 0.265029 / 0.283200 (-0.018170) | 0.018081 / 0.141683 (-0.123602) | 1.113831 / 1.452155 (-0.338324) | 1.156443 / 1.492716 (-0.336274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134389 / 0.018006 (0.116383) | 0.300637 / 0.000490 (0.300147) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019111 / 0.037411 (-0.018300) | 0.062585 / 0.014526 (0.048059) | 0.075909 / 0.176557 (-0.100647) | 0.121382 / 0.737135 (-0.615753) | 0.074980 / 0.296338 (-0.221359) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285062 / 0.215209 (0.069853) | 2.850130 / 2.077655 (0.772476) | 1.519877 / 1.504120 (0.015757) | 1.388711 / 1.541195 (-0.152484) | 1.397284 / 
1.468490 (-0.071206) | 0.723100 / 4.584777 (-3.861677) | 2.393184 / 3.745712 (-1.352529) | 2.908418 / 5.269862 (-2.361443) | 1.871024 / 4.565676 (-2.694653) | 0.078230 / 0.424275 (-0.346045) | 0.005158 / 0.007607 (-0.002449) | 0.345622 / 0.226044 (0.119577) | 3.357611 / 2.268929 (1.088683) | 1.844492 / 55.444624 (-53.600132) | 1.584237 / 6.876477 (-5.292240) | 1.577158 / 2.142072 (-0.564915) | 0.789702 / 4.805227 (-4.015525) | 0.132045 / 6.500664 (-6.368619) | 0.042304 / 0.075469 (-0.033165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977166 / 1.841788 (-0.864622) | 11.306118 / 8.074308 (3.231810) | 9.490778 / 10.191392 (-0.700614) | 0.143536 / 0.680424 (-0.536888) | 0.015304 / 0.534201 (-0.518897) | 0.313892 / 0.579283 (-0.265391) | 0.267009 / 0.434364 (-0.167355) | 0.345560 / 0.540337 (-0.194778) | 0.435649 / 1.386936 (-0.951287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005700 / 0.011353 (-0.005653) | 0.003490 / 0.011008 (-0.007519) | 0.049990 / 0.038508 (0.011482) | 0.032070 / 0.023109 (0.008961) | 0.272622 / 0.275898 (-0.003276) | 0.298265 / 0.323480 (-0.025215) | 0.004379 / 0.007986 (-0.003606) | 0.002786 / 0.004328 (-0.001543) | 0.048271 / 0.004250 (0.044020) | 0.040102 / 0.037052 (0.003050) | 0.286433 / 0.258489 (0.027944) | 0.319306 / 0.293841 (0.025465) | 0.032872 / 0.128546 (-0.095675) | 0.011870 / 0.075646 (-0.063776) | 0.059886 / 0.419271 (-0.359385) | 0.034281 / 0.043533 (-0.009252) | 0.275588 / 0.255139 (0.020450) | 0.292951 / 0.283200 (0.009751) | 0.018095 / 0.141683 (-0.123588) | 1.130870 / 1.452155 (-0.321285) | 1.190761 / 1.492716 (-0.301955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093346 / 0.018006 (0.075340) | 0.307506 / 0.000490 (0.307016) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022873 / 0.037411 (-0.014538) | 0.077070 / 0.014526 (0.062544) | 0.089152 / 0.176557 (-0.087404) | 0.130186 / 0.737135 (-0.606949) | 0.090244 / 0.296338 (-0.206095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297950 / 0.215209 (0.082740) | 2.942360 / 2.077655 (0.864705) | 1.614324 / 1.504120 (0.110204) | 1.495795 / 1.541195 (-0.045400) | 1.506155 / 1.468490 (0.037665) | 0.730307 / 4.584777 (-3.854470) | 0.966312 / 3.745712 (-2.779400) | 2.928955 / 5.269862 (-2.340906) | 1.940049 / 4.565676 (-2.625627) | 0.079589 / 0.424275 (-0.344686) | 0.006004 / 0.007607 (-0.001604) | 0.356630 / 0.226044 (0.130585) | 3.516652 / 2.268929 (1.247724) | 1.963196 / 55.444624 (-53.481429) | 1.674489 / 6.876477 (-5.201988) | 1.677558 / 2.142072 (-0.464514) | 0.806447 / 4.805227 (-3.998780) | 0.133819 / 6.500664 (-6.366845) | 0.040762 / 0.075469 (-0.034707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038495 / 1.841788 (-0.803293) | 11.829186 / 8.074308 (3.754878) | 10.214158 / 10.191392 (0.022766) | 0.140590 / 0.680424 (-0.539834) | 0.014729 / 0.534201 (-0.519472) | 0.300557 / 0.579283 (-0.278726) | 0.122772 / 0.434364 (-0.311592) | 0.344618 / 0.540337 (-0.195720) | 0.460064 / 1.386936 (-0.926872) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be5cff059a2a5b89d7a97bc04739c4919ab8089f \"CML watermark\")\n"
] | 2024-08-26T04:53:59 | 2024-08-26T06:15:33 | 2024-08-26T06:09:42 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7124.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7124",
"merged_at": "2024-08-26T06:09:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7124.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7124"
} | Test get_dataset_config_info with non-existing/gated/private dataset.
Related to:
- #7109
See also:
- https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7124/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7123/comments | https://api.github.com/repos/huggingface/datasets/issues/7123/events | https://github.com/huggingface/datasets/issues/7123 | 2,484,003,937 | I_kwDODunzps6UDuRh | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | {
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2024-08-23T22:56:01 | 2024-08-23T23:01:42 | null | NONE | null | null | null | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed.
### Motivation
When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).
It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).
### Your contribution
I can make a suggestion for one approach to address the issue:
For instance, even if the metadata filename could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful: it would allow much more flexibility in dataset structure without impacting clarity. I would think the backend functionality that looks for `metadata.csv` could reasonably be adapted to look for such a filename ending (and maybe also check that the file has a `file_name` column?).
Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?
```
configs:
- config_name: <image subset>
data_files:
- <image-metadata>.csv
- <path/to/images>/*.jpg
```
I'd also be happy to look at whatever solution is decided upon and contribute to the ideation.
Thanks for your time and consideration! The dataset viewer really is fabulous when it works :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7123/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7122/comments | https://api.github.com/repos/huggingface/datasets/issues/7122/events | https://github.com/huggingface/datasets/issues/7122 | 2,482,491,258 | I_kwDODunzps6T9896 | 7,122 | [interleave_dataset] sample batches from a single source at a time | {
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/memray",
"id": 4197249,
"login": "memray",
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"organizations_url": "https://api.github.com/users/memray/orgs",
"received_events_url": "https://api.github.com/users/memray/received_events",
"repos_url": "https://api.github.com/users/memray/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"type": "User",
"url": "https://api.github.com/users/memray"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2024-08-23T07:21:15 | 2024-08-23T07:21:15 | null | NONE | null | null | null | ### Feature request
`interleave_datasets` and [`RandomlyCyclingMultiSourcesExamplesIterable`](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample individual data examples from different sources. But can we also sample batches in a similar manner, so that each batch contains data from only a single source?
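For reference, a minimal sketch of the current example-level behavior (assuming two small toy sources converted with `to_iterable_dataset()`): `interleave_datasets` with `probabilities` picks the source independently for every example, so batches built downstream generally mix sources.
```python
from datasets import Dataset, interleave_datasets

ds_a = Dataset.from_dict({"text": [f"a{i}" for i in range(100)]}).to_iterable_dataset()
ds_b = Dataset.from_dict({"text": [f"b{i}" for i in range(100)]}).to_iterable_dataset()

# Example-level sampling: every single example is drawn from a randomly chosen source.
mixed = interleave_datasets([ds_a, ds_b], probabilities=[0.5, 0.5], seed=42)
print([ex["text"] for ex in mixed.take(8)])  # typically mixes "a..." and "b..." examples
```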
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source-homogeneous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
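A rough, hypothetical sketch of what I have in mind (this helper is not an existing `datasets` API; the name and behavior are my own assumptions): pick one source per batch with an RNG, then emit `batch_size` consecutive examples from that source, so every batch is source-homogeneous.
```python
import numpy as np

def iterate_homogeneous_batches(sources, probabilities, batch_size, seed=0):
    """Yield batches whose examples all come from a single source (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    iterators = [iter(source) for source in sources]
    probabilities = list(probabilities)
    while iterators:
        idx = rng.choice(len(iterators), p=probabilities)
        batch = []
        try:
            for _ in range(batch_size):
                batch.append(next(iterators[idx]))
        except StopIteration:
            # This source is exhausted: drop it and renormalize the remaining probabilities.
            del iterators[idx]
            del probabilities[idx]
            total = sum(probabilities)
            probabilities = [p / total for p in probabilities] if total else probabilities
        if batch:
            yield batch
```
Each yielded batch then contains examples from exactly one source, which is roughly what a `RandomlyCyclingMultiSourcesBatchesIterable` could do internally.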
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7122/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7121/comments | https://api.github.com/repos/huggingface/datasets/issues/7121/events | https://github.com/huggingface/datasets/pull/7121 | 2,480,978,483 | PR_kwDODunzps55Iukl | 7,121 | Fix typed examples iterable state dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7121). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005273 / 0.011353 (-0.006079) | 0.003789 / 0.011008 (-0.007219) | 0.062811 / 0.038508 (0.024303) | 0.031055 / 0.023109 (0.007946) | 0.238663 / 0.275898 (-0.037235) | 0.269706 / 0.323480 (-0.053774) | 0.004105 / 0.007986 (-0.003881) | 0.002781 / 0.004328 (-0.001547) | 0.048800 / 0.004250 (0.044549) | 0.045759 / 0.037052 (0.008707) | 0.260467 / 0.258489 (0.001978) | 0.288800 / 0.293841 (-0.005041) | 0.029341 / 0.128546 (-0.099205) | 0.012413 / 0.075646 (-0.063233) | 0.203493 / 0.419271 (-0.215778) | 0.037270 / 0.043533 (-0.006263) | 0.246130 / 0.255139 (-0.009009) | 0.269046 / 0.283200 (-0.014154) | 0.017788 / 0.141683 (-0.123895) | 1.175537 / 1.452155 (-0.276617) | 1.197909 / 1.492716 (-0.294808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098258 / 0.018006 (0.080251) | 0.305283 / 0.000490 (0.304794) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019066 / 0.037411 (-0.018345) | 0.062723 / 0.014526 (0.048197) | 0.075827 / 0.176557 (-0.100730) | 0.121371 / 0.737135 (-0.615764) | 0.075167 / 0.296338 (-0.221171) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296650 / 0.215209 (0.081441) | 2.910593 / 2.077655 (0.832939) | 1.510798 / 1.504120 (0.006678) | 1.375461 / 1.541195 (-0.165733) | 1.386423 / 
1.468490 (-0.082067) | 0.743818 / 4.584777 (-3.840959) | 2.437848 / 3.745712 (-1.307864) | 2.943661 / 5.269862 (-2.326201) | 1.888977 / 4.565676 (-2.676699) | 0.080126 / 0.424275 (-0.344149) | 0.005168 / 0.007607 (-0.002439) | 0.348699 / 0.226044 (0.122654) | 3.477686 / 2.268929 (1.208758) | 1.901282 / 55.444624 (-53.543343) | 1.574847 / 6.876477 (-5.301629) | 1.594359 / 2.142072 (-0.547714) | 0.793415 / 4.805227 (-4.011812) | 0.133982 / 6.500664 (-6.366682) | 0.042435 / 0.075469 (-0.033034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963057 / 1.841788 (-0.878731) | 11.597217 / 8.074308 (3.522909) | 9.285172 / 10.191392 (-0.906220) | 0.130510 / 0.680424 (-0.549914) | 0.013964 / 0.534201 (-0.520237) | 0.299334 / 0.579283 (-0.279949) | 0.267775 / 0.434364 (-0.166589) | 0.336922 / 0.540337 (-0.203416) | 0.430493 / 1.386936 (-0.956443) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003941 / 0.011008 (-0.007067) | 0.050204 / 0.038508 (0.011696) | 0.032275 / 0.023109 (0.009166) | 0.271076 / 0.275898 (-0.004822) | 0.295565 / 0.323480 (-0.027914) | 0.004393 / 0.007986 (-0.003592) | 0.002881 / 0.004328 (-0.001447) | 0.048032 / 0.004250 (0.043782) | 0.040430 / 0.037052 (0.003378) | 0.281631 / 0.258489 (0.023142) | 0.317964 / 0.293841 (0.024124) | 0.032318 / 0.128546 (-0.096228) | 0.012348 / 0.075646 (-0.063298) | 0.060336 / 0.419271 (-0.358936) | 0.034148 / 0.043533 (-0.009385) | 0.273803 / 0.255139 (0.018664) | 0.292068 / 0.283200 (0.008868) | 0.018693 / 0.141683 (-0.122990) | 1.155704 / 1.452155 (-0.296451) | 1.192245 / 1.492716 (-0.300472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097588 / 0.018006 (0.079582) | 0.311760 / 0.000490 (0.311270) | 0.000232 / 0.000200 (0.000032) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022825 / 0.037411 (-0.014586) | 0.077698 / 0.014526 (0.063172) | 0.088567 / 0.176557 (-0.087989) | 0.129689 / 0.737135 (-0.607446) | 0.090626 / 0.296338 (-0.205712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299791 / 0.215209 (0.084582) | 2.978558 / 2.077655 (0.900903) | 1.594095 / 1.504120 (0.089975) | 1.468476 / 1.541195 (-0.072719) | 1.482880 / 1.468490 (0.014390) | 0.717553 / 4.584777 (-3.867224) | 0.977501 / 3.745712 (-2.768211) | 2.954289 / 5.269862 (-2.315572) | 1.895473 / 4.565676 (-2.670203) | 0.078452 / 0.424275 (-0.345824) | 0.005508 / 0.007607 (-0.002099) | 0.350882 / 0.226044 (0.124837) | 3.480878 / 2.268929 (1.211949) | 1.965240 / 55.444624 (-53.479385) | 1.672448 / 6.876477 (-5.204029) | 1.674319 / 2.142072 (-0.467753) | 0.789049 / 4.805227 (-4.016178) | 0.132715 / 6.500664 (-6.367949) | 0.041081 / 0.075469 (-0.034388) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022953 / 1.841788 (-0.818834) | 12.123349 / 8.074308 (4.049041) | 10.336115 / 10.191392 (0.144723) | 0.142233 / 0.680424 (-0.538191) | 0.015416 / 0.534201 (-0.518785) | 0.303088 / 0.579283 (-0.276195) | 0.124942 / 0.434364 (-0.309422) | 0.338454 / 0.540337 (-0.201883) | 0.460039 / 1.386936 (-0.926897) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3813ce846e52824b38e53895810682f0a496a2e3 \"CML watermark\")\n"
] | 2024-08-22T14:45:03 | 2024-08-22T14:54:56 | 2024-08-22T14:49:06 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7121",
"merged_at": "2024-08-22T14:49:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7121"
} | fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7120/comments | https://api.github.com/repos/huggingface/datasets/issues/7120/events | https://github.com/huggingface/datasets/pull/7120 | 2,480,674,237 | PR_kwDODunzps55HrBy | 7,120 | don't mention the script if trust_remote_code=False | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7120). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Note that in this case, we could even expect this kind of message:\r\n\r\n```\r\nDataFilesNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv'\r\n```\r\n\r\nWe generally return `DataFilesNotFoundError` for this case (data files passed as an argument), not sure why it does not occur with this dataset.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005484 / 0.011353 (-0.005869) | 0.003932 / 0.011008 (-0.007077) | 0.063177 / 0.038508 (0.024669) | 0.031311 / 0.023109 (0.008202) | 0.254881 / 0.275898 (-0.021017) | 0.273818 / 0.323480 (-0.049662) | 0.003312 / 0.007986 (-0.004674) | 0.003251 / 0.004328 (-0.001078) | 0.049307 / 0.004250 (0.045057) | 0.046189 / 0.037052 (0.009137) | 0.268182 / 0.258489 (0.009693) | 0.303659 / 0.293841 (0.009818) | 0.029312 / 0.128546 (-0.099234) | 0.013649 / 0.075646 (-0.061997) | 0.204240 / 0.419271 (-0.215032) | 0.036607 / 0.043533 (-0.006926) | 0.252232 / 0.255139 (-0.002907) | 0.271960 / 0.283200 (-0.011239) | 0.018043 / 0.141683 (-0.123640) | 1.148601 / 1.452155 (-0.303553) | 1.212313 / 1.492716 (-0.280403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096354 / 0.018006 (0.078348) | 0.302575 / 0.000490 (0.302085) | 0.000246 / 0.000200 (0.000046) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019023 / 0.037411 (-0.018389) | 0.064821 / 0.014526 (0.050295) | 0.077046 / 0.176557 (-0.099510) | 0.122896 / 0.737135 (-0.614239) | 0.078300 / 0.296338 (-0.218038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283681 / 0.215209 (0.068472) | 2.801473 / 2.077655 (0.723818) | 1.505611 / 1.504120 (0.001491) | 1.385832 / 1.541195 (-0.155363) | 1.430284 / 
1.468490 (-0.038206) | 0.752041 / 4.584777 (-3.832736) | 2.406138 / 3.745712 (-1.339574) | 2.941370 / 5.269862 (-2.328492) | 1.887681 / 4.565676 (-2.677996) | 0.078693 / 0.424275 (-0.345582) | 0.005266 / 0.007607 (-0.002341) | 0.336484 / 0.226044 (0.110440) | 3.372262 / 2.268929 (1.103334) | 1.861541 / 55.444624 (-53.583084) | 1.572782 / 6.876477 (-5.303694) | 1.592387 / 2.142072 (-0.549685) | 0.796557 / 4.805227 (-4.008670) | 0.134923 / 6.500664 (-6.365741) | 0.043007 / 0.075469 (-0.032462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982690 / 1.841788 (-0.859097) | 11.700213 / 8.074308 (3.625905) | 9.122642 / 10.191392 (-1.068750) | 0.141430 / 0.680424 (-0.538994) | 0.014971 / 0.534201 (-0.519230) | 0.300938 / 0.579283 (-0.278345) | 0.268315 / 0.434364 (-0.166049) | 0.339891 / 0.540337 (-0.200447) | 0.428302 / 1.386936 (-0.958634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005732 / 0.011353 (-0.005621) | 0.003905 / 0.011008 (-0.007103) | 0.049900 / 0.038508 (0.011392) | 0.032255 / 0.023109 (0.009145) | 0.267929 / 0.275898 (-0.007969) | 0.295595 / 0.323480 (-0.027885) | 0.004437 / 0.007986 (-0.003549) | 0.003008 / 0.004328 (-0.001321) | 0.048357 / 0.004250 (0.044107) | 0.040118 / 0.037052 (0.003066) | 0.282859 / 0.258489 (0.024370) | 0.319243 / 0.293841 (0.025402) | 0.032793 / 0.128546 (-0.095754) | 0.012091 / 0.075646 (-0.063555) | 0.060082 / 0.419271 (-0.359189) | 0.034426 / 0.043533 (-0.009107) | 0.273668 / 0.255139 (0.018529) | 0.292110 / 0.283200 (0.008910) | 0.019002 / 0.141683 (-0.122680) | 1.165850 / 1.452155 (-0.286304) | 1.209195 / 1.492716 (-0.283521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099267 / 0.018006 (0.081261) | 0.316746 / 0.000490 (0.316256) | 0.000267 / 0.000200 (0.000067) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023117 / 0.037411 (-0.014294) | 0.076691 / 0.014526 (0.062165) | 0.092190 / 0.176557 (-0.084367) | 0.130620 / 0.737135 (-0.606515) | 0.091068 / 0.296338 (-0.205271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296419 / 0.215209 (0.081210) | 2.933964 / 2.077655 (0.856309) | 1.595015 / 1.504120 (0.090895) | 1.467610 / 1.541195 (-0.073585) | 1.487386 / 1.468490 (0.018896) | 0.730927 / 4.584777 (-3.853850) | 0.971276 / 3.745712 (-2.774436) | 2.969735 / 5.269862 (-2.300127) | 1.916126 / 4.565676 (-2.649550) | 0.078863 / 0.424275 (-0.345412) | 0.005506 / 0.007607 (-0.002101) | 0.345191 / 0.226044 (0.119147) | 3.407481 / 2.268929 (1.138553) | 1.955966 / 55.444624 (-53.488659) | 1.677365 / 6.876477 (-5.199112) | 1.716052 / 2.142072 (-0.426020) | 0.797208 / 4.805227 (-4.008020) | 0.132853 / 6.500664 (-6.367811) | 0.041691 / 0.075469 (-0.033778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.042331 / 1.841788 (-0.799456) | 12.186080 / 8.074308 (4.111772) | 10.288961 / 10.191392 (0.097569) | 0.141897 / 0.680424 (-0.538526) | 0.015321 / 0.534201 (-0.518880) | 0.308302 / 0.579283 (-0.270981) | 0.123292 / 0.434364 (-0.311072) | 0.348515 / 0.540337 (-0.191823) | 0.473045 / 1.386936 (-0.913891) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cedffa52879ebc5e4df43f0bcf8660ee7229f0dc \"CML watermark\")\n"
] | 2024-08-22T12:32:32 | 2024-08-22T14:39:52 | 2024-08-22T14:33:52 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7120.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7120",
"merged_at": "2024-08-22T14:33:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7120.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7120"
} | See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is:
```
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't find 'Omega02gdfdd/bioclip-demo-zero-shot-mistakes' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
The issue there is that a `configs` parameter is set in the README, while the mentioned data file (`data.csv`) does not exist. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7120/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7119/comments | https://api.github.com/repos/huggingface/datasets/issues/7119/events | https://github.com/huggingface/datasets/pull/7119 | 2,477,766,493 | PR_kwDODunzps54-GjY | 7,119 | Install transformers with numpy-2 CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005156 / 0.011353 (-0.006197) | 0.003365 / 0.011008 (-0.007643) | 0.063451 / 0.038508 (0.024943) | 0.029510 / 0.023109 (0.006401) | 0.244825 / 0.275898 (-0.031074) | 0.265157 / 0.323480 (-0.058323) | 0.004239 / 0.007986 (-0.003747) | 0.002732 / 0.004328 (-0.001596) | 0.050412 / 0.004250 (0.046162) | 0.043608 / 0.037052 (0.006556) | 0.256635 / 0.258489 (-0.001854) | 0.277472 / 0.293841 (-0.016369) | 0.029329 / 0.128546 (-0.099217) | 0.012318 / 0.075646 (-0.063329) | 0.204751 / 0.419271 (-0.214520) | 0.036468 / 0.043533 (-0.007065) | 0.246773 / 0.255139 (-0.008366) | 0.263932 / 0.283200 (-0.019268) | 0.017053 / 0.141683 (-0.124629) | 1.173249 / 1.452155 (-0.278905) | 1.234186 / 1.492716 (-0.258531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092398 / 0.018006 (0.074391) | 0.309473 / 0.000490 (0.308983) | 0.000220 / 0.000200 (0.000020) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018553 / 0.037411 (-0.018858) | 0.062546 / 0.014526 (0.048020) | 0.073943 / 0.176557 (-0.102613) | 0.120498 / 0.737135 (-0.616638) | 0.075185 / 0.296338 (-0.221153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296899 / 0.215209 (0.081690) | 2.919088 / 2.077655 (0.841433) | 1.533146 / 1.504120 (0.029026) | 1.395441 / 1.541195 (-0.145754) | 1.399089 / 
1.468490 (-0.069401) | 0.742750 / 4.584777 (-3.842027) | 2.390317 / 3.745712 (-1.355395) | 2.883166 / 5.269862 (-2.386695) | 1.854003 / 4.565676 (-2.711674) | 0.077140 / 0.424275 (-0.347136) | 0.005176 / 0.007607 (-0.002432) | 0.349391 / 0.226044 (0.123347) | 3.466043 / 2.268929 (1.197114) | 1.870619 / 55.444624 (-53.574005) | 1.559173 / 6.876477 (-5.317303) | 1.605480 / 2.142072 (-0.536592) | 0.786753 / 4.805227 (-4.018474) | 0.134869 / 6.500664 (-6.365795) | 0.042176 / 0.075469 (-0.033293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954256 / 1.841788 (-0.887532) | 11.194758 / 8.074308 (3.120449) | 9.129670 / 10.191392 (-1.061722) | 0.138318 / 0.680424 (-0.542106) | 0.014299 / 0.534201 (-0.519902) | 0.303704 / 0.579283 (-0.275579) | 0.262513 / 0.434364 (-0.171851) | 0.346539 / 0.540337 (-0.193798) | 0.429524 / 1.386936 (-0.957412) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005692 / 0.011353 (-0.005661) | 0.003423 / 0.011008 (-0.007586) | 0.050618 / 0.038508 (0.012110) | 0.031053 / 0.023109 (0.007944) | 0.275901 / 0.275898 (0.000003) | 0.294404 / 0.323480 (-0.029076) | 0.004303 / 0.007986 (-0.003682) | 0.002728 / 0.004328 (-0.001600) | 0.049757 / 0.004250 (0.045507) | 0.039997 / 0.037052 (0.002945) | 0.287291 / 0.258489 (0.028802) | 0.319186 / 0.293841 (0.025345) | 0.032558 / 0.128546 (-0.095988) | 0.012088 / 0.075646 (-0.063558) | 0.060746 / 0.419271 (-0.358525) | 0.034046 / 0.043533 (-0.009486) | 0.276170 / 0.255139 (0.021031) | 0.293673 / 0.283200 (0.010474) | 0.018018 / 0.141683 (-0.123665) | 1.158453 / 1.452155 (-0.293701) | 1.198599 / 1.492716 (-0.294118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093134 / 0.018006 (0.075127) | 0.304511 / 0.000490 (0.304021) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022991 / 0.037411 (-0.014421) | 0.077548 / 0.014526 (0.063022) | 0.087887 / 0.176557 (-0.088670) | 0.131786 / 0.737135 (-0.605349) | 0.088747 / 0.296338 (-0.207591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302811 / 0.215209 (0.087602) | 2.959276 / 2.077655 (0.881621) | 1.591348 / 1.504120 (0.087229) | 1.464731 / 1.541195 (-0.076464) | 1.474112 / 1.468490 (0.005622) | 0.741573 / 4.584777 (-3.843204) | 0.959229 / 3.745712 (-2.786483) | 2.895750 / 5.269862 (-2.374111) | 1.896051 / 4.565676 (-2.669625) | 0.079012 / 0.424275 (-0.345264) | 0.005494 / 0.007607 (-0.002113) | 0.355699 / 0.226044 (0.129655) | 3.524833 / 2.268929 (1.255905) | 1.972358 / 55.444624 (-53.472266) | 1.667249 / 6.876477 (-5.209228) | 1.658635 / 2.142072 (-0.483438) | 0.813184 / 4.805227 (-3.992044) | 0.134226 / 6.500664 (-6.366438) | 0.041087 / 0.075469 (-0.034382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038963 / 1.841788 (-0.802824) | 11.785835 / 8.074308 (3.711526) | 10.397027 / 10.191392 (0.205635) | 0.141748 / 0.680424 (-0.538676) | 0.014738 / 0.534201 (-0.519463) | 0.300056 / 0.579283 (-0.279227) | 0.127442 / 0.434364 (-0.306922) | 0.345013 / 0.540337 (-0.195324) | 0.449598 / 1.386936 (-0.937338) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70bac27ef861b2b11f581a291a6b76adeee24f98 \"CML watermark\")\n"
] | 2024-08-21T11:14:59 | 2024-08-21T11:42:35 | 2024-08-21T11:36:50 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7119",
"merged_at": "2024-08-21T11:36:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7119"
} | Install transformers with numpy-2 CI.
Note that transformers no longer pins numpy < 2 since transformers-4.43.0:
- https://github.com/huggingface/transformers/pull/32018
- https://github.com/huggingface/transformers/releases/tag/v4.43.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7119/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7118/comments | https://api.github.com/repos/huggingface/datasets/issues/7118/events | https://github.com/huggingface/datasets/pull/7118 | 2,477,676,893 | PR_kwDODunzps549yu4 | 7,118 | Allow numpy-2.1 and test it without audio extra | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_7118). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005674 / 0.011353 (-0.005679) | 0.003919 / 0.011008 (-0.007089) | 0.062665 / 0.038508 (0.024157) | 0.031750 / 0.023109 (0.008641) | 0.234809 / 0.275898 (-0.041089) | 0.264454 / 0.323480 (-0.059026) | 0.004265 / 0.007986 (-0.003720) | 0.002757 / 0.004328 (-0.001572) | 0.048921 / 0.004250 (0.044671) | 0.050765 / 0.037052 (0.013713) | 0.246185 / 0.258489 (-0.012305) | 0.287011 / 0.293841 (-0.006829) | 0.030754 / 0.128546 (-0.097792) | 0.012368 / 0.075646 (-0.063278) | 0.203841 / 0.419271 (-0.215431) | 0.037579 / 0.043533 (-0.005953) | 0.238165 / 0.255139 (-0.016974) | 0.264375 / 0.283200 (-0.018824) | 0.018663 / 0.141683 (-0.123020) | 1.143897 / 1.452155 (-0.308258) | 1.218130 / 1.492716 (-0.274586) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102112 / 0.018006 (0.084106) | 0.303214 / 0.000490 (0.302724) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019401 / 0.037411 (-0.018010) | 0.062444 / 0.014526 (0.047919) | 0.076497 / 0.176557 (-0.100060) | 0.122309 / 0.737135 (-0.614826) | 0.077178 / 0.296338 (-0.219160) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282931 / 0.215209 (0.067722) | 2.783587 / 2.077655 (0.705932) | 1.464076 / 1.504120 (-0.040044) | 1.333912 / 1.541195 (-0.207282) | 1.367391 / 
1.468490 (-0.101099) | 0.736702 / 4.584777 (-3.848075) | 2.413625 / 3.745712 (-1.332087) | 2.949549 / 5.269862 (-2.320313) | 1.910308 / 4.565676 (-2.655369) | 0.077419 / 0.424275 (-0.346856) | 0.005159 / 0.007607 (-0.002448) | 0.345595 / 0.226044 (0.119551) | 3.433205 / 2.268929 (1.164277) | 1.844443 / 55.444624 (-53.600181) | 1.527475 / 6.876477 (-5.349002) | 1.544315 / 2.142072 (-0.597758) | 0.803942 / 4.805227 (-4.001285) | 0.134131 / 6.500664 (-6.366533) | 0.042638 / 0.075469 (-0.032831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975158 / 1.841788 (-0.866629) | 11.726187 / 8.074308 (3.651879) | 9.403347 / 10.191392 (-0.788045) | 0.131583 / 0.680424 (-0.548840) | 0.014358 / 0.534201 (-0.519843) | 0.301360 / 0.579283 (-0.277923) | 0.266529 / 0.434364 (-0.167835) | 0.341669 / 0.540337 (-0.198668) | 0.425751 / 1.386936 (-0.961186) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005911 / 0.011353 (-0.005442) | 0.004093 / 0.011008 (-0.006915) | 0.049936 / 0.038508 (0.011428) | 0.031828 / 0.023109 (0.008719) | 0.273874 / 0.275898 (-0.002025) | 0.296871 / 0.323480 (-0.026609) | 0.004470 / 0.007986 (-0.003516) | 0.002902 / 0.004328 (-0.001426) | 0.048848 / 0.004250 (0.044597) | 0.042320 / 0.037052 (0.005268) | 0.287957 / 0.258489 (0.029468) | 0.321033 / 0.293841 (0.027192) | 0.032996 / 0.128546 (-0.095550) | 0.012244 / 0.075646 (-0.063403) | 0.060493 / 0.419271 (-0.358779) | 0.034630 / 0.043533 (-0.008902) | 0.277254 / 0.255139 (0.022115) | 0.292822 / 0.283200 (0.009623) | 0.017966 / 0.141683 (-0.123717) | 1.167432 / 1.452155 (-0.284723) | 1.231837 / 1.492716 (-0.260880) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099970 / 0.018006 (0.081964) | 0.313240 / 0.000490 (0.312750) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022928 / 0.037411 (-0.014483) | 0.077058 / 0.014526 (0.062532) | 0.090147 / 0.176557 (-0.086409) | 0.129416 / 0.737135 (-0.607720) | 0.091021 / 0.296338 (-0.205318) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300697 / 0.215209 (0.085488) | 2.944649 / 2.077655 (0.866995) | 1.609106 / 1.504120 (0.104986) | 1.483762 / 1.541195 (-0.057433) | 1.519433 / 1.468490 (0.050943) | 0.714129 / 4.584777 (-3.870648) | 0.991848 / 3.745712 (-2.753864) | 2.966340 / 5.269862 (-2.303521) | 1.905427 / 4.565676 (-2.660249) | 0.079041 / 0.424275 (-0.345234) | 0.005671 / 0.007607 (-0.001936) | 0.356037 / 0.226044 (0.129993) | 3.504599 / 2.268929 (1.235670) | 1.979207 / 55.444624 (-53.465417) | 1.695030 / 6.876477 (-5.181447) | 1.703978 / 2.142072 (-0.438095) | 0.800871 / 4.805227 (-4.004357) | 0.134414 / 6.500664 (-6.366250) | 0.041743 / 0.075469 (-0.033726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029879 / 1.841788 (-0.811909) | 12.132252 / 8.074308 (4.057944) | 10.596576 / 10.191392 (0.405184) | 0.132237 / 0.680424 (-0.548187) | 0.016239 / 0.534201 (-0.517962) | 0.301831 / 0.579283 (-0.277452) | 0.127966 / 0.434364 (-0.306398) | 0.341081 / 0.540337 (-0.199256) | 0.448996 / 1.386936 (-0.937940) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0a0fa48a68c3502edfa50273b881f909e4e6e70c \"CML watermark\")\n"
] | 2024-08-21T10:29:35 | 2024-08-21T11:05:03 | 2024-08-21T10:58:15 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7118",
"merged_at": "2024-08-21T10:58:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7118"
} | Allow numpy-2.1 and test it without audio extra.
This PR reverts:
- #7114
Note that audio extra tests can be included again with numpy-2.1 once next numba-0.61.0 version is released. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7118/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7117/comments | https://api.github.com/repos/huggingface/datasets/issues/7117/events | https://github.com/huggingface/datasets/issues/7117 | 2,476,555,659 | I_kwDODunzps6TnT2L | 7,117 | Audio dataset load everything in RAM and is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4",
"events_url": "https://api.github.com/users/Jourdelune/events{/privacy}",
"followers_url": "https://api.github.com/users/Jourdelune/followers",
"following_url": "https://api.github.com/users/Jourdelune/following{/other_user}",
"gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jourdelune",
"id": 64205064,
"login": "Jourdelune",
"node_id": "MDQ6VXNlcjY0MjA1MDY0",
"organizations_url": "https://api.github.com/users/Jourdelune/orgs",
"received_events_url": "https://api.github.com/users/Jourdelune/received_events",
"repos_url": "https://api.github.com/users/Jourdelune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jourdelune"
} | [] | open | false | null | [] | null | [
"Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\r\n return {\"transcribed\": True}\r\n```\r\n\r\nPS: no need to iter on the dataset to trigger the `map` function on a `Dataset` - `map` runs directly when it's called (contrary to `IterableDataset` taht you can get when streaming, which are lazy)",
"No, that doesn't change anything, I manage to solve this problem by setting with_indices=True in the map function and directly retrieving the audio corresponding to the index.\r\n```py\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\nds = load_dataset(\"WaveGenAI/audios2\", split=\"train[:50]\")\r\n\r\n\r\n# map the dataset\r\ndef transcribe_audio(row, idx):\r\n audio = ds[idx][\"audio\"] # get the audio but do nothing with it\r\n row[\"transcribed\"] = True\r\n return row\r\n\r\n\r\ntime1 = time.time()\r\nds = ds.map(\r\n transcribe_audio, with_indices=True\r\n) # set low writer_batch_size to avoid memory issues\r\n\r\nfor row in ds:\r\n pass # do nothing, just iterate to trigger the map function\r\n\r\nprint(f\"Time taken: {time.time() - time1:.2f} seconds\")\r\n```",
"Hmm maybe accessing `row[\"audio\"]` makes `map()` reencode what's inside `row[\"audio\"]` in case there are in-place modifications"
] | 2024-08-20T21:18:12 | 2024-08-26T13:11:55 | null | NONE | null | null | null | Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage is too high, the program crashes.
To work around this, I set `writer_batch_size` to 10, but then mapping the dataset is extremely slow.
To illustrate this on 50 examples: with `writer_batch_size` set to 10 it takes 123.24 seconds to process the dataset, whereas without it the processing takes about ten seconds but the process then remains blocked (I assume that it is then writing the dataset and therefore suffers from the same problem as with `writer_batch_size`).
### Steps to reproduce the bug
High RAM usage but fast (but actually slow when saving the dataset):
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio
)
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
Low RAM usage but very, very slow:
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio, writer_batch_size=10
) # set low writer_batch_size to avoid memory issues
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
### Expected behavior
I think the processing should be much faster, on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio).
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.10.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2024.6.1
# Extra
The dataset has been generated by using audio folder, so I don't think anything specific in my code is causing this problem.
```py
import argparse
from datasets import load_dataset
parser = argparse.ArgumentParser()
parser.add_argument("--folder", help="folder path", default="/media/works/test/")
args = parser.parse_args()
dataset = load_dataset("audiofolder", data_dir=args.folder)
# push the dataset to hub
dataset.push_to_hub("WaveGenAI/audios")
```
Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` that causes problems; `row["transcribed"] = True` alone does nothing, and `audio = row["audio"]` alone sometimes causes problems, sometimes not. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7117/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7116/comments | https://api.github.com/repos/huggingface/datasets/issues/7116/events | https://github.com/huggingface/datasets/issues/7116 | 2,475,522,721 | I_kwDODunzps6TjXqh | 7,116 | datasets cannot handle nested json if features is given. | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420"
} | [] | closed | false | null | [] | null | [
"Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```",
"> Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n> \r\n> ```python\r\n> ds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n> 'ref1': datasets.Value('string'),\r\n> 'ref2': datasets.Value('string'),\r\n> 'cuts': [{\r\n> \"cut1\": datasets.Value(\"uint16\"),\r\n> \"cut2\": datasets.Value(\"uint16\")\r\n> }]\r\n> }))\r\n> ```\r\nThank you!\r\n",
"It works."
] | 2024-08-20T12:27:49 | 2024-09-03T10:18:23 | 2024-09-03T10:18:07 | NONE | null | null | null | ### Describe the bug
I have a JSON file named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7116/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7115/comments | https://api.github.com/repos/huggingface/datasets/issues/7115/events | https://github.com/huggingface/datasets/issues/7115 | 2,475,363,142 | I_kwDODunzps6TiwtG | 7,115 | module 'pyarrow.lib' has no attribute 'ListViewType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/175128880?v=4",
"events_url": "https://api.github.com/users/neurafusionai/events{/privacy}",
"followers_url": "https://api.github.com/users/neurafusionai/followers",
"following_url": "https://api.github.com/users/neurafusionai/following{/other_user}",
"gists_url": "https://api.github.com/users/neurafusionai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neurafusionai",
"id": 175128880,
"login": "neurafusionai",
"node_id": "U_kgDOCnBBMA",
"organizations_url": "https://api.github.com/users/neurafusionai/orgs",
"received_events_url": "https://api.github.com/users/neurafusionai/received_events",
"repos_url": "https://api.github.com/users/neurafusionai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neurafusionai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neurafusionai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neurafusionai"
} | [] | open | false | null | [] | null | [
"https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked"
] | 2024-08-20T11:05:44 | 2024-08-20T12:06:20 | null | NONE | null | null | null | ### Describe the bug
Code:
`!pip uninstall -y pyarrow
!pip install --no-cache-dir pyarrow
!pip uninstall -y pyarrow
!pip install pyarrow --no-cache-dir
!pip install --upgrade datasets transformers pyarrow
!pip install pyarrow.parquet
! pip install pyarrow-core libparquet
!pip install pyarrow --no-cache-dir
!pip install pyarrow
!pip install transformers
!pip install --upgrade datasets
!pip install datasets
! pip install pyarrow
! pip install pyarrow.lib
! pip install pyarrow.parquet
!pip install transformers
import pyarrow as pa
print(pa.__version__)
from datasets import load_dataset
import pyarrow.parquet as pq
import pyarrow.lib as lib
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
from transformers import AutoTokenizer
! pip install pyarrow-core libparquet
# Load the dataset for content moderation
dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support
# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
# Tokenize the dataset
def tokenize_function(examples):
return tokenizer(examples['text'], padding="max_length", truncation=True)
# Apply tokenization to the entire dataset
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# Check the first few tokenized samples
print(tokenized_datasets['train'][0])
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
# Load the model
model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
eval_strategy="epoch", #
save_strategy="epoch",
logging_dir="./logs",
learning_rate=2e-5,
)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
)
# Train the model
trainer.train()
# Evaluate the model
trainer.evaluate()
`
AttributeError Traceback (most recent call last)
[<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>()
20
21
---> 22 from datasets import load_dataset
23 import pyarrow.parquet as pq
24 import pyarrow.lib as lib
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
15 __version__ = "2.21.0"
16
---> 17 from .arrow_dataset import Dataset
18 from .arrow_reader import ReadInstruction
19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
74
75 from . import config
---> 76 from .arrow_reader import ArrowReader
77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
78 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
31
32 try:
---> 33 import pyarrow._parquet as _parquet
34 except ImportError as exc:
35 raise ImportError(
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
### Steps to reproduce the bug
https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing
### Expected behavior
Looks like there is an issue with datasets and pyarrow
### Environment info
google colab
python
huggingface
Found existing installation: pyarrow 17.0.0
Uninstalling pyarrow-17.0.0:
Successfully uninstalled pyarrow-17.0.0
Collecting pyarrow
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4)
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00
Installing collected packages: pyarrow
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
Successfully installed pyarrow-17.0.0
WARNING: The following packages were previously imported in this runtime:
[pyarrow]
You must restart the runtime in order to use newly installed versions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7115/timeline | null | null | false |
Dataset card for GitHub issues demo
Dataset summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
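As a quick sketch of how this dump can be consumed (the repository id in the snippet is a placeholder, since the card does not state the exact Hub id), the dataset can be loaded and filtered with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub id hosting this dump.
issues_dataset = load_dataset("<username>/github-issues", split="train")

# Each row mirrors the GitHub REST API payload shown in the preview above:
# `title` and `body` hold the issue text, `comments` is a list of comment
# strings, and `is_pull_request` distinguishes pull requests from plain issues.
issues_only = issues_dataset.filter(lambda row: not row["is_pull_request"])
print(issues_only[0]["title"])
print(len(issues_only[0]["comments"]), "comments")
```

For semantic search or multilabel classification, the `title`, `body`, and `comments` columns are the natural text fields to index.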