url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
2.13B
| node_id
stringlengths 18
32
| number
int64 1
6.66k
| title
stringlengths 1
290
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
list | comments
sequence | created_at
timestamp[ms] | updated_at
timestamp[ms] | closed_at
timestamp[ms] | author_association
stringclasses 3
values | draft
bool 2
classes | pull_request
dict | body
stringlengths 0
228k
⌀ | reactions
dict | timeline_url
stringlengths 67
70
| state_reason
stringclasses 3
values | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6662/comments | https://api.github.com/repos/huggingface/datasets/issues/6662/events | https://github.com/huggingface/datasets/pull/6662 | 2,132,425,812 | PR_kwDODunzps5mwgKP | 6,662 | fix: show correct package name to install biopython | {
"login": "BioGeek",
"id": 59344,
"node_id": "MDQ6VXNlcjU5MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/59344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BioGeek",
"html_url": "https://github.com/BioGeek",
"followers_url": "https://api.github.com/users/BioGeek/followers",
"following_url": "https://api.github.com/users/BioGeek/following{/other_user}",
"gists_url": "https://api.github.com/users/BioGeek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BioGeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BioGeek/subscriptions",
"organizations_url": "https://api.github.com/users/BioGeek/orgs",
"repos_url": "https://api.github.com/users/BioGeek/repos",
"events_url": "https://api.github.com/users/BioGeek/events{/privacy}",
"received_events_url": "https://api.github.com/users/BioGeek/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2024-02-13T14:15:04 | 2024-02-13T14:16:08 | null | NONE | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6662",
"html_url": "https://github.com/huggingface/datasets/pull/6662",
"diff_url": "https://github.com/huggingface/datasets/pull/6662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6662.patch",
"merged_at": null
} | When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("InstaDeepAI/multi_species_genomes")
/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py:1454: FutureWarning: The repository for InstaDeepAI/multi_species_genomes contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/InstaDeepAI/multi_species_genomes
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Downloading builder script: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.51k/7.51k [00:00<00:00, 7.67MB/s]
Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.2k/17.2k [00:00<00:00, 11.0MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2548, in load_dataset
builder_instance = load_dataset_builder(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2220, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1844, in dataset_module_factory
).get_module()
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1466, in get_module
local_imports = _download_additional_modules(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 346, in _download_additional_modules
raise ImportError(
ImportError: To be able to use InstaDeepAI/multi_species_genomes, you need to install the following dependency: Bio.
Please install it using 'pip install Bio' for instance.
>>>
```
`Bio` comes from the `biopython` package that can be installed with `pip install biopython`.
This PR adds special logic to show the correct package name in the error message of ` _download_additional_modules`, similarly as is done for `sklearn` / `scikit-learn` already. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6662/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6661/comments | https://api.github.com/repos/huggingface/datasets/issues/6661/events | https://github.com/huggingface/datasets/issues/6661 | 2,132,296,267 | I_kwDODunzps5_GEJL | 6,661 | Import error on Google Colab | {
"login": "kithogue",
"id": 16103566,
"node_id": "MDQ6VXNlcjE2MTAzNTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/16103566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kithogue",
"html_url": "https://github.com/kithogue",
"followers_url": "https://api.github.com/users/kithogue/followers",
"following_url": "https://api.github.com/users/kithogue/following{/other_user}",
"gists_url": "https://api.github.com/users/kithogue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kithogue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kithogue/subscriptions",
"organizations_url": "https://api.github.com/users/kithogue/orgs",
"repos_url": "https://api.github.com/users/kithogue/repos",
"events_url": "https://api.github.com/users/kithogue/events{/privacy}",
"received_events_url": "https://api.github.com/users/kithogue/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert the `import os; os.kill(os.getpid(), 9)` cell between `!pip install -U datasets` and `import datasets` to do the same programmatically.",
"One possible cause might be the one pointed out by @mariosasko above, and you get the following warning on Colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\n\r\nOn the other hand, if the old version of `pyarrow` is not previously imported (before the installation of `datasets`), the reported issue here is not reproducible: `datasets` can be installed, imported and used on Colab."
] | 2024-02-13T13:12:40 | 2024-02-13T16:59:49 | null | NONE | null | null | ### Describe the bug
Cannot be imported on Google Colab, the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import datasets`
### Expected behavior
Should be possible to use the library
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6661/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6660/comments | https://api.github.com/repos/huggingface/datasets/issues/6660/events | https://github.com/huggingface/datasets/pull/6660 | 2,131,977,011 | PR_kwDODunzps5mu9wU | 6,660 | Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes | {
"login": "mohalisad",
"id": 23399590,
"node_id": "MDQ6VXNlcjIzMzk5NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/23399590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohalisad",
"html_url": "https://github.com/mohalisad",
"followers_url": "https://api.github.com/users/mohalisad/followers",
"following_url": "https://api.github.com/users/mohalisad/following{/other_user}",
"gists_url": "https://api.github.com/users/mohalisad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohalisad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohalisad/subscriptions",
"organizations_url": "https://api.github.com/users/mohalisad/orgs",
"repos_url": "https://api.github.com/users/mohalisad/repos",
"events_url": "https://api.github.com/users/mohalisad/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohalisad/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2024-02-13T10:24:33 | 2024-02-13T10:24:33 | null | NONE | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6660",
"html_url": "https://github.com/huggingface/datasets/pull/6660",
"diff_url": "https://github.com/huggingface/datasets/pull/6660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6660.patch",
"merged_at": null
} | This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example:
```python
from datasets import Dataset, Sequence, Value, Features
def gen():
for i in range(100):
yield {'seq': list(range(i, i + 20))}
ds = Dataset.from_generator(gen, features=Features({'seq': Sequence(feature=Value(dtype='uint16'), length=-1)}))
ds.set_format('torch')
print(ds[0])
```
This code snippet triggers the following error due to the inability to convert numpy.uint16 arrays to a PyTorch-supported format:
```
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
```
This PR introduces an automatic mechanism to convert np.uint16 and np.uint32 datatypes to np.int64 for seamless compatibility with PyTorch formats, simplifying workflows and improving developer experience by eliminating the need for manual conversion handling. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6660/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6660/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6659/comments | https://api.github.com/repos/huggingface/datasets/issues/6659/events | https://github.com/huggingface/datasets/pull/6659 | 2,129,229,810 | PR_kwDODunzps5mlmmo | 6,659 | Change default compression argument for JsonDatasetWriter | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6659). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-11T23:49:07 | 2024-02-12T20:01:22 | null | NONE | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6659",
"html_url": "https://github.com/huggingface/datasets/pull/6659",
"diff_url": "https://github.com/huggingface/datasets/pull/6659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6659.patch",
"merged_at": null
} | Change default compression type from `None` to "infer", to align with pandas' defaults.
Documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas' by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html) for compression, datasets enforce `None` as default. This, likely, confuses user, as they expect the same behaviour, i.e they expect that if they name their output file as "dataset.jsonl.zst" then the compression would be inferred as "zstd" and file will be compressed before writing.
Moreover, while it is probably outside of the scope of this pull request, `compression` argument needs to be capable of taking `dict` as input (along with `str`), as it does in pandas, in order to allow user to specify compression parameters. Current implementation will likely fail with `NotImplementedError`, as it expects either `None` or `str` specifying compression algo. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6659/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6658/comments | https://api.github.com/repos/huggingface/datasets/issues/6658/events | https://github.com/huggingface/datasets/pull/6658 | 2,129,158,371 | PR_kwDODunzps5mlZyb | 6,658 | [Resumable IterableDataset] Add IterableDataset state_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6658). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-11T20:35:52 | 2024-02-12T12:24:32 | null | MEMBER | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6658",
"html_url": "https://github.com/huggingface/datasets/pull/6658",
"diff_url": "https://github.com/huggingface/datasets/pull/6658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6658.patch",
"merged_at": null
} | A simple implementation of a mechanism to resume an IterableDataset.
This is WIP and untested.
Example:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3)
ds = concatenate_datasets([ds] * 2)
print(f"{ds.state_dict()=}")
for i, example in enumerate(ds):
print(example)
if i == 6:
state_dict = ds.state_dict()
ds.load_state_dict(state_dict)
print(f"{ds.state_dict()=}")
for example in ds:
print(example)
```
returns
```
ds.state_dict()={'ex_iterable_idx': 0, 'ex_iterables': [{'shard_idx': 0, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 0}]}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
ds.state_dict()={'ex_iterable_idx': 1, 'ex_iterables': [{'shard_idx': 3, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 2}]}
{'a': 2}
{'a': 3}
{'a': 4}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6658/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6658/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6657/comments | https://api.github.com/repos/huggingface/datasets/issues/6657/events | https://github.com/huggingface/datasets/issues/6657 | 2,129,147,085 | I_kwDODunzps5-6DTN | 6,657 | Release not pushed to conda channel | {
"login": "atulsaurav",
"id": 7138162,
"node_id": "MDQ6VXNlcjcxMzgxNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7138162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atulsaurav",
"html_url": "https://github.com/atulsaurav",
"followers_url": "https://api.github.com/users/atulsaurav/followers",
"following_url": "https://api.github.com/users/atulsaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/atulsaurav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atulsaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atulsaurav/subscriptions",
"organizations_url": "https://api.github.com/users/atulsaurav/orgs",
"repos_url": "https://api.github.com/users/atulsaurav/repos",
"events_url": "https://api.github.com/users/atulsaurav/events{/privacy}",
"received_events_url": "https://api.github.com/users/atulsaurav/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @atulsaurav.\r\n\r\nWe are investigating the issue. ",
"I can't fix this issue because I do not appear as a team member of the huggingface datasets project: https://anaconda.org/huggingface/datasets\r\n\r\n@lhoestq could you please add `datasets` team members to the corresponding Anaconda project?\r\n\r\nOnce this done, I could recreate and update the Anaconda token, as mentioned above it seems the current one has expired.",
"I think @LysandreJik has access ?"
] | 2024-02-11T20:05:17 | 2024-02-12T14:29:36 | null | NONE | null | null | ### Describe the bug
The github actions step to publish the release 2.17.0 to conda channel has failed due to expired token. Can some one please update the anaconda token rerun the failed action? @albertvillanova ?
![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700)
### Steps to reproduce the bug
Please see this actions [link](https://github.com/huggingface/datasets/actions/runs/7842473662)
### Expected behavior
The action runs successfully and the latest release is pushed to HuggingFace conda channel
### Environment info
Not applicable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6657/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6656/comments | https://api.github.com/repos/huggingface/datasets/issues/6656/events | https://github.com/huggingface/datasets/issues/6656 | 2,127,338,377 | I_kwDODunzps5-zJuJ | 6,656 | Error when loading a big local json file | {
"login": "Riccorl",
"id": 10062216,
"node_id": "MDQ6VXNlcjEwMDYyMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Riccorl",
"html_url": "https://github.com/Riccorl",
"followers_url": "https://api.github.com/users/Riccorl/followers",
"following_url": "https://api.github.com/users/Riccorl/following{/other_user}",
"gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions",
"organizations_url": "https://api.github.com/users/Riccorl/orgs",
"repos_url": "https://api.github.com/users/Riccorl/repos",
"events_url": "https://api.github.com/users/Riccorl/events{/privacy}",
"received_events_url": "https://api.github.com/users/Riccorl/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2024-02-09T15:14:21 | 2024-02-09T15:14:21 | null | NONE | null | null | ### Describe the bug
When trying to load big json files from a local directory, `load_dataset` throws the following error
```
Traceback (most recent call last):
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
### Steps to reproduce the bug
1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz`
2. Load it like `data = load_dataset("json", data_files=["nq-train.json"], split="train")`
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-train.json"], split="train")
```
A similarly formatted but smaller file, e.g. e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz` is loaded without issues
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-dev.json"], split="train")
```
### Expected behavior
It should load normally
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6656/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6655/comments | https://api.github.com/repos/huggingface/datasets/issues/6655/events | https://github.com/huggingface/datasets/issues/6655 | 2,127,020,042 | I_kwDODunzps5-x8AK | 6,655 | Cannot load the dataset go_emotions | {
"login": "arame",
"id": 688324,
"node_id": "MDQ6VXNlcjY4ODMyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arame",
"html_url": "https://github.com/arame",
"followers_url": "https://api.github.com/users/arame/followers",
"following_url": "https://api.github.com/users/arame/following{/other_user}",
"gists_url": "https://api.github.com/users/arame/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arame/subscriptions",
"organizations_url": "https://api.github.com/users/arame/orgs",
"repos_url": "https://api.github.com/users/arame/repos",
"events_url": "https://api.github.com/users/arame/events{/privacy}",
"received_events_url": "https://api.github.com/users/arame/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n",
"The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.",
"> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.",
"I tried running the code today and the problem appears to be fixed."
] | 2024-02-09T12:15:39 | 2024-02-12T09:35:55 | null | NONE | null | null | ### Describe the bug
When I run the following code I get an exception;
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1)
----> [1](vscode-notebook-cell:?execution_count=6&line=1) go_emotions = load_dataset("go_emotions")
[2](vscode-notebook-cell:?execution_count=6&line=2) data = go_emotions.data
File [c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
[2518](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2518) verification_mode = VerificationMode(
[2519](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2519) (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
[2520](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2520) )
[2522](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2522) # Create a dataset builder
-> [2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523) builder_instance = load_dataset_builder(
[2524](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2524) path=path,
[2525](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2525) name=name,
[2526](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2526) data_dir=data_dir,
[2527](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2527) data_files=data_files,
[2528](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2528) cache_dir=cache_dir,
[2529](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2529) features=features,
[2530](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2530) download_config=download_config,
[2531](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2531) download_mode=download_mode,
[2532](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2532) revision=revision,
[2533](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2533) token=token,
[2534](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2534) storage_options=storage_options,
[2535](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2535) trust_remote_code=trust_remote_code,
[2536](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2536) _require_default_config_name=name is None,
...
---> [63](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:63) if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
[64](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:64) pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
[66](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:66) # Unwrap `torch.compile`-ed functions
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?10bc0728-6947-456e-9a3e-f056872b04c6) or open in a [text editor](command:workbench.action.openLargeOutput?10bc0728-6947-456e-9a3e-f056872b04c6). Adjust cell output [settings](command:workbench.action.openSettings?%5B%22%40tag%3AnotebookOutputLayout%22%5D)...
### Steps to reproduce the bug
```
from datasets import load_dataset
go_emotions = load_dataset("go_emotions")
```
### Expected behavior
Should simply load the variable with the data from the file
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6655/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6654/comments | https://api.github.com/repos/huggingface/datasets/issues/6654/events | https://github.com/huggingface/datasets/issues/6654 | 2,126,939,358 | I_kwDODunzps5-xoTe | 6,654 | Batched dataset map throws exception that cannot cast fixed length array to Sequence | {
"login": "keesjandevries",
"id": 1029671,
"node_id": "MDQ6VXNlcjEwMjk2NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keesjandevries",
"html_url": "https://github.com/keesjandevries",
"followers_url": "https://api.github.com/users/keesjandevries/followers",
"following_url": "https://api.github.com/users/keesjandevries/following{/other_user}",
"gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions",
"organizations_url": "https://api.github.com/users/keesjandevries/orgs",
"repos_url": "https://api.github.com/users/keesjandevries/repos",
"events_url": "https://api.github.com/users/keesjandevries/events{/privacy}",
"received_events_url": "https://api.github.com/users/keesjandevries/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n",
"Amazing! It's indeed fixed now. Thanks!"
] | 2024-02-09T11:23:19 | 2024-02-12T08:26:53 | 2024-02-12T08:26:53 | NONE | null | null | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths.
### Steps to reproduce the bug
Create virtual environment and activate
```
virtualenv venv
source venv/bin/activate
```
Then install the datasets package (I'm using the latest version)
```
pip install datasets==2.16.1
```
Then run
```python
# bug.py
from datasets import Dataset
from datasets.features import Features, Sequence, Value
data = {
"num": [[1, 2], [3, 4]],
}
features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)})
dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```
### Expected behavior
I get the following stack trace
```
Map: 50%|█████ | 1/2 [00:00<00:00, 423.92 examples/s]
Traceback (most recent call last):
File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module>
dataset.map(lambda x: x, batched=True, batch_size=1)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[2]
to
Sequence(feature=Value(dtype='int32', id=None), length=2, id=None)
```
After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py`
```python
# datasets/table.py
...
2093 if feature.length * len(array) == len(array_values):
2094 return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
...
```
### Environment info
Platform: MacOS
Datasets version: datasets==2.16.1
Python version: 3.9.6 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6654/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6653/comments | https://api.github.com/repos/huggingface/datasets/issues/6653/events | https://github.com/huggingface/datasets/pull/6653 | 2,126,831,929 | PR_kwDODunzps5mdv5S | 6,653 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003424 / 0.011008 (-0.007584) | 0.064195 / 0.038508 (0.025687) | 0.031742 / 0.023109 (0.008633) | 0.244774 / 0.275898 (-0.031124) | 0.268529 / 0.323480 (-0.054951) | 0.003970 / 0.007986 (-0.004016) | 0.002657 / 0.004328 (-0.001672) | 0.048847 / 0.004250 (0.044597) | 0.042196 / 0.037052 (0.005144) | 0.266044 / 0.258489 (0.007555) | 0.282400 / 0.293841 (-0.011441) | 0.027617 / 0.128546 (-0.100929) | 0.010400 / 0.075646 (-0.065246) | 0.205910 / 0.419271 (-0.213362) | 0.035820 / 0.043533 (-0.007713) | 0.247750 / 0.255139 (-0.007389) | 0.267318 / 0.283200 (-0.015882) | 0.017980 / 0.141683 (-0.123703) | 1.107263 / 1.452155 (-0.344892) | 1.173208 / 1.492716 (-0.319509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095830 / 0.018006 (0.077824) | 0.293891 / 0.000490 (0.293401) | 0.000257 / 0.000200 (0.000057) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018138 / 0.037411 (-0.019273) | 0.061631 / 0.014526 (0.047105) | 0.073038 / 0.176557 (-0.103519) | 0.118317 / 0.737135 (-0.618818) | 0.074190 / 0.296338 (-0.222148) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287026 / 0.215209 (0.071817) | 2.786137 / 2.077655 (0.708482) | 1.472575 / 1.504120 (-0.031544) | 1.346919 / 1.541195 (-0.194276) | 1.388535 / 1.468490 (-0.079955) | 0.565731 / 4.584777 (-4.019046) | 2.382573 / 3.745712 (-1.363139) | 2.736926 / 5.269862 (-2.532935) | 1.716517 / 4.565676 (-2.849159) | 0.062168 / 0.424275 (-0.362108) | 0.004924 / 0.007607 (-0.002683) | 0.341897 / 0.226044 (0.115853) | 3.355715 / 2.268929 (1.086787) | 1.837014 / 55.444624 (-53.607611) | 1.532063 / 6.876477 (-5.344414) | 1.548193 / 2.142072 (-0.593880) | 0.634995 / 4.805227 (-4.170232) | 0.115622 / 6.500664 (-6.385042) | 0.042252 / 0.075469 (-0.033217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970713 / 1.841788 (-0.871075) | 11.727576 / 8.074308 (3.653268) | 9.806524 / 10.191392 (-0.384868) | 0.127622 / 0.680424 (-0.552802) | 0.014140 / 0.534201 (-0.520061) | 0.286832 / 0.579283 (-0.292451) | 0.266556 / 0.434364 (-0.167808) | 0.325940 / 0.540337 (-0.214398) | 0.421839 / 1.386936 (-0.965097) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005495 / 0.011353 (-0.005858) | 0.003676 / 0.011008 (-0.007332) | 0.054361 / 0.038508 (0.015853) | 0.030743 / 0.023109 (0.007633) | 0.277200 / 0.275898 (0.001302) | 0.313459 / 0.323480 (-0.010021) | 0.004316 / 0.007986 (-0.003670) | 0.002750 / 0.004328 (-0.001578) | 0.049491 / 0.004250 (0.045241) | 0.044268 / 0.037052 (0.007215) | 0.292529 / 0.258489 (0.034039) | 0.326524 / 0.293841 (0.032683) | 0.048040 / 0.128546 (-0.080507) | 0.010390 / 0.075646 (-0.065256) | 0.058459 / 0.419271 (-0.360813) | 0.033765 / 0.043533 (-0.009768) | 0.276003 / 0.255139 (0.020864) | 0.297299 / 0.283200 (0.014099) | 0.018532 / 0.141683 (-0.123151) | 1.157639 / 1.452155 (-0.294515) | 1.220492 / 1.492716 (-0.272225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093903 / 0.018006 (0.075897) | 0.303005 / 0.000490 (0.302515) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021580 / 0.037411 (-0.015831) | 0.076176 / 0.014526 (0.061650) | 0.086998 / 0.176557 (-0.089558) | 0.124148 / 0.737135 (-0.612987) | 0.088613 / 0.296338 (-0.207725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300623 / 0.215209 (0.085414) | 2.911876 / 2.077655 (0.834221) | 1.588398 / 1.504120 (0.084278) | 1.471251 / 1.541195 (-0.069944) | 1.505528 / 1.468490 (0.037038) | 0.570635 / 4.584777 (-4.014142) | 2.485769 / 3.745712 (-1.259943) | 2.785355 / 5.269862 (-2.484507) | 1.752944 / 4.565676 (-2.812732) | 0.063146 / 0.424275 (-0.361129) | 0.004980 / 0.007607 (-0.002627) | 0.354577 / 0.226044 (0.128532) | 3.477181 / 2.268929 (1.208253) | 1.951906 / 55.444624 (-53.492718) | 1.677169 / 6.876477 (-5.199307) | 1.686338 / 2.142072 (-0.455735) | 0.637156 / 4.805227 (-4.168071) | 0.117732 / 6.500664 (-6.382932) | 0.041091 / 0.075469 (-0.034378) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010071 / 1.841788 (-0.831717) | 12.172242 / 8.074308 (4.097934) | 10.422811 / 10.191392 (0.231419) | 0.137185 / 0.680424 (-0.543239) | 0.014643 / 0.534201 (-0.519558) | 0.287248 / 0.579283 (-0.292035) | 0.272779 / 0.434364 (-0.161585) | 0.331761 / 0.540337 (-0.208576) | 0.417266 / 1.386936 (-0.969670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9751fb14594d354e952f0ebdfaf31cb203b011e7 \"CML watermark\")\n"
] | 2024-02-09T10:12:02 | 2024-02-09T10:18:20 | 2024-02-09T10:12:12 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6653",
"html_url": "https://github.com/huggingface/datasets/pull/6653",
"diff_url": "https://github.com/huggingface/datasets/pull/6653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6653.patch",
"merged_at": "2024-02-09T10:12:12"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6653/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6652/comments | https://api.github.com/repos/huggingface/datasets/issues/6652/events | https://github.com/huggingface/datasets/pull/6652 | 2,126,760,798 | PR_kwDODunzps5mdgcv | 6,652 | Release: 2.17.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6652). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005207 / 0.011353 (-0.006145) | 0.003785 / 0.011008 (-0.007223) | 0.064221 / 0.038508 (0.025713) | 0.028981 / 0.023109 (0.005872) | 0.246215 / 0.275898 (-0.029683) | 0.268058 / 0.323480 (-0.055422) | 0.004028 / 0.007986 (-0.003958) | 0.002804 / 0.004328 (-0.001525) | 0.048878 / 0.004250 (0.044627) | 0.042641 / 0.037052 (0.005589) | 0.255590 / 0.258489 (-0.002899) | 0.287377 / 0.293841 (-0.006464) | 0.027772 / 0.128546 (-0.100774) | 0.010637 / 0.075646 (-0.065009) | 0.211526 / 0.419271 (-0.207746) | 0.035789 / 0.043533 (-0.007744) | 0.243042 / 0.255139 (-0.012097) | 0.268369 / 0.283200 (-0.014830) | 0.017907 / 0.141683 (-0.123776) | 1.138829 / 1.452155 (-0.313326) | 1.175732 / 1.492716 (-0.316984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094205 / 0.018006 (0.076199) | 0.304317 / 0.000490 (0.303827) | 0.000206 / 0.000200 (0.000006) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018424 / 0.037411 (-0.018987) | 0.061719 / 0.014526 (0.047193) | 0.073471 / 0.176557 (-0.103085) | 0.121577 / 0.737135 (-0.615558) | 0.075134 / 0.296338 (-0.221204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275178 / 0.215209 (0.059969) | 2.689222 / 2.077655 (0.611568) | 1.396680 / 1.504120 (-0.107439) | 1.278782 / 1.541195 (-0.262413) | 1.326632 / 1.468490 (-0.141858) | 0.566915 / 4.584777 (-4.017862) | 2.365928 / 3.745712 (-1.379784) | 2.785435 / 5.269862 (-2.484427) | 1.745131 / 4.565676 (-2.820546) | 0.062798 / 0.424275 (-0.361477) | 0.005107 / 0.007607 (-0.002500) | 0.330441 / 0.226044 (0.104396) | 3.266265 / 2.268929 (0.997337) | 1.792588 / 55.444624 (-53.652036) | 1.516021 / 6.876477 (-5.360455) | 1.562750 / 2.142072 (-0.579323) | 0.652964 / 4.805227 (-4.152264) | 0.117813 / 6.500664 (-6.382852) | 0.042372 / 0.075469 (-0.033097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010107 / 1.841788 (-0.831680) | 11.819910 / 8.074308 (3.745602) | 9.701673 / 10.191392 (-0.489719) | 0.178165 / 0.680424 (-0.502259) | 0.014438 / 0.534201 (-0.519763) | 0.297733 / 0.579283 (-0.281550) | 0.264914 / 0.434364 (-0.169450) | 0.324531 / 0.540337 (-0.215806) | 0.430207 / 1.386936 (-0.956729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005848 / 0.011353 (-0.005505) | 0.003870 / 0.011008 (-0.007138) | 0.050379 / 0.038508 (0.011871) | 0.031238 / 0.023109 (0.008129) | 0.276839 / 0.275898 (0.000941) | 0.299488 / 0.323480 (-0.023992) | 0.005143 / 0.007986 (-0.002842) | 0.002725 / 0.004328 (-0.001604) | 0.048184 / 0.004250 (0.043934) | 0.046232 / 0.037052 (0.009180) | 0.287058 / 0.258489 (0.028569) | 0.322659 / 0.293841 (0.028818) | 0.047598 / 0.128546 (-0.080949) | 0.011116 / 0.075646 (-0.064530) | 0.058252 / 0.419271 (-0.361019) | 0.033404 / 0.043533 (-0.010128) | 0.277650 / 0.255139 (0.022511) | 0.295610 / 0.283200 (0.012410) | 0.018124 / 0.141683 (-0.123559) | 1.135052 / 1.452155 (-0.317103) | 1.194261 / 1.492716 (-0.298456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095595 / 0.018006 (0.077588) | 0.306408 / 0.000490 (0.305918) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022027 / 0.037411 (-0.015385) | 0.076224 / 0.014526 (0.061698) | 0.087441 / 0.176557 (-0.089116) | 0.126636 / 0.737135 (-0.610499) | 0.089442 / 0.296338 (-0.206896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291315 / 0.215209 (0.076106) | 2.835304 / 2.077655 (0.757650) | 1.581102 / 1.504120 (0.076982) | 1.463046 / 1.541195 (-0.078149) | 1.481982 / 1.468490 (0.013492) | 0.559989 / 4.584777 (-4.024788) | 2.385262 / 3.745712 (-1.360450) | 2.773478 / 5.269862 (-2.496383) | 1.744427 / 4.565676 (-2.821249) | 0.062687 / 0.424275 (-0.361589) | 0.005149 / 0.007607 (-0.002458) | 0.374600 / 0.226044 (0.148555) | 3.376507 / 2.268929 (1.107579) | 1.935290 / 55.444624 (-53.509334) | 1.663227 / 6.876477 (-5.213250) | 1.678987 / 2.142072 (-0.463085) | 0.638970 / 4.805227 (-4.166258) | 0.120000 / 6.500664 (-6.380664) | 0.040862 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008795 / 1.841788 (-0.832993) | 12.275084 / 8.074308 (4.200776) | 10.340088 / 10.191392 (0.148696) | 0.136454 / 0.680424 (-0.543970) | 0.014404 / 0.534201 (-0.519797) | 0.289478 / 0.579283 (-0.289805) | 0.279243 / 0.434364 (-0.155121) | 0.330992 / 0.540337 (-0.209346) | 0.422043 / 1.386936 (-0.964893) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70633576ecf1f3f5e5cdfd8c9189246b3604f4b6 \"CML watermark\")\n"
] | 2024-02-09T09:25:01 | 2024-02-09T10:11:48 | 2024-02-09T10:05:35 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6652",
"html_url": "https://github.com/huggingface/datasets/pull/6652",
"diff_url": "https://github.com/huggingface/datasets/pull/6652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6652.patch",
"merged_at": "2024-02-09T10:05:35"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6652/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6651/comments | https://api.github.com/repos/huggingface/datasets/issues/6651/events | https://github.com/huggingface/datasets/issues/6651 | 2,126,649,626 | I_kwDODunzps5-whka | 6,651 | Slice splits support for datasets.load_from_disk | {
"login": "mhorlacher",
"id": 37439882,
"node_id": "MDQ6VXNlcjM3NDM5ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/37439882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhorlacher",
"html_url": "https://github.com/mhorlacher",
"followers_url": "https://api.github.com/users/mhorlacher/followers",
"following_url": "https://api.github.com/users/mhorlacher/following{/other_user}",
"gists_url": "https://api.github.com/users/mhorlacher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhorlacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhorlacher/subscriptions",
"organizations_url": "https://api.github.com/users/mhorlacher/orgs",
"repos_url": "https://api.github.com/users/mhorlacher/repos",
"events_url": "https://api.github.com/users/mhorlacher/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhorlacher/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2024-02-09T08:00:21 | 2024-02-09T08:00:21 | null | NONE | null | null | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`. See https://www.nature.com/articles/s41551-023-01093-3.
### Motivation
Slice splits are convienient in a numer of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of load_from_disk and load_dataset.
### Your contribution
Sure, if the devs think the feature request is sensible. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6651/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6650/comments | https://api.github.com/repos/huggingface/datasets/issues/6650/events | https://github.com/huggingface/datasets/issues/6650 | 2,125,680,991 | I_kwDODunzps5-s1Ff | 6,650 | AttributeError: 'InMemoryTable' object has no attribute '_batches' | {
"login": "matsuobasho",
"id": 13874772,
"node_id": "MDQ6VXNlcjEzODc0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matsuobasho",
"html_url": "https://github.com/matsuobasho",
"followers_url": "https://api.github.com/users/matsuobasho/followers",
"following_url": "https://api.github.com/users/matsuobasho/following{/other_user}",
"gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions",
"organizations_url": "https://api.github.com/users/matsuobasho/orgs",
"repos_url": "https://api.github.com/users/matsuobasho/repos",
"events_url": "https://api.github.com/users/matsuobasho/events{/privacy}",
"received_events_url": "https://api.github.com/users/matsuobasho/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```",
"No, it doesn't, it runs fine. But what's really strange is that the error just went away after I reran the data prep script for conversion from csv to a datasets object. I realize that's not very helpful since the problem isn't reproducible. "
] | 2024-02-08T17:11:26 | 2024-02-12T21:13:35 | null | NONE | null | null | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map
{
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp>
k: dataset.map(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single
arrow_formatted_shard = shard.with_format("arrow")
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format
dataset = copy.deepcopy(self)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__
memo[id(self._batches)] = list(self._batches)
AttributeError: 'InMemoryTable' object has no attribute '_batches'
```
### Steps to reproduce the bug
I'm running an MLOps flow using AzureML.
The error appears when I run the following function in my training script:
```python
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
seq_length),
batched=True,
batch_size=batch_size,
remove_columns=['col1', 'col2'])
```
```python
def tokenize_function(tok, seq_length, example)
# Pad so that each batch has the same sequence length
inp = tok(example['col1'], padding=True, truncation=True)
outp = tok(example['col2'], padding="max_length", max_length=seq_length)
res = {
'input_ids': inp['input_ids'],
'attention_mask': inp['attention_mask'],
'decoder_input_ids': outp['input_ids'],
'labels': outp['input_ids'],
'decoder_attention_mask': outp['attention_mask']
}
return res
```
### Expected behavior
Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then but it doesn't appear that datasets versions have changed since Dec. '23.
### Environment info
datasets 2.16.1
transformers 4.35.2
pyarrow 15.0.0
pyarrow-hotfix 0.6
torch 2.0.1
I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6650/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6649/comments | https://api.github.com/repos/huggingface/datasets/issues/6649/events | https://github.com/huggingface/datasets/pull/6649 | 2,124,940,213 | PR_kwDODunzps5mXRo8 | 6,649 | Minor multi gpu doc improvement | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6649). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005197 / 0.011353 (-0.006156) | 0.003469 / 0.011008 (-0.007539) | 0.062306 / 0.038508 (0.023798) | 0.028417 / 0.023109 (0.005308) | 0.241147 / 0.275898 (-0.034751) | 0.270910 / 0.323480 (-0.052569) | 0.003053 / 0.007986 (-0.004933) | 0.003343 / 0.004328 (-0.000985) | 0.048044 / 0.004250 (0.043794) | 0.043738 / 0.037052 (0.006686) | 0.259274 / 0.258489 (0.000785) | 0.282522 / 0.293841 (-0.011319) | 0.027807 / 0.128546 (-0.100739) | 0.010413 / 0.075646 (-0.065234) | 0.206322 / 0.419271 (-0.212950) | 0.035770 / 0.043533 (-0.007763) | 0.243465 / 0.255139 (-0.011674) | 0.261596 / 0.283200 (-0.021604) | 0.018613 / 0.141683 (-0.123070) | 1.115509 / 1.452155 (-0.336645) | 1.189403 / 1.492716 (-0.303314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.086075 / 0.018006 (0.068069) | 0.296140 / 0.000490 (0.295650) | 0.000198 / 0.000200 (-0.000002) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018238 / 0.037411 (-0.019173) | 0.061783 / 0.014526 (0.047257) | 0.072014 / 0.176557 (-0.104543) | 0.118746 / 0.737135 (-0.618389) | 0.073279 / 0.296338 (-0.223060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278281 / 0.215209 (0.063072) | 2.772209 / 2.077655 (0.694555) | 1.404503 / 1.504120 (-0.099617) | 1.274753 / 1.541195 (-0.266441) | 1.304394 / 1.468490 (-0.164096) | 0.556903 / 4.584777 (-4.027874) | 2.335428 / 3.745712 (-1.410284) | 2.712255 / 5.269862 (-2.557606) | 1.722252 / 4.565676 (-2.843425) | 0.061268 / 0.424275 (-0.363007) | 0.005029 / 0.007607 (-0.002578) | 0.326112 / 0.226044 (0.100067) | 3.207917 / 2.268929 (0.938988) | 1.743513 / 55.444624 (-53.701111) | 1.476418 / 6.876477 (-5.400059) | 1.489776 / 2.142072 (-0.652297) | 0.628181 / 4.805227 (-4.177046) | 0.115959 / 6.500664 (-6.384706) | 0.041854 / 0.075469 (-0.033615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969039 / 1.841788 (-0.872749) | 11.178646 / 8.074308 (3.104338) | 9.639716 / 10.191392 (-0.551676) | 0.139750 / 0.680424 (-0.540674) | 0.014230 / 0.534201 (-0.519971) | 0.285318 / 0.579283 (-0.293965) | 0.260788 / 0.434364 (-0.173576) | 0.324183 / 0.540337 (-0.216154) | 0.416326 / 1.386936 (-0.970610) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005149 / 0.011353 (-0.006204) | 0.003469 / 0.011008 (-0.007539) | 0.049761 / 0.038508 (0.011253) | 0.030723 / 0.023109 (0.007614) | 0.271562 / 0.275898 (-0.004336) | 0.297843 / 0.323480 (-0.025637) | 0.004296 / 0.007986 (-0.003690) | 0.002704 / 0.004328 (-0.001624) | 0.048890 / 0.004250 (0.044640) | 0.044776 / 0.037052 (0.007723) | 0.285490 / 0.258489 (0.027001) | 0.312888 / 0.293841 (0.019047) | 0.046239 / 0.128546 (-0.082307) | 0.010238 / 0.075646 (-0.065408) | 0.057968 / 0.419271 (-0.361304) | 0.033295 / 0.043533 (-0.010238) | 0.274320 / 0.255139 (0.019181) | 0.296199 / 0.283200 (0.012999) | 0.017856 / 0.141683 (-0.123827) | 1.147532 / 1.452155 (-0.304622) | 1.211647 / 1.492716 (-0.281070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089655 / 0.018006 (0.071649) | 0.297275 / 0.000490 (0.296785) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.075041 / 0.014526 (0.060515) | 0.085754 / 0.176557 (-0.090802) | 0.124512 / 0.737135 (-0.612623) | 0.086926 / 0.296338 (-0.209412) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290306 / 0.215209 (0.075097) | 2.847404 / 2.077655 (0.769749) | 1.606175 / 1.504120 (0.102055) | 1.483220 / 1.541195 (-0.057974) | 1.514551 / 1.468490 (0.046061) | 0.559332 / 4.584777 (-4.025445) | 2.403089 / 3.745712 (-1.342624) | 2.715179 / 5.269862 (-2.554683) | 1.688340 / 4.565676 (-2.877337) | 0.062057 / 0.424275 (-0.362218) | 0.004955 / 0.007607 (-0.002652) | 0.338909 / 0.226044 (0.112865) | 3.356882 / 2.268929 (1.087954) | 1.942259 / 55.444624 (-53.502366) | 1.675195 / 6.876477 (-5.201282) | 1.688158 / 2.142072 (-0.453914) | 0.637270 / 4.805227 (-4.167957) | 0.114314 / 6.500664 (-6.386350) | 0.040677 / 0.075469 (-0.034792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022126 / 1.841788 (-0.819661) | 11.783359 / 8.074308 (3.709051) | 10.247652 / 10.191392 (0.056260) | 0.138188 / 0.680424 (-0.542236) | 0.014850 / 0.534201 (-0.519351) | 0.287414 / 0.579283 (-0.291869) | 0.274393 / 0.434364 (-0.159971) | 0.327255 / 0.540337 (-0.213082) | 0.416355 / 1.386936 (-0.970581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#727a952367966a98b759d54f333b1e2c28cfd4d4 \"CML watermark\")\n"
] | 2024-02-08T11:17:24 | 2024-02-08T11:23:35 | 2024-02-08T11:17:35 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6649",
"html_url": "https://github.com/huggingface/datasets/pull/6649",
"diff_url": "https://github.com/huggingface/datasets/pull/6649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6649.patch",
"merged_at": "2024-02-08T11:17:35"
} | just added torch.no_grad and eval() | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6649/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6648/comments | https://api.github.com/repos/huggingface/datasets/issues/6648/events | https://github.com/huggingface/datasets/pull/6648 | 2,124,813,589 | PR_kwDODunzps5mW1MA | 6,648 | Document usage of hfh cli instead of git | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004951 / 0.011353 (-0.006402) | 0.003187 / 0.011008 (-0.007821) | 0.062959 / 0.038508 (0.024451) | 0.028037 / 0.023109 (0.004928) | 0.241374 / 0.275898 (-0.034524) | 0.262792 / 0.323480 (-0.060688) | 0.004132 / 0.007986 (-0.003854) | 0.002766 / 0.004328 (-0.001563) | 0.051416 / 0.004250 (0.047165) | 0.040957 / 0.037052 (0.003904) | 0.260760 / 0.258489 (0.002271) | 0.282018 / 0.293841 (-0.011823) | 0.027689 / 0.128546 (-0.100857) | 0.010433 / 0.075646 (-0.065214) | 0.211598 / 0.419271 (-0.207674) | 0.035447 / 0.043533 (-0.008086) | 0.244333 / 0.255139 (-0.010806) | 0.263192 / 0.283200 (-0.020008) | 0.016816 / 0.141683 (-0.124867) | 1.103188 / 1.452155 (-0.348967) | 1.179093 / 1.492716 (-0.313623) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092412 / 0.018006 (0.074406) | 0.301226 / 0.000490 (0.300736) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018146 / 0.037411 (-0.019265) | 0.061447 / 0.014526 (0.046921) | 0.072162 / 0.176557 (-0.104394) | 0.118965 / 0.737135 (-0.618170) | 0.073756 / 0.296338 (-0.222583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285361 / 0.215209 (0.070152) | 2.776928 / 2.077655 (0.699273) | 1.506859 / 1.504120 (0.002739) | 1.379119 / 1.541195 (-0.162075) | 1.401798 / 1.468490 (-0.066692) | 0.572512 / 4.584777 (-4.012265) | 2.403793 / 3.745712 (-1.341919) | 2.740496 / 5.269862 (-2.529366) | 1.714611 / 4.565676 (-2.851065) | 0.063496 / 0.424275 (-0.360780) | 0.005009 / 0.007607 (-0.002598) | 0.342438 / 0.226044 (0.116393) | 3.368129 / 2.268929 (1.099200) | 1.831200 / 55.444624 (-53.613424) | 1.553611 / 6.876477 (-5.322866) | 1.578116 / 2.142072 (-0.563956) | 0.653034 / 4.805227 (-4.152193) | 0.117724 / 6.500664 (-6.382940) | 0.041188 / 0.075469 (-0.034282) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972520 / 1.841788 (-0.869268) | 11.186297 / 8.074308 (3.111989) | 9.485829 / 10.191392 (-0.705563) | 0.139715 / 0.680424 (-0.540708) | 0.013705 / 0.534201 (-0.520496) | 0.287384 / 0.579283 (-0.291899) | 0.266784 / 0.434364 (-0.167580) | 0.320789 / 0.540337 (-0.219548) | 0.417484 / 1.386936 (-0.969452) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005570 / 0.011353 (-0.005783) | 0.003416 / 0.011008 (-0.007592) | 0.051160 / 0.038508 (0.012652) | 0.031082 / 0.023109 (0.007973) | 0.279336 / 0.275898 (0.003438) | 0.300529 / 0.323480 (-0.022951) | 0.004320 / 0.007986 (-0.003666) | 0.002781 / 0.004328 (-0.001548) | 0.049642 / 0.004250 (0.045391) | 0.044379 / 0.037052 (0.007327) | 0.293797 / 0.258489 (0.035308) | 0.317844 / 0.293841 (0.024003) | 0.049697 / 0.128546 (-0.078849) | 0.010624 / 0.075646 (-0.065023) | 0.058834 / 0.419271 (-0.360437) | 0.033869 / 0.043533 (-0.009664) | 0.280547 / 0.255139 (0.025408) | 0.300685 / 0.283200 (0.017486) | 0.017010 / 0.141683 (-0.124673) | 1.172277 / 1.452155 (-0.279878) | 1.205359 / 1.492716 (-0.287358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092914 / 0.018006 (0.074907) | 0.303561 / 0.000490 (0.303071) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.075460 / 0.014526 (0.060934) | 0.085795 / 0.176557 (-0.090762) | 0.124776 / 0.737135 (-0.612360) | 0.088260 / 0.296338 (-0.208079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302873 / 0.215209 (0.087664) | 2.936173 / 2.077655 (0.858519) | 1.589251 / 1.504120 (0.085131) | 1.477552 / 1.541195 (-0.063643) | 1.479322 / 1.468490 (0.010832) | 0.570481 / 4.584777 (-4.014296) | 2.434137 / 3.745712 (-1.311575) | 2.774012 / 5.269862 (-2.495849) | 1.718103 / 4.565676 (-2.847574) | 0.061951 / 0.424275 (-0.362324) | 0.004992 / 0.007607 (-0.002615) | 0.352250 / 0.226044 (0.126205) | 3.457417 / 2.268929 (1.188488) | 1.934587 / 55.444624 (-53.510037) | 1.646904 / 6.876477 (-5.229573) | 1.669429 / 2.142072 (-0.472643) | 0.649665 / 4.805227 (-4.155562) | 0.116630 / 6.500664 (-6.384034) | 0.040669 / 0.075469 (-0.034800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011488 / 1.841788 (-0.830300) | 11.866394 / 8.074308 (3.792086) | 10.144588 / 10.191392 (-0.046804) | 0.129931 / 0.680424 (-0.550493) | 0.014885 / 0.534201 (-0.519316) | 0.287463 / 0.579283 (-0.291821) | 0.280754 / 0.434364 (-0.153610) | 0.330139 / 0.540337 (-0.210199) | 0.414653 / 1.386936 (-0.972283) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#585275b8deaebd1bdcbd3725fa63172395791c73 \"CML watermark\")\n"
] | 2024-02-08T10:24:56 | 2024-02-08T13:57:41 | 2024-02-08T13:51:39 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6648",
"html_url": "https://github.com/huggingface/datasets/pull/6648",
"diff_url": "https://github.com/huggingface/datasets/pull/6648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6648.patch",
"merged_at": "2024-02-08T13:51:39"
} | (basically the same content as the hfh upload docs, but adapted for datasets) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6648/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6647/comments | https://api.github.com/repos/huggingface/datasets/issues/6647/events | https://github.com/huggingface/datasets/pull/6647 | 2,123,397,569 | PR_kwDODunzps5mSB2B | 6,647 | Update loading.mdx to include "jsonl" file loading. | {
"login": "mosheber",
"id": 22236370,
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mosheber",
"html_url": "https://github.com/mosheber",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"repos_url": "https://api.github.com/users/mosheber/repos",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it just below, where we present the JSON-Lines example.\r\n> \r\n> * Maybe adding that this format is called JSON-Lines\r\n> * Add the example after the JSON-Lines data example\r\n> \r\n> https://github.com/huggingface/datasets/blob/14d9afbb7ae1b787c450261ca0ff374551993031/docs/source/loading.mdx#L135-L138\r\n\r\nThank you @albertvillanova for the feedback! I moved the jsonl file loading example to a more appropriate location. "
] | 2024-02-07T16:18:08 | 2024-02-08T15:34:17 | null | NONE | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6647",
"html_url": "https://github.com/huggingface/datasets/pull/6647",
"diff_url": "https://github.com/huggingface/datasets/pull/6647.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6647.patch",
"merged_at": null
} | * A small update to the documentation, noting the ability to load jsonl files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6647/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6646/comments | https://api.github.com/repos/huggingface/datasets/issues/6646/events | https://github.com/huggingface/datasets/pull/6646 | 2,123,134,128 | PR_kwDODunzps5mRIma | 6,646 | Better multi-gpu example | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6646). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005598 / 0.011353 (-0.005755) | 0.003640 / 0.011008 (-0.007369) | 0.064557 / 0.038508 (0.026049) | 0.029645 / 0.023109 (0.006536) | 0.243695 / 0.275898 (-0.032203) | 0.261252 / 0.323480 (-0.062228) | 0.004067 / 0.007986 (-0.003919) | 0.002883 / 0.004328 (-0.001446) | 0.049192 / 0.004250 (0.044942) | 0.045299 / 0.037052 (0.008246) | 0.273207 / 0.258489 (0.014718) | 0.288668 / 0.293841 (-0.005173) | 0.028114 / 0.128546 (-0.100432) | 0.010597 / 0.075646 (-0.065049) | 0.215345 / 0.419271 (-0.203927) | 0.036119 / 0.043533 (-0.007414) | 0.243718 / 0.255139 (-0.011421) | 0.266657 / 0.283200 (-0.016543) | 0.018176 / 0.141683 (-0.123507) | 1.127926 / 1.452155 (-0.324229) | 1.168066 / 1.492716 (-0.324650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096001 / 0.018006 (0.077994) | 0.304317 / 0.000490 (0.303828) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018241 / 0.037411 (-0.019170) | 0.061505 / 0.014526 (0.046979) | 0.072456 / 0.176557 (-0.104101) | 0.118315 / 0.737135 (-0.618821) | 0.075154 / 0.296338 (-0.221184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278748 / 0.215209 (0.063538) | 2.729923 / 2.077655 (0.652268) | 1.416835 / 1.504120 (-0.087285) | 1.294016 / 1.541195 (-0.247179) | 1.323249 / 1.468490 (-0.145241) | 0.575389 / 4.584777 (-4.009388) | 2.404923 / 3.745712 (-1.340789) | 2.769233 / 5.269862 (-2.500629) | 1.742340 / 4.565676 (-2.823336) | 0.062664 / 0.424275 (-0.361611) | 0.004951 / 0.007607 (-0.002656) | 0.335024 / 0.226044 (0.108979) | 3.291446 / 2.268929 (1.022518) | 1.797095 / 55.444624 (-53.647530) | 1.532963 / 6.876477 (-5.343513) | 1.529315 / 2.142072 (-0.612758) | 0.654922 / 4.805227 (-4.150305) | 0.118772 / 6.500664 (-6.381892) | 0.042034 / 0.075469 (-0.033435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983646 / 1.841788 (-0.858141) | 11.518625 / 8.074308 (3.444317) | 9.538781 / 10.191392 (-0.652611) | 0.140300 / 0.680424 (-0.540124) | 0.013966 / 0.534201 (-0.520235) | 0.287071 / 0.579283 (-0.292212) | 0.270201 / 0.434364 (-0.164163) | 0.323294 / 0.540337 (-0.217044) | 0.418130 / 1.386936 (-0.968806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005508 / 0.011353 (-0.005844) | 0.003714 / 0.011008 (-0.007294) | 0.050031 / 0.038508 (0.011523) | 0.031866 / 0.023109 (0.008756) | 0.272248 / 0.275898 (-0.003650) | 0.295105 / 0.323480 (-0.028375) | 0.005179 / 0.007986 (-0.002807) | 0.002820 / 0.004328 (-0.001508) | 0.048896 / 0.004250 (0.044646) | 0.045975 / 0.037052 (0.008922) | 0.287662 / 0.258489 (0.029173) | 0.321139 / 0.293841 (0.027298) | 0.049242 / 0.128546 (-0.079304) | 0.010732 / 0.075646 (-0.064914) | 0.057943 / 0.419271 (-0.361328) | 0.033527 / 0.043533 (-0.010006) | 0.271746 / 0.255139 (0.016607) | 0.291404 / 0.283200 (0.008204) | 0.019351 / 0.141683 (-0.122332) | 1.157221 / 1.452155 (-0.294934) | 1.215757 / 1.492716 (-0.276959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096950 / 0.018006 (0.078944) | 0.312002 / 0.000490 (0.311512) | 0.000223 / 0.000200 (0.000023) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022288 / 0.037411 (-0.015123) | 0.075282 / 0.014526 (0.060756) | 0.087445 / 0.176557 (-0.089112) | 0.125617 / 0.737135 (-0.611519) | 0.088878 / 0.296338 (-0.207460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291961 / 0.215209 (0.076752) | 2.881445 / 2.077655 (0.803790) | 1.586128 / 1.504120 (0.082008) | 1.458636 / 1.541195 (-0.082558) | 1.487001 / 1.468490 (0.018511) | 0.575466 / 4.584777 (-4.009311) | 2.454941 / 3.745712 (-1.290771) | 2.878077 / 5.269862 (-2.391785) | 1.787215 / 4.565676 (-2.778462) | 0.064010 / 0.424275 (-0.360265) | 0.005092 / 0.007607 (-0.002516) | 0.360500 / 0.226044 (0.134455) | 3.465574 / 2.268929 (1.196646) | 1.957516 / 55.444624 (-53.487108) | 1.666282 / 6.876477 (-5.210195) | 1.690070 / 2.142072 (-0.452002) | 0.661323 / 4.805227 (-4.143905) | 0.117824 / 6.500664 (-6.382840) | 0.042286 / 0.075469 (-0.033183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026517 / 1.841788 (-0.815270) | 12.083347 / 8.074308 (4.009039) | 10.269319 / 10.191392 (0.077927) | 0.139253 / 0.680424 (-0.541171) | 0.016258 / 0.534201 (-0.517943) | 0.290583 / 0.579283 (-0.288700) | 0.284338 / 0.434364 (-0.150026) | 0.335865 / 0.540337 (-0.204473) | 0.416600 / 1.386936 (-0.970336) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba3cfad91e9366cda0ba203700fc745d8bcd1f17 \"CML watermark\")\n",
"Thanks, I was needing this example today <3 "
] | 2024-02-07T14:15:01 | 2024-02-09T17:43:32 | 2024-02-07T14:59:11 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6646",
"html_url": "https://github.com/huggingface/datasets/pull/6646",
"diff_url": "https://github.com/huggingface/datasets/pull/6646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6646.patch",
"merged_at": "2024-02-07T14:59:11"
} | Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU
the previous example was using a model for translation and the way it was setup was not really the right way to use the model. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6646/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6645/comments | https://api.github.com/repos/huggingface/datasets/issues/6645/events | https://github.com/huggingface/datasets/issues/6645 | 2,122,956,818 | I_kwDODunzps5-icAS | 6,645 | Support fsspec 2024.2 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2024-02-07T12:45:29 | 2024-02-07T12:46:05 | null | MEMBER | null | null | Support fsspec 2024.2.
First, we should address:
- #6644 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6645/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6644/comments | https://api.github.com/repos/huggingface/datasets/issues/6644/events | https://github.com/huggingface/datasets/issues/6644 | 2,122,955,282 | I_kwDODunzps5-iboS | 6,644 | Support fsspec 2023.12 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2024-02-07T12:44:39 | 2024-02-07T12:45:19 | null | MEMBER | null | null | Support fsspec 2023.12 by handling previous and new glob behavior. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6644/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6643/comments | https://api.github.com/repos/huggingface/datasets/issues/6643/events | https://github.com/huggingface/datasets/issues/6643 | 2,121,239,039 | I_kwDODunzps5-b4n_ | 6,643 | Faiss GPU index cannot be serialised when passed to trainer | {
"login": "rubenweitzman",
"id": 56388976,
"node_id": "MDQ6VXNlcjU2Mzg4OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/56388976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rubenweitzman",
"html_url": "https://github.com/rubenweitzman",
"followers_url": "https://api.github.com/users/rubenweitzman/followers",
"following_url": "https://api.github.com/users/rubenweitzman/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenweitzman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rubenweitzman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenweitzman/subscriptions",
"organizations_url": "https://api.github.com/users/rubenweitzman/orgs",
"repos_url": "https://api.github.com/users/rubenweitzman/repos",
"events_url": "https://api.github.com/users/rubenweitzman/events{/privacy}",
"received_events_url": "https://api.github.com/users/rubenweitzman/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)",
"Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove the faiss index, as I would want to use it to create batches of retrieved samples from the dataset. \r\nThanks in advance for your help!"
] | 2024-02-06T16:41:00 | 2024-02-09T18:40:14 | null | NONE | null | null | ### Describe the bug
I am working on a retrieval project and encountering I have encountered two issues in the hugging face faiss integration:
1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error:
```
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in _inner_training_loop
train_dataloader = self.get_train_dataloader()
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 831, in get_train_dataloader
train_dataset = self._remove_unused_columns(train_dataset, description="training")
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 725, in _remove_unused_columns
return dataset.remove_columns(ignored_columns)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/fingerprint.py", line 481, in wrapper
out = func(dataset, *args, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2146, in remove_columns
dataset = copy.deepcopy(self)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 161, in deepcopy
rv = reductor(4)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 556, in index_getstate
return {"this": serialize_index(self).tobytes()}
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 1607, in serialize_index
write_index(index, writer)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/swigfaiss.py", line 9843, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /project/faiss/faiss/impl/index_write.cpp:590: don't know how to serialize this type of index
```
The index was created with the add_faiss_index method
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
```
2. Athough faiss is written to be compatible on the gpu for searching [https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU) I am getting error when trying to use the hugggingface code to do the search on gpu. This seems to be caused by this line https://github.com/huggingface/datasets/blob/f9975f636542df7f95c27065ea93147440d690b7/src/datasets/search.py#L376 producing error
```
total_scores, total_examples = self.dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 773, in get_nearest_examples_batch
total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 727, in search_batch
return self._indexes[index_name].search_batch(queries, k, **kwargs)
File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 376, in search_batch
if not queries.flags.c_contiguous:
AttributeError: 'Tensor' object has no attribute 'flags'
```
### Steps to reproduce the bug
```
train_dataset.add_faiss_index(
column='embeddings',
index_name='embeddings',
string_factory=faiss_index_string,
train_size=config.faiss_train_size,
device=0, # Use -1 for CPU, or specify GPU device ID
faiss_verbose=True
)
Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=data_collator,
tokenizer=tokenizer
)
train_dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k)
```
### Expected behavior
I would expect the faiss database code to be gpu compatible
### Environment info
huggingface Version: 2.16.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6643/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6642/comments | https://api.github.com/repos/huggingface/datasets/issues/6642/events | https://github.com/huggingface/datasets/issues/6642 | 2,119,085,766 | I_kwDODunzps5-Tq7G | 6,642 | Differently dataset object saved than it is loaded. | {
"login": "MFajcik",
"id": 31218150,
"node_id": "MDQ6VXNlcjMxMjE4MTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/31218150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFajcik",
"html_url": "https://github.com/MFajcik",
"followers_url": "https://api.github.com/users/MFajcik/followers",
"following_url": "https://api.github.com/users/MFajcik/following{/other_user}",
"gists_url": "https://api.github.com/users/MFajcik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFajcik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFajcik/subscriptions",
"organizations_url": "https://api.github.com/users/MFajcik/orgs",
"repos_url": "https://api.github.com/users/MFajcik/repos",
"events_url": "https://api.github.com/users/MFajcik/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFajcik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` compatible dataset in a following way. I created a directory, and just copied jsonl there as `train.jsonl/test.jsonl`.\r\n```python\r\noutput_folder = os.path.join(args.output_folder, f\"{task_meta_type}_{task_type}\")\r\nos.makedirs(output_folder, exist_ok=True)\r\nfile = f\"{task_meta_type}_{task_type}_train.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"train.jsonl\"))\r\n# now test\r\nfile = f\"{task_meta_type}_{task_type}_test.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"test.jsonl\"))\r\n```\r\n",
"Hi @MFajcik, \r\n\r\nYou can find information about save_to_disk/load_from_disk in our docs:\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/process#save\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.save_to_disk\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.load_from_disk"
] | 2024-02-05T17:28:57 | 2024-02-06T09:50:19 | 2024-02-06T09:50:19 | NONE | null | null | ### Describe the bug
Differently sized object is saved than it is loaded.
### Steps to reproduce the bug
Hi, I save dataset in a following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"),
"test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")})
print(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
```
this yields output
```
.data/hf_dataset/propaganda_zanr
Length of train dataset: 7642
Length of test dataset: 1000
```
Everything looks fine.
Then I load the dataset
```python
from datasets import load_dataset
dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_dataset(dataset_path)
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```
this prints
```
Generating train split: 1 examples [00:00, 72.10 examples/s]
Generating test split: 1 examples [00:00, 100.69 examples/s]
Length of train dataset: 1
Length of test dataset: 1
```
I dont' understand :(
### Expected behavior
same object is loaded
### Environment info
datasets==2.16.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6642/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6641/comments | https://api.github.com/repos/huggingface/datasets/issues/6641/events | https://github.com/huggingface/datasets/issues/6641 | 2,116,963,132 | I_kwDODunzps5-Lks8 | 6,641 | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | {
"login": "Hughhuh",
"id": 109789057,
"node_id": "U_kgDOBos_gQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109789057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hughhuh",
"html_url": "https://github.com/Hughhuh",
"followers_url": "https://api.github.com/users/Hughhuh/followers",
"following_url": "https://api.github.com/users/Hughhuh/following{/other_user}",
"gists_url": "https://api.github.com/users/Hughhuh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hughhuh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hughhuh/subscriptions",
"organizations_url": "https://api.github.com/users/Hughhuh/orgs",
"repos_url": "https://api.github.com/users/Hughhuh/repos",
"events_url": "https://api.github.com/users/Hughhuh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hughhuh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the information you provided, it seems an issue with the specific \"samsum\" dataset. I'm transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/samsum/discussions/5"
] | 2024-02-04T08:49:31 | 2024-02-06T09:26:07 | 2024-02-06T09:11:45 | NONE | null | null | ### Describe the bug
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
Resolving data files: 100%
159/159 [00:00<00:00, 9909.28it/s]
Using custom data configuration samsum-0b1209637541c9e6
Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%
3/3 [00:00<00:00, 119.99it/s]
Extracting data files: 100%
3/3 [00:00<00:00, 9.54it/s]
Generating train split:
88392/0 [00:15<00:00, 86848.17 examples/s]
Generating test split:
0/0 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files)
131 try:
--> 132 pa_table = paj.read_json(
133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
134 )
135 break
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: JSON parse error: Invalid value. in row 0
During handling of the above exception, another exception occurred:
UnicodeDecodeError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1818 _time = time.time()
-> 1819 for _, table in generator:
1820 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files)
152 with open(file, encoding="utf-8") as f:
--> 153 dataset = json.load(f)
154 except json.JSONDecodeError:
File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
277 a JSON document) to a Python object.
278
(...)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[81], line 5
1 from datasets import load_dataset
3 # Load dataset from the hub
4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data")
----> 5 dataset = load_dataset('json',"samsum")
6 #dataset = load_dataset("samsum")
7 print(f"Train dataset size: {len(dataset['train'])}")
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1757 # Download and prepare data
-> 1758 builder_instance.download_and_prepare(
1759 download_config=download_config,
1760 download_mode=download_mode,
1761 ignore_verifications=ignore_verifications,
1762 try_from_hf_gcs=try_from_hf_gcs,
1763 num_proc=num_proc,
1764 )
1766 # Build dataset for splits
1767 keep_in_memory = (
1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1769 )
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
858 if num_proc is not None:
859 prepare_split_kwargs["num_proc"] = num_proc
--> 860 self._download_and_prepare(
861 dl_manager=dl_manager,
862 verify_infos=verify_infos,
863 **prepare_split_kwargs,
864 **download_and_prepare_kwargs,
865 )
866 # Sync info
867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
949 split_dict.add(split_generator.split_info)
951 try:
952 # Prepare split will record examples associated to the split
--> 953 self._prepare_split(split_generator, **prepare_split_kwargs)
954 except OSError as e:
955 raise OSError(
956 "Cannot find data file. "
957 + (self.manual_download_instructions or "")
958 + "\nOriginal error:\n"
959 + str(e)
960 ) from None
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1706 gen_kwargs = split_generator.gen_kwargs
1707 job_id = 0
-> 1708 for job_id, done, content in self._prepare_split_single(
1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1710 ):
1711 if done:
1712 result = content
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1850 e = e.__context__
-> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
can't load dataset
### Environment info
dataset:samsum
system :win10
gpu:m40 24G | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6641/timeline | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/6640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6640/comments | https://api.github.com/repos/huggingface/datasets/issues/6640/events | https://github.com/huggingface/datasets/issues/6640 | 2,115,864,531 | I_kwDODunzps5-HYfT | 6,640 | Sign Language Support | {
"login": "Merterm",
"id": 6684795,
"node_id": "MDQ6VXNlcjY2ODQ3OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6684795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Merterm",
"html_url": "https://github.com/Merterm",
"followers_url": "https://api.github.com/users/Merterm/followers",
"following_url": "https://api.github.com/users/Merterm/following{/other_user}",
"gists_url": "https://api.github.com/users/Merterm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Merterm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Merterm/subscriptions",
"organizations_url": "https://api.github.com/users/Merterm/orgs",
"repos_url": "https://api.github.com/users/Merterm/repos",
"events_url": "https://api.github.com/users/Merterm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Merterm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2024-02-02T21:54:51 | 2024-02-02T21:54:51 | null | NONE | null | null | ### Feature request
Currently, there are only several Sign Language labels, I would like to propose adding all the Signed Languages as new labels which are described in this ISO standard: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for several signed languages. There are more signed languages in the world. Furthermore, some signed languages that have a lot of online data cannot be found because of this reason (for instance, German Sign Language, and there is no German Sign Language label on huggingface datasets even though there are a lot of readily available sign language datasets exist for German Sign Language, which are used very frequently in Sign Language Processing papers, and models.)
### Your contribution
I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6640/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6639/comments | https://api.github.com/repos/huggingface/datasets/issues/6639/events | https://github.com/huggingface/datasets/pull/6639 | 2,114,620,200 | PR_kwDODunzps5l0KPG | 6,639 | Run download_and_prepare if missing splits | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-02T10:36:49 | 2024-02-06T16:54:22 | null | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6639",
"html_url": "https://github.com/huggingface/datasets/pull/6639",
"diff_url": "https://github.com/huggingface/datasets/pull/6639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6639.patch",
"merged_at": null
} | A first step towards https://github.com/huggingface/datasets/issues/6529 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6639/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6638/comments | https://api.github.com/repos/huggingface/datasets/issues/6638/events | https://github.com/huggingface/datasets/issues/6638 | 2,113,329,257 | I_kwDODunzps599thp | 6,638 | Cannot download wmt16 dataset | {
"login": "vidyasiv",
"id": 81709031,
"node_id": "MDQ6VXNlcjgxNzA5MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/81709031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vidyasiv",
"html_url": "https://github.com/vidyasiv",
"followers_url": "https://api.github.com/users/vidyasiv/followers",
"following_url": "https://api.github.com/users/vidyasiv/following{/other_user}",
"gists_url": "https://api.github.com/users/vidyasiv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vidyasiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vidyasiv/subscriptions",
"organizations_url": "https://api.github.com/users/vidyasiv/orgs",
"repos_url": "https://api.github.com/users/vidyasiv/repos",
"events_url": "https://api.github.com/users/vidyasiv/events{/privacy}",
"received_events_url": "https://api.github.com/users/vidyasiv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\n```\r\n\r\nCould you explain which is the minimum version that fixes this?\r\nEdit: Looks like that's 2.16.0, will close out issue"
] | 2024-02-01T19:41:42 | 2024-02-01T20:07:29 | 2024-02-01T20:07:29 | NONE | null | null | ### Describe the bug
As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "test.py", line 2, in <module>
raw_datasets = load_dataset("wmt16","ro-en",split="train")
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2153, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1717, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1027, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/wmt_utils.py", line 754, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 565, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 428, in download
downloaded_path_or_paths = map_nested(
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 464, in map_nested
mapped = [
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 465, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 367, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 454, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 182, in cached_path
output_path = get_from_cache(
File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 596, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
```
### Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("wmt16","ro-en",split="train")
```
### Expected behavior
Expect the dataset to be downloaded/ at least a clean exit with error explaining dataset is missing and a suggestion for next steps
### Environment info
- `datasets` version: 2.14.7
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.17.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6638/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6637/comments | https://api.github.com/repos/huggingface/datasets/issues/6637/events | https://github.com/huggingface/datasets/issues/6637 | 2,113,025,975 | I_kwDODunzps598je3 | 6,637 | 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets | {
"login": "tobycrisford",
"id": 22883190,
"node_id": "MDQ6VXNlcjIyODgzMTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/22883190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tobycrisford",
"html_url": "https://github.com/tobycrisford",
"followers_url": "https://api.github.com/users/tobycrisford/followers",
"following_url": "https://api.github.com/users/tobycrisford/following{/other_user}",
"gists_url": "https://api.github.com/users/tobycrisford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tobycrisford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobycrisford/subscriptions",
"organizations_url": "https://api.github.com/users/tobycrisford/orgs",
"repos_url": "https://api.github.com/users/tobycrisford/repos",
"events_url": "https://api.github.com/users/tobycrisford/events{/privacy}",
"received_events_url": "https://api.github.com/users/tobycrisford/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `BufferShuffledExamplesIterable.iter_arrow()` (same as regular `BufferShuffledExamplesIterable.__iter__()` but yields Arrow tables)\r\n\r\nhttps://github.com/huggingface/datasets/blob/b7d854b7fd3e9a330e21b76ee8421d4a7ebb4a7a/src/datasets/iterable_dataset.py#L968-L974\r\n"
] | 2024-02-01T17:16:54 | 2024-02-05T10:43:47 | null | NONE | null | null | ### Describe the bug
If you:
1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset
2. Set the output format to torch tensors with .with_format('torch')
Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch formatting.
### Steps to reproduce the bug
```python
import datasets
import torch
from tqdm import tqdm
rand_a = torch.randn(3,224,224)
rand_b = torch.randn(3,224,224)
a = torch.stack([rand_a] * 1000)
b = torch.stack([rand_b] * 1000)
features = datasets.Features({"tensor": datasets.Array3D(shape=(3,224,224), dtype="float32")})
ds_a = datasets.Dataset.from_dict({"tensor": a}, features=features).to_iterable_dataset()
ds_b = datasets.Dataset.from_dict({"tensor": b}, features=features).to_iterable_dataset()
# Iterating through either dataset with torch formatting is really fast (2000it/s on my machine)
for example in tqdm(ds_a.with_format('torch')):
pass
# Iterating through either dataset shuffled is also pretty fast (100it/s on my machine)
for example in tqdm(ds_a.shuffle()):
pass
# Iterating through this interleaved dataset is pretty fast (200it/s on my machine)
ds_fast = datasets.interleave_datasets([ds_a, ds_b])
for example in tqdm(ds_fast):
pass
# Iterating through either dataset with torch formatting *after shuffling* is really slow... (<2it/s on my machine)
for example in tqdm(ds_a.shuffle().with_format('torch')):
pass
# Iterating through this torch formatted interleaved dataset is also really slow (<2it/s on my machine)...
ds_slow = datasets.interleave_datasets([ds_a, ds_b]).with_format('torch')
for example in tqdm(ds_slow):
pass
# Even doing this is way faster!! (70it/s on my machine)
for example in tqdm(ds_fast):
test = torch.tensor(example['tensor'])
```
### Expected behavior
Applying torch formatting to the interleaved dataset shouldn't increase the time taken to iterate through the dataset by very much, since even explicitly converting every example is over 70x faster than calling .with_format('torch').
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.38
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6637/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6637/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6636/comments | https://api.github.com/repos/huggingface/datasets/issues/6636/events | https://github.com/huggingface/datasets/pull/6636 | 2,110,781,097 | PR_kwDODunzps5lm4zI | 6,636 | Faster column validation and reordering | {
"login": "psmyth94",
"id": 11325244,
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psmyth94",
"html_url": "https://github.com/psmyth94",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://huggingface.co/docs/datasets/pr_6636). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @mariosasko, I made the changes. However, I did some tests with `map` and I still saw that it took ~3.5 minutes per batch on 6000 features when using `dataset.map(lambda x: x, batched=True)`. From the profile, the culprits were mainly with `ArrowWriter.write_batch` and `ArrowWriter._build_writer`. The slow down from `_build_writer` is due to updating existing features with the inferred ones. I don't think this can be optimized any further, but fortunately, I can avoid this by setting the `features` in `map`. On the other hand, `write_batch` selects cols based on intersection and difference between schema names and example keys using two for loops. The same exists in `ArrowWriter.write_examples_on_file`. Optimizing the column selection using set operations effectively brings it from 3.5 minutes per batch down to 6 seconds per batch. Can we add these changes along with this PR?\r\n\r\nEdit: Ah just realized you can avoid the issue with inferring features altogether when you set the format to arrow (or pandas).",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004990 / 0.011353 (-0.006363) | 0.003138 / 0.011008 (-0.007870) | 0.062368 / 0.038508 (0.023860) | 0.028634 / 0.023109 (0.005524) | 0.241297 / 0.275898 (-0.034601) | 0.264433 / 0.323480 (-0.059047) | 0.003133 / 0.007986 (-0.004852) | 0.003444 / 0.004328 (-0.000885) | 0.048522 / 0.004250 (0.044271) | 0.043700 / 0.037052 (0.006648) | 0.257054 / 0.258489 (-0.001435) | 0.277551 / 0.293841 (-0.016290) | 0.027132 / 0.128546 (-0.101414) | 0.010395 / 0.075646 (-0.065251) | 0.208003 / 0.419271 (-0.211269) | 0.035814 / 0.043533 (-0.007719) | 0.250098 / 0.255139 (-0.005041) | 0.266726 / 0.283200 (-0.016474) | 0.018424 / 0.141683 (-0.123259) | 1.129242 / 1.452155 (-0.322912) | 1.167674 / 1.492716 (-0.325042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091808 / 0.018006 (0.073802) | 0.298726 / 0.000490 (0.298236) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019119 / 0.037411 (-0.018292) | 0.061969 / 0.014526 (0.047443) | 0.073392 / 0.176557 (-0.103165) | 0.119460 / 0.737135 (-0.617675) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281435 / 0.215209 (0.066226) | 2.702094 / 2.077655 (0.624439) | 1.411541 / 1.504120 (-0.092579) | 1.284084 / 1.541195 (-0.257111) | 1.302638 / 1.468490 (-0.165852) | 0.562420 / 4.584777 (-4.022357) | 2.364890 / 3.745712 (-1.380822) | 2.744033 / 5.269862 (-2.525828) | 1.699000 / 4.565676 (-2.866677) | 0.062315 / 0.424275 (-0.361961) | 0.004982 / 0.007607 (-0.002625) | 0.334385 / 0.226044 (0.108341) | 3.203268 / 2.268929 (0.934339) | 1.766998 / 55.444624 (-53.677627) | 1.497164 / 6.876477 (-5.379313) | 1.509996 / 2.142072 (-0.632077) | 0.633014 / 4.805227 (-4.172213) | 0.115317 / 6.500664 (-6.385347) | 0.041120 / 0.075469 (-0.034349) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965877 / 1.841788 (-0.875911) | 11.219909 / 8.074308 (3.145601) | 9.333822 / 10.191392 (-0.857570) | 0.136482 / 0.680424 (-0.543941) | 0.013632 / 0.534201 (-0.520569) | 0.287251 / 0.579283 (-0.292032) | 0.262786 / 0.434364 (-0.171578) | 0.322893 / 0.540337 (-0.217444) | 0.418180 / 1.386936 (-0.968756) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005444 / 0.011353 (-0.005909) | 0.003147 / 0.011008 (-0.007862) | 0.049242 / 0.038508 (0.010734) | 0.030944 / 0.023109 (0.007834) | 0.281901 / 0.275898 (0.006003) | 0.303820 / 0.323480 (-0.019660) | 0.004326 / 0.007986 (-0.003659) | 0.002696 / 0.004328 (-0.001632) | 0.048306 / 0.004250 (0.044055) | 0.044145 / 0.037052 (0.007093) | 0.297253 / 0.258489 (0.038764) | 0.324062 / 0.293841 (0.030221) | 0.046724 / 0.128546 (-0.081823) | 0.010079 / 0.075646 (-0.065567) | 0.057635 / 0.419271 (-0.361636) | 0.033621 / 0.043533 (-0.009912) | 0.282303 / 0.255139 (0.027164) | 0.300761 / 0.283200 (0.017561) | 0.017116 / 0.141683 (-0.124567) | 1.156519 / 1.452155 (-0.295636) | 1.216087 / 1.492716 (-0.276630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093011 / 0.018006 (0.075005) | 0.301310 / 0.000490 (0.300820) | 0.000223 / 0.000200 (0.000023) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023112 / 0.037411 (-0.014299) | 0.075192 / 0.014526 (0.060666) | 0.086213 / 0.176557 (-0.090343) | 0.125853 / 0.737135 (-0.611282) | 0.087754 / 0.296338 (-0.208585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301095 / 0.215209 (0.085886) | 2.911769 / 2.077655 (0.834114) | 1.614708 / 1.504120 (0.110588) | 1.494497 / 1.541195 (-0.046698) | 1.506978 / 1.468490 (0.038488) | 0.572743 / 4.584777 (-4.012034) | 2.417142 / 3.745712 (-1.328570) | 2.755338 / 5.269862 (-2.514523) | 1.711026 / 4.565676 (-2.854650) | 0.062732 / 0.424275 (-0.361543) | 0.005031 / 0.007607 (-0.002576) | 0.352343 / 0.226044 (0.126298) | 3.465183 / 2.268929 (1.196255) | 1.958795 / 55.444624 (-53.485829) | 1.682239 / 6.876477 (-5.194238) | 1.688897 / 2.142072 (-0.453176) | 0.643311 / 4.805227 (-4.161916) | 0.115426 / 6.500664 (-6.385238) | 0.040338 / 0.075469 (-0.035131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005322 / 1.841788 (-0.836466) | 11.779380 / 8.074308 (3.705072) | 10.041574 / 10.191392 (-0.149818) | 0.127617 / 0.680424 (-0.552807) | 0.015840 / 0.534201 (-0.518361) | 0.286905 / 0.579283 (-0.292378) | 0.275180 / 0.434364 (-0.159183) | 0.332498 / 0.540337 (-0.207840) | 0.410719 / 1.386936 (-0.976217) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32b206d47f582380f9c64578dcfa6c48252db3b8 \"CML watermark\")\n"
] | 2024-01-31T19:08:28 | 2024-02-07T19:39:00 | 2024-02-06T23:03:38 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6636",
"html_url": "https://github.com/huggingface/datasets/pull/6636",
"diff_url": "https://github.com/huggingface/datasets/pull/6636.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6636.patch",
"merged_at": "2024-02-06T23:03:38"
} | I work with bioinformatics data and often these tables have thousands and even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass in the model. When I perform `set_format('pt', columns=large_column_list)` , it can take several minutes before it finishes. The culprit is when the following check is performed: `any(col not in self._data.column_names for col in columns)`. Replacing this by `set(columns) - (self._data.column_names)` is more efficient. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6636/timeline | null | true |
End of preview. Expand
in Dataset Viewer.
annotations_creators:
- no-annotation language:
- en language_creators:
- found license:
- unknown multilinguality:
- monolingual pretty_name: Hugging Face Datasets Github Issues size_categories:
- unknown source_datasets:
- original tags:
- github
- github-issues
- datasets
- huggingface task_categories:
- text-classification
- text-retrieval task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval
- Downloads last month
- 149