| id | title | body | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|
2,550,841,506 | Update datasets to 3.0.1 | Update datasets to 3.0.1.
Fix #3068. | open | 2024-09-26T14:53:51Z | 2024-09-26T16:21:52Z | null | albertvillanova |
2,550,678,625 | Fix CI by deleting test_polars_struct_thread_panic_error | Delete `test_polars_struct_thread_panic_error`.
Note that the previous CI-Hub dataset (with the specific Parquet file) has been deleted.
Fix #3069. | closed | 2024-09-26T13:59:44Z | 2024-09-26T15:23:33Z | 2024-09-26T15:23:31Z | albertvillanova |
2,550,490,787 | CI worker test_polars_struct_thread_panic_error is broken | NOW:
CI worker test `test_polars_struct_thread_panic_error` raises `RepositoryNotFoundError`: https://github.com/huggingface/dataset-viewer/actions/runs/11053126786/job/30706767804?pr=3037
```
ERROR tests/job_runners/split/test_descriptive_statistics.py::test_polars_struct_thread_panic_error - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-66f560ea-0d3bd15934a45d7838ac6bb3;f1b1e4ce-2768-4f41-a5d8-f1c1c129addc)
Repository Not Found for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test_polars_panic_error/resolve/main/test_polars_panic_error.parquet.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
```
Direct cause is that the dataset has been deleted from the CI Hub: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test_polars_panic_error
BEFORE:
~~CI worker test `test_polars_struct_thread_panic_error` raises `LocalEntryNotFoundError`: https://github.com/huggingface/dataset-viewer/actions/runs/11052280214/job/30704011957~~
```
ERROR tests/job_runners/split/test_descriptive_statistics.py::test_polars_struct_thread_panic_error - huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
```
~~Direct cause is a 500 server error:~~
```
E huggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test_polars_panic_error/resolve/main/test_polars_panic_error.parquet (Request ID: Root=1-66f555d0-2480d6c65b27b8e7047e6f9c;093b5eeb-e881-4c93-bafd-12965f704b66)
E
E Cannot read properties of undefined (reading 'cloudfront') (internal error hidden in production)
``` | closed | 2024-09-26T12:50:09Z | 2024-09-26T15:23:32Z | 2024-09-26T15:23:32Z | albertvillanova |
2,549,939,957 | Update datasets to 3.0.1 | Release: https://github.com/huggingface/datasets/releases/tag/3.0.1
This should fix issues with missing fields in JSON-lines datasets. | open | 2024-09-26T08:50:12Z | 2024-09-26T08:50:12Z | null | albertvillanova |
2,543,318,612 | fine grained token for admin | Require a fine-grained token with repo.write permission to access the admin endpoints. | closed | 2024-09-23T18:09:11Z | 2024-09-24T09:46:56Z | 2024-09-24T09:46:54Z | lhoestq |
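A minimal sketch of such a permission check, assuming the `auth.accessToken` shape of the Hub's `/api/whoami-v2` response (the `fineGrained`/`scoped` fields and the `has_repo_write` helper are assumptions, not the dataset-viewer's actual implementation):

```python
import requests

def has_repo_write(token: str, hub_url: str = "https://huggingface.co") -> bool:
    # ask the Hub who the token belongs to and what it can do
    response = requests.get(
        f"{hub_url}/api/whoami-v2",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    access_token = response.json().get("auth", {}).get("accessToken", {})
    if access_token.get("role") == "write":  # classic write token
        return True
    # assumption: fine-grained scopes are listed under fineGrained.scoped
    fine_grained = access_token.get("fineGrained", {})
    return any(
        "repo.write" in scope.get("permissions", [])
        for scope in fine_grained.get("scoped", [])
    )
```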
2,541,131,118 | `sabareesh88/FNSPID_external` and `sabareesh88/FNSPID_nasdaq` datasets not converting to duckdb | [A user is reporting a few datasets](https://discord.com/channels/879548962464493619/1286797561986023464/1286797561986023464) where the Search / SQL Console isn't showing (_because the job for the duckdb branch is failing_)
- https://huggingface.co/datasets/sabareesh88/FNSPID_nasdaq
- https://huggingface.co/datasets/sabareesh88/FNSPID_external
I tried refreshing the job and the dataset, but I'm still getting `Job manager crashed while running this job (missing heartbeats).` for `split-duckdb-index`. | closed | 2024-09-22T15:29:18Z | 2024-09-22T15:31:08Z | 2024-09-22T15:31:08Z | cfahlgren1 |
2,538,934,712 | Update cryptography >=43.0.1 | Fix for https://github.com/huggingface/dataset-viewer/security/dependabot/610 | closed | 2024-09-20T14:11:20Z | 2024-09-20T17:59:13Z | 2024-09-20T17:59:11Z | AndreaFrancis |
2,533,155,513 | Upgrade huggingface_hub to 0.25.0 | https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0
| open | 2024-09-18T09:09:12Z | 2024-09-18T09:09:58Z | null | severo |
2,533,153,387 | Simplify test code where a dataset is set as gated | [huggingface_hub@0.25.0](https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0) provides an API to set a repository as gated.
We had included a custom version of `update_repo_settings` because it lacked a `gated` parameter. Now we can switch back to the `huggingface_hub` method.
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/jobs/cache_maintenance/tests/utils.py#L41
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/admin/tests/fixtures/hub.py#L24
https://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/worker/tests/fixtures/hub.py#L35 | open | 2024-09-18T09:08:14Z | 2024-09-18T09:09:31Z | null | severo |
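A minimal sketch of the switch, assuming the `update_repo_settings` signature described in the v0.25.0 release notes (the endpoint, token, and dataset name are placeholders):

```python
from huggingface_hub import HfApi

# hub-ci endpoint and token are placeholders for the test fixtures' values
api = HfApi(endpoint="https://hub-ci.huggingface.co", token="hf_...")

# mark the test dataset as gated with automatic approval;
# gated="manual" requires manual approval, gated=False disables gating
api.update_repo_settings(
    repo_id="__DUMMY_TRANSFORMERS_USER__/my_test_dataset",  # placeholder repo
    repo_type="dataset",
    gated="auto",
)
```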
2,531,125,441 | update developer guide | null | closed | 2024-09-17T13:09:25Z | 2024-09-20T13:40:54Z | 2024-09-20T13:40:52Z | severo |
2,525,474,056 | fix(chart): block admin metrics exposition | null | closed | 2024-09-13T18:58:34Z | 2024-09-18T05:02:10Z | 2024-09-18T05:02:08Z | rtrompier |
2,521,081,448 | add documentation page for api playground | add api playground to docs
_the playground doesn't support auth yet, but we plan to add it soon_
_**preview of docs page below**_
# Dataset Viewer API Playground
The [API Playground](https://huggingface.co/spaces/cfahlgren1/datasets-api-playground) is a space that allows you to make requests to the Dataset Viewer API and visualize the results in real-time. It's a great way to explore the capabilities of the API and test different queries.
<div class="flex justify-center">
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/datasets-api-playground.png"
alt="Dataset API Playground"
/>
</div>
| open | 2024-09-12T00:52:05Z | 2024-09-12T14:48:17Z | null | cfahlgren1 |
2,513,554,933 | Add relaion2b dataset opt-out results | Previously we had the laion2b results, but LAION re-released the dataset with improved safety-based filtering. | closed | 2024-09-09T10:23:54Z | 2024-09-09T10:37:33Z | 2024-09-09T10:37:31Z | lhoestq |
2,507,235,555 | Include splits in HF Croissant definitions. | null | open | 2024-09-05T09:16:08Z | 2024-09-26T08:23:15Z | null | ccl-core |
2,504,025,093 | Bump cryptography from 42.0.4 to 43.0.1 in /libs/libcommon in the pip group across 1 directory | [//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps the pip group with 1 update in the /libs/libcommon directory: [cryptography](https://github.com/pyca/cryptography).
Updates `cryptography` from 42.0.4 to 43.0.1
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>43.0.1 - 2024-09-03</p>
<pre><code>
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.3.2.
<p>.. _v43-0-0:</p>
<p>43.0.0 - 2024-07-20<br />
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for OpenSSL less than 1.1.1e has been
removed. Users on older version of OpenSSL will need to upgrade.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for LibreSSL < 3.8.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.3.1.</li>
<li>Updated the minimum supported Rust version (MSRV) to 1.65.0, from 1.63.0.</li>
<li>:func:<code>~cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key</code>
now enforces a minimum RSA key size of 1024-bit. Note that 1024-bit is still
considered insecure, users should generally use a key size of 2048-bits.</li>
<li>:func:<code>~cryptography.hazmat.primitives.serialization.pkcs7.serialize_certificates</code>
now emits ASN.1 that more closely follows the recommendations in :rfc:<code>2315</code>.</li>
<li>Added new :doc:<code>/hazmat/decrepit/index</code> module which contains outdated and
insecure cryptographic primitives.
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.CAST5</code>,
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.SEED</code>,
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.IDEA</code>, and
:class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.Blowfish</code>, which were
deprecated in 37.0.0, have been added to this module. They will be removed
from the <code>cipher</code> module in 45.0.0.</li>
<li>Moved :class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.TripleDES</code>
and :class:<code>~cryptography.hazmat.primitives.ciphers.algorithms.ARC4</code> into
:doc:<code>/hazmat/decrepit/index</code> and deprecated them in the <code>cipher</code> module.
They will be removed from the <code>cipher</code> module in 48.0.0.</li>
<li>Added support for deterministic
:class:<code>~cryptography.hazmat.primitives.asymmetric.ec.ECDSA</code> (:rfc:<code>6979</code>)</li>
<li>Added support for client certificate verification to the
:mod:<code>X.509 path validation <cryptography.x509.verification></code> APIs in the
form of :class:<code>~cryptography.x509.verification.ClientVerifier</code>,
:class:<code>~cryptography.x509.verification.VerifiedClient</code>, and
<code>PolicyBuilder</code>
:meth:<code>~cryptography.x509.verification.PolicyBuilder.build_client_verifier</code>.</li>
<li>Added Certificate
:attr:<code>~cryptography.x509.Certificate.public_key_algorithm_oid</code>
and Certificate Signing Request
:attr:<code>~cryptography.x509.CertificateSigningRequest.public_key_algorithm_oid</code>
to determine the :class:<code>~cryptography.hazmat._oid.PublicKeyAlgorithmOID</code>
Object Identifier of the public key found inside the certificate.</li>
<li>Added :attr:<code>~cryptography.x509.InvalidityDate.invalidity_date_utc</code>, a
timezone-aware alternative to the naïve <code>datetime</code> attribute
:attr:<code>~cryptography.x509.InvalidityDate.invalidity_date</code>.</li>
<li>Added support for parsing empty DN string in</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/a7733878281ca261c4ada04022fc706ba5de9d8b"><code>a773387</code></a> bump for 43.0.1 (<a href="https://redirect.github.com/pyca/cryptography/issues/11533">#11533</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/0393fef5758e55e3c7b3a3e6e5b77821c594a87f"><code>0393fef</code></a> Backport setuptools version ban (<a href="https://redirect.github.com/pyca/cryptography/issues/11526">#11526</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/6687bab97aef31d6ee6cc94ecc87a972137b5d4a"><code>6687bab</code></a> Bump openssl from 0.10.65 to 0.10.66 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/11320">#11320</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/11324">#11324</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/ebf14f2edc8536f36797979cb0e075e766d978c5"><code>ebf14f2</code></a> bump for 43.0.0 and update changelog (<a href="https://redirect.github.com/pyca/cryptography/issues/11311">#11311</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/42788a0353e0ca0d922b6b8b9bde77cbb1c65984"><code>42788a0</code></a> Fix exchange with keys that had Q automatically computed (<a href="https://redirect.github.com/pyca/cryptography/issues/11309">#11309</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/2dbdfb8f3913cb9cef08218fcd48a9b4eaa8b57d"><code>2dbdfb8</code></a> don't assign unused name (<a href="https://redirect.github.com/pyca/cryptography/issues/11310">#11310</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/ccc66e6cdf92f4c29012f86f44ad183161eccaad"><code>ccc66e6</code></a> Bump openssl from 0.10.64 to 0.10.65 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/11308">#11308</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/4310c8727b50fa5f713a0e863ee3defc0c831921"><code>4310c87</code></a> Bump sphinxcontrib-qthelp from 1.0.7 to 1.0.8 (<a href="https://redirect.github.com/pyca/cryptography/issues/11307">#11307</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f66a9c4b4fe9b87825872fef7a36c319b823f322"><code>f66a9c4</code></a> Bump sphinxcontrib-htmlhelp from 2.0.5 to 2.0.6 (<a href="https://redirect.github.com/pyca/cryptography/issues/11306">#11306</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/a8fcf18ee0bb0570bd4c9041cf387dc7a9c1968a"><code>a8fcf18</code></a> Bump openssl-sys from 0.9.102 to 0.9.103 in /src/rust (<a href="https://redirect.github.com/pyca/cryptography/issues/11305">#11305</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/42.0.4...43.0.1">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=42.0.4&new-version=43.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/dataset-viewer/network/alerts).
</details> | closed | 2024-09-04T00:08:11Z | 2024-09-20T17:59:55Z | 2024-09-20T17:59:47Z | dependabot[bot] |
2,502,373,594 | The stale bot is broken (403 forbidden from GitHub) | See https://github.com/huggingface/dataset-viewer/actions/runs/10646207687
> github.GithubException.GithubException: 403 {"message": "Resource not accessible by integration", "documentation_url": "https://docs.github.com/rest/issues/issues#update-an-issue", "status": "403"}
Some options:
- fix the issue (seems to be an issue with a token)
- delete the stale bot
- replace this custom stale bot with something more standard | open | 2024-09-03T09:27:57Z | 2024-09-03T09:28:12Z | null | severo |
2,499,053,306 | Improve estimated row count | Currently, the estimated row count assumes file sizes are consistent across the entire set; from what I've seen, this results in wildly inaccurate estimates for WebDataset. Typically WebDatasets are created with a set number of samples per file, so a simpler, more accurate estimate can be calculated from the row count of one shard multiplied by the total number of shards. | open | 2024-08-31T21:28:52Z | 2024-09-04T09:45:10Z | null | hlky |
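A minimal sketch of that estimate, assuming uniform shards (the `count_rows_in_shard` callable is hypothetical):

```python
from typing import Callable

def estimate_num_rows(
    shard_urls: list[str], count_rows_in_shard: Callable[[str], int]
) -> int:
    """Estimate total rows as rows-in-one-shard multiplied by the shard count."""
    if not shard_urls:
        return 0
    return count_rows_in_shard(shard_urls[0]) * len(shard_urls)
```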
2,495,811,839 | Image URL detection | [`is_image_url`](https://github.com/huggingface/dataset-viewer/blob/946b0788fa426007161f2077a70b5ae64b211cf8/libs/libcommon/src/libcommon/utils.py#L131-L134) relies on a filename and extension being present; however, in some cases an image URL does not contain a filename. Example [dataset](https://huggingface.co/datasets/bigdata-pw/SteamScreenshots) and example [URL](https://steamuserimages-a.akamaihd.net/ugc/910172100453203507/062F4787060B2E4E93EFC4631E96183B027A860B/). This could be improved by checking the `content-type` header of the response or checking for strings like "image" in the URL. | open | 2024-08-29T23:17:55Z | 2024-09-03T09:17:27Z | null | hlky |
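A minimal sketch of the `content-type` fallback, keeping the extension check as the fast path (the extension list is an illustrative subset, and network errors are treated as "not an image"):

```python
import requests

IMAGE_EXTENSIONS = (".png", ".jpg", ".jpeg", ".gif", ".webp")  # illustrative subset

def is_image_url(url: str, timeout: float = 2.0) -> bool:
    # fast path: filename with a known image extension
    if url.lower().split("?")[0].endswith(IMAGE_EXTENSIONS):
        return True
    # fallback: ask the server what the resource actually is
    try:
        response = requests.head(url, timeout=timeout, allow_redirects=True)
        return response.headers.get("content-type", "").startswith("image/")
    except requests.RequestException:
        return False
```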
2,492,135,837 | Investigate the most common errors | All the entries in the cache that have a cause exception, sorted by number of occurrences:
| error_code | cause_exception | count |
| --- | --- | --- |
| EmptyDatasetError | EmptyDatasetError | 17886 |
| DataFilesNotFoundError | DataFilesNotFoundError | 9233 |
| ComputationError | StatisticsComputationError | 6534 |
| ComputationError | TypeError | 3063 |
| DatasetGenerationCastError | DatasetGenerationCastError | 2403 |
| PreviousStepStillProcessingError | CachedArtifactNotFoundError | 2160 |
| DatasetGenerationError | ArrowInvalid | 2127 |
| FeaturesError | ArrowInvalid | 1543 |
| InfoError | HfHubHTTPError | 1354 |
| DatasetGenerationError | UnicodeDecodeError | 1211 |
| DatasetGenerationError | TypeError | 1176 |
| UnexpectedError | TypeError | 1033 |
| FeaturesError | ValueError | 969 |
| DatasetGenerationError | ArrowNotImplementedError | 943 |
| UnexpectedError | HfHubHTTPError | 896 |
| FeaturesError | UnicodeDecodeError | 868 |
| DatasetGenerationError | ValueError | 867 |
| SplitNamesFromStreamingError | SplitsNotFoundError | 844 |
| UnexpectedError | ValueError | 822 |
| InfoError | BrokenPipeError | 817 |
| InfoError | SplitsNotFoundError | 708 |
| ConfigNamesError | ImportError | 651 |
| UnexpectedError | ParserException | 621 |
| UnexpectedError | ReadTimeout | 463 |
| UnexpectedError | BinderException | 462 |
| DatasetGenerationError | ParserError | 445 |
| FeaturesError | ParserError | 436 |
| ComputationError | ZeroDivisionError | 421 |
| DatasetGenerationError | SchemaInferenceError | 387 |
| UnexpectedError | FileNotFoundError | 343 |
| ConfigNamesError | ValueError | 312 |
| PolarsParquetReadError | FileNotFoundError | 273 |
| FeaturesError | ZstdError | 266 |
| StreamingRowsError | ValueError | 266 |
| FileFormatMismatchBetweenSplitsError | ValueError | 221 |
| UnexpectedError | RuntimeError | 207 |
| UnexpectedError | PermissionError | 202 |
| UnexpectedError | UnidentifiedImageError | 195 |
| UnexpectedError | BadZipFile | 192 |
| PolarsParquetReadError | ComputeError | 180 |
| DatasetGenerationError | FileNotFoundError | 174 |
| RowsPostProcessingError | ValueError | 168 |
| ComputationError | UnidentifiedImageError | 166 |
| UnexpectedError | UnicodeDecodeError | 163 |
| UnexpectedError | EntryNotFoundError | 162 |
| FeaturesError | ArrowTypeError | 134 |
| UnexpectedError | ConnectionError | 133 |
| StreamingRowsError | CastError | 132 |
| DatasetGenerationError | KeyError | 131 |
| ComputationError | SchemaError | 124 |
| ConfigNamesError | BadZipFile | 111 |
| DatasetGenerationError | ArrowTypeError | 108 |
| UnexpectedError | ArrowInvalid | 104 |
| DatasetGenerationError | CastError | 104 |
| RowsPostProcessingError | KeyError | 103 |
| StreamingRowsError | OSError | 94 |
| UnexpectedError | ColumnNotFoundError | 93 |
| StreamingRowsError | RuntimeError | 89 |
| RowsPostProcessingError | UnidentifiedImageError | 86 |
| UnexpectedError | SchemaError | 83 |
| ConfigNamesError | FileNotFoundError | 82 |
| StreamingRowsError | ArrowInvalid | 80 |
| UnexpectedError | ReadError | 79 |
| StreamingRowsError | KeyError | 78 |
| UnexpectedError | ClientResponseError | 77 |
| UnexpectedError | ComputeError | 74 |
| InfoError | DatasetWithScriptNotSupportedError | 70 |
| RowsPostProcessingError | TypeError | 68 |
| InfoError | DatasetNotFoundError | 67 |
| CreateCommitError | RepositoryNotFoundError | 65 |
| FeaturesError | EmptyDataError | 65 |
| UnexpectedError | InvalidInputException | 58 |
| UnexpectedError | ServerDisconnectedError | 54 |
| ComputationError | ValueError | 54 |
| UnexpectedError | IOException | 52 |
| UnexpectedError | ConversionException | 50 |
| ConfigNamesError | TypeError | 47 |
| InfoError | ReadTimeout | 42 |
| DatasetGenerationError | EmptyDataError | 42 |
| StreamingRowsError | TypeError | 40 |
| UnexpectedError | KeyError | 38 |
| PolarsParquetReadError | ColumnNotFoundError | 38 |
| DatasetGenerationError | ConnectionError | 38 |
| DatasetGenerationError | BadZipFile | 37 |
| StreamingRowsError | UnicodeDecodeError | 36 |
| UnexpectedError | NonMatchingSplitsSizesError | 36 |
| UnexpectedError | OSError | 34 |
| DatasetGenerationError | ArrowCapacityError | 33 |
| UnexpectedError | ArrowTypeError | 33 |
| SplitNamesFromStreamingError | FileNotFoundError | 33 |
| DatasetGenerationError | GatedRepoError | 33 |
| InfoError | ConnectionError | 31 |
| RowsPostProcessingError | CouldntDecodeError | 30 |
| DatasetGenerationError | OverflowError | 26 |
| StreamingRowsError | UnidentifiedImageError | 25 |
| FeaturesError | OverflowError | 22 |
| StreamingRowsError | LibsndfileError | 21 |
| UnexpectedError | ArrowCapacityError | 20 |
| UnexpectedError | DecompressionBombError | 20 |
| StreamingRowsError | FileNotFoundError | 19 |
| UnexpectedError | NotImplementedError | 18 |
| RowsPostProcessingError | OSError | 18 |
| UnexpectedError | ZeroDivisionError | 18 |
| ComputationError | DecompressionBombError | 18 |
| NormalRowsError | DatasetGenerationError | 18 |
| DatasetGenerationError | ReadError | 17 |
| UnexpectedError | Error | 17 |
| FeaturesError | RuntimeError | 16 |
| UnexpectedError | DatasetGenerationError | 15 |
| UnexpectedError | IndexError | 14 |
| DatasetGenerationError | RuntimeError | 13 |
| UnexpectedError | JSONDecodeError | 13 |
| UnexpectedError | ParserError | 12 |
| DatasetGenerationError | NotImplementedError | 12 |
| ComputationError | ArrowInvalid | 12 |
| StreamingRowsError | NotImplementedError | 11 |
| StreamingRowsError | ParserError | 11 |
| ConfigNamesError | InvalidConfigName | 11 |
| DatasetGenerationError | ArrowIndexError | 11 |
| ComputationError | DuplicateError | 11 |
| StreamingRowsError | ArrowNotImplementedError | 10 |
| FeaturesError | HfHubHTTPError | 10 |
| FeaturesError | AttributeError | 9 |
| CreateCommitError | BadRequestError | 8 |
| StreamingRowsError | HfHubHTTPError | 8 |
| NormalRowsError | FileNotFoundError | 8 |
| FeaturesError | UnsupportedOperation | 8 |
| UnexpectedError | FSTimeoutError | 8 |
| ConfigNamesError | AttributeError | 8 |
| UnexpectedError | AttributeError | 8 |
| DatasetGenerationError | OSError | 7 |
| FeaturesError | ArrowCapacityError | 7 |
| DatasetGenerationError | AttributeError | 7 |
| ConfigNamesError | ReadTimeout | 7 |
| CreateCommitError | EntryNotFoundError | 6 |
| FeaturesError | BadGzipFile | 6 |
| FeaturesError | BadZipFile | 6 |
| NormalRowsError | DatasetGenerationCastError | 6 |
| ComputationError | InvalidOperationError | 6 |
| ComputationError | OverflowError | 5 |
| UnexpectedError | HTTPError | 5 |
| InfoError | ValueError | 5 |
| DatasetGenerationError | EOFError | 4 |
| InfoError | FileNotFoundError | 4 |
| FeaturesError | HTTPError | 4 |
| UnexpectedError | TypeMismatchException | 4 |
| NormalRowsError | HfHubHTTPError | 4 |
| UnexpectedError | DatasetGenerationCastError | 4 |
| UnexpectedError | InvalidOperationError | 4 |
| UnexpectedError | ClientConnectorError | 4 |
| StreamingRowsError | ReadError | 3 |
| ConfigNamesError | UnicodeDecodeError | 3 |
| FeaturesError | NotImplementedError | 3 |
| PolarsParquetReadError | error | 3 |
| NormalRowsError | OSError | 3 |
| UnexpectedError | ExpectedMoreSplits | 3 |
| FeaturesError | FileNotFoundError | 3 |
| DatasetGenerationError | HfHubHTTPError | 3 |
| UnexpectedError | error | 3 |
| UnexpectedError | UnpicklingError | 3 |
| StreamingRowsError | AssertionError | 3 |
| StreamingRowsError | EmptyDataError | 3 |
| StreamingRowsError | EntryNotFoundError | 2 |
| DatasetGenerationError | UnsupportedOperation | 2 |
| UnexpectedError | InternalException | 2 |
| DatasetGenerationError | error | 2 |
| ConfigNamesError | ScannerError | 2 |
| StreamingRowsError | ArrowCapacityError | 2 |
| RetryableConfigNamesError | HfHubHTTPError | 2 |
| StreamingRowsError | DecompressionBombError | 2 |
| SplitNamesFromStreamingError | HfHubHTTPError | 2 |
| ConfigNamesError | JSONDecodeError | 2 |
| ConfigNamesError | KeyError | 2 |
| InfoError | DataFilesNotFoundError | 2 |
| DatasetGenerationError | EntryNotFoundError | 2 |
| InfoError | BadZipFile | 2 |
| UnexpectedError | IsADirectoryError | 2 |
| UnexpectedError | DuplicateError | 2 |
| FeaturesError | KeyError | 1 |
| RowsPostProcessingError | SyntaxError | 1 |
| CreateCommitError | HfHubHTTPError | 1 |
| FeaturesError | ConnectionError | 1 |
| StreamingRowsError | error | 1 |
| StreamingRowsError | OverflowError | 1 |
| StreamingRowsError | HTTPError | 1 |
| StreamingRowsError | ArrowTypeError | 1 |
| StreamingRowsError | AttributeError | 1 |
| UnexpectedError | ChunkedEncodingError | 1 |
| UnexpectedError | EmptyDatasetError | 1 |
| RowsPostProcessingError | LibsndfileError | 1 |
| UnexpectedError | TransactionException | 1 |
| UnexpectedError | EOFError | 1 |
| SplitNamesFromStreamingError | ConnectionError | 1 |
| InfoError | JSONDecodeError | 1 |
| UnexpectedError | ClientPayloadError | 1 |
| FeaturesError | ReadTimeout | 1 |
| UnexpectedError | EmptyDataError | 1 |
| ConfigNamesError | IsADirectoryError | 1 |
| DatasetGenerationError | AssertionError | 1 |
```js
db.cachedResponsesBlue.aggregate([
{
$match: {
"details.copied_from_artifact": {"$exists": false},
"details.cause_exception": {"$exists": true},
},
},
{
$group: {
_id: {
error_code: "$error_code",
cause_exception: "$details.cause_exception",
},
count: {
$sum: 1
},
},
}, {
$sort: { count: -1 }
}
]);
```
| open | 2024-08-28T13:48:06Z | 2024-08-28T13:54:09Z | null | severo |
2,491,704,042 | Don't try to include binary cells in the /rows responses | https://huggingface.co/datasets/frutiemax/themoviedb_posters/discussions/2
> I have some columns that the viewer do not need to load i.e. T5 prompt embeds and VAE features. Currently, the viewer freezes because the dataset is too big to load. Thanks.
<img width="1019" alt="Capture dโeฬcran 2024-08-28 aฬ 12 31 40" src="https://github.com/user-attachments/assets/62684493-2980-4658-aef7-33ca6139af70">
https://huggingface.co/datasets/frutiemax/themoviedb_posters/viewer/default/train lags forever to render the following:
<img width="912" alt="Capture dโeฬcran 2024-08-28 aฬ 12 32 50" src="https://github.com/user-attachments/assets/00abc6a0-300b-4305-b803-242fc59bd42f">
We should ignore/hide the binary cells (`bytes (5KB)` for example) | open | 2024-08-28T10:33:37Z | 2024-08-28T10:34:22Z | null | severo |
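A minimal sketch of the idea (the `truncate_binary_cells` helper is hypothetical, not the current /rows code):

```python
def truncate_binary_cells(row: dict) -> dict:
    """Replace raw binary cell values with a small size placeholder."""
    return {
        column: f"bytes ({len(value) // 1024}KB)"
        if isinstance(value, (bytes, bytearray))
        else value
        for column, value in row.items()
    }

row = {"caption": "a movie poster", "t5_prompt_embeds": b"\x00" * 5 * 1024}
print(truncate_binary_cells(row))
# {'caption': 'a movie poster', 't5_prompt_embeds': 'bytes (5KB)'}
```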
2,490,707,558 | Fix typo "build > built" | null | closed | 2024-08-28T01:55:19Z | 2024-08-28T09:33:50Z | 2024-08-28T09:33:50Z | david4096 |
2,489,615,941 | remove obsolete retryable error codes | we don't have these error codes in the database anymore | closed | 2024-08-27T14:50:53Z | 2024-08-27T14:51:06Z | 2024-08-27T14:51:04Z | severo |
2,486,482,261 | no need to keep the trace of the first exception | fixes #3048
| closed | 2024-08-26T10:08:39Z | 2024-08-26T10:22:50Z | 2024-08-26T10:22:49Z | severo |
2,486,453,418 | Hide `CachedArtifactError` to users | For example, currently on https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions
```
Cannot load the dataset split (in streaming mode) to extract the first rows.
```
```
Error code: StreamingRowsError
Exception: KeyError
Message: 'jpeg'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 322, in compute
compute_first_rows_from_parquet_response(
File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 88, in compute_first_rows_from_parquet_response
rows_index = indexer.get_rows_index(
File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 640, in get_rows_index
return RowsIndex(
File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 521, in __init__
self.parquet_index = self._init_parquet_index(
File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 538, in _init_parquet_index
response = get_previous_step_or_raise(
File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 591, in get_previous_step_or_raise
raise CachedArtifactError(
libcommon.simple_cache.CachedArtifactError: The previous step failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/utils.py", line 96, in get_rows_or_raise
return get_rows(
File "/src/libs/libcommon/src/libcommon/utils.py", line 197, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/utils.py", line 73, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1816, in __iter__
for key, example in ex_iterable:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 238, in __iter__
for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 115, in _generate_examples
example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
KeyError: 'jpeg'
``` | Hide `CachedArtifactError` from users: For example, currently on https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions
```
Cannot load the dataset split (in streaming mode) to extract the first rows.
```
```
Error code: StreamingRowsError
Exception: KeyError
Message: 'jpeg'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 322, in compute
compute_first_rows_from_parquet_response(
File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 88, in compute_first_rows_from_parquet_response
rows_index = indexer.get_rows_index(
File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 640, in get_rows_index
return RowsIndex(
File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 521, in __init__
self.parquet_index = self._init_parquet_index(
File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 538, in _init_parquet_index
response = get_previous_step_or_raise(
File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 591, in get_previous_step_or_raise
raise CachedArtifactError(
libcommon.simple_cache.CachedArtifactError: The previous step failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/utils.py", line 96, in get_rows_or_raise
return get_rows(
File "/src/libs/libcommon/src/libcommon/utils.py", line 197, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/utils.py", line 73, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1816, in __iter__
for key, example in ex_iterable:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 238, in __iter__
for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 115, in _generate_examples
example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
KeyError: 'jpeg'
``` | closed | 2024-08-26T09:55:24Z | 2024-08-26T12:21:04Z | 2024-08-26T10:22:49Z | severo |
2,483,908,267 | call pre_compute() and post_compute() in tests | see https://github.com/huggingface/dataset-viewer/pull/3046#issuecomment-2307793018 | call pre_compute() and post_compute() in tests: see https://github.com/huggingface/dataset-viewer/pull/3046#issuecomment-2307793018 | closed | 2024-08-23T21:15:20Z | 2024-08-23T21:25:08Z | 2024-08-23T21:25:07Z | severo |
2,483,501,949 | remove support for script-based datasets | fixes #3004 | remove support for script-based datasets: fixes #3004 | closed | 2024-08-23T16:44:06Z | 2024-08-26T08:10:06Z | 2024-08-26T08:10:04Z | severo |
2,483,347,395 | mnist and zalando are now data-only | see #3004 | mnist and zalando are now data-only: see #3004 | closed | 2024-08-23T15:06:54Z | 2024-08-23T15:07:24Z | 2024-08-23T15:07:23Z | severo |
2,482,760,646 | Remove temporary retryable error codes | null | Remove temporary retryable error codes: | closed | 2024-08-23T09:48:03Z | 2024-08-23T09:48:12Z | 2024-08-23T09:48:11Z | severo |
2,482,714,622 | Update values.yaml | remove obsolete datasets glob | Update values.yaml: remove obsolete datasets glob | closed | 2024-08-23T09:24:20Z | 2024-08-23T09:24:38Z | 2024-08-23T09:24:37Z | severo |
2,480,869,744 | show the traceback of the cause; it's clearer | Current:
https://huggingface.co/datasets/Recag/Rp_CommonC_53
```
The dataset generation failed
Error code: DatasetGenerationError
Exception: DatasetGenerationError
Message: An error occurred while generating the dataset
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
dataset = json.load(f)
File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 8779)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
raise e
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 128
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
With this PR:
```
The dataset generation failed
Error code: DatasetGenerationError
Exception: pyarrow.lib.ArrowInvalid
Message: JSON parse error: Missing a closing quotation mark in string. in row 128
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
dataset = json.load(f)
File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 8779)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
raise e
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 128
``` | show the traceback of the cause; it's clearer: Current:
https://huggingface.co/datasets/Recag/Rp_CommonC_53
```
The dataset generation failed
Error code: DatasetGenerationError
Exception: DatasetGenerationError
Message: An error occurred while generating the dataset
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
dataset = json.load(f)
File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 8779)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
raise e
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 128
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
With this PR:
```
The dataset generation failed
Error code: DatasetGenerationError
Exception: pyarrow.lib.ArrowInvalid
Message: JSON parse error: Missing a closing quotation mark in string. in row 128
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
dataset = json.load(f)
File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 8779)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
raise e
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 128
``` | closed | 2024-08-22T13:59:04Z | 2024-08-23T09:47:14Z | 2024-08-22T14:38:12Z | severo |
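A minimal sketch of the reporting change shown above, with a hypothetical `get_reported_error` helper (names are illustrative, not the dataset-viewer API): when the caught exception chains a more informative `__cause__`, report the cause's type, message and traceback instead of the generic wrapper.

```python
import traceback

def get_reported_error(err: BaseException) -> dict[str, str]:
    # prefer the root cause when the exception explicitly chains one
    cause = err.__cause__ if err.__cause__ is not None else err
    return {
        "exception": f"{type(cause).__module__}.{type(cause).__qualname__}",
        "message": str(cause),
        "traceback": "".join(
            traceback.format_exception(type(cause), cause, cause.__traceback__)
        ),
    }
```

On the example above this yields `pyarrow.lib.ArrowInvalid` and its JSON parse message rather than the wrapping `DatasetGenerationError`.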
2,480,735,834 | Raise specific errors instead of ConfigNamesError when appropriate | It will help show them better in moonlanding
see #3010 | Raise specific errors instead of ConfigNamesError when appropriate: It will help show them better in moonlanding
see #3010 | closed | 2024-08-22T13:01:17Z | 2024-08-22T14:34:52Z | 2024-08-22T14:34:50Z | severo |
2,480,397,730 | Update datasets to 2.20.0 | Update datasets to 2.20.0.
This PR is intended to address the CI errors raised by this update, as a first step before:
- #3037
Fixes after the update of `datasets`:
- Pass `trust_remote_code=True` for script dataset: 227f2c4
- Use JSON-Lines (instead of JSON) dataset in `test_statistics_endpoint` to avoid pandas bug that downcasts float to int column: a86d040 | Update datasets to 2.20.0: Update datasets to 2.20.0.
This PR is intended to address the CI errors raised by this update, as a first step before:
- #3037
Fixes after the update of `datasets`:
- Pass `trust_remote_code=True` for script dataset: 227f2c4
- Use JSON-Lines (instead of JSON) dataset in `test_statistics_endpoint` to avoid pandas bug that downcasts float to int column: a86d040 | closed | 2024-08-22T10:16:02Z | 2024-08-23T05:05:45Z | 2024-08-23T05:05:43Z | albertvillanova |
2,480,259,294 | fix the lock release when finishing a job | fixes #1888 | fix the lock release when finishing a job: fixes #1888 | closed | 2024-08-22T09:16:32Z | 2024-08-22T09:48:28Z | 2024-08-22T09:48:27Z | severo |
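The general pattern such a fix enforces is releasing the lock in a `finally` block so it is freed even when finishing the job raises. A self-contained sketch with an in-process `threading.Lock`, purely for illustration rather than the project's actual locking primitives:

```python
import threading
from typing import Callable

_locks: dict[str, threading.Lock] = {}

def finish_job_safely(job_id: str, finish: Callable[[str], None]) -> None:
    lock = _locks.setdefault(job_id, threading.Lock())
    lock.acquire()
    try:
        finish(job_id)
    finally:
        # runs even if finish() raises, so a failing job cannot keep its lock
        lock.release()
```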
2,480,119,650 | Update huggingface-hub to 0.24.6 | Update huggingface-hub to 0.24.6.
Note that the fix:
- #2781
- https://github.com/huggingface/huggingface_hub/pull/2271
was released in : https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.1
Related to:
- #3037 | Update huggingface-hub to 0.24.6: Update huggingface-hub to 0.24.6.
Note that the fix:
- #2781
- https://github.com/huggingface/huggingface_hub/pull/2271
was released in : https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.1
Related to:
- #3037 | closed | 2024-08-22T08:07:47Z | 2024-08-22T08:28:47Z | 2024-08-22T08:28:45Z | albertvillanova |
2,479,891,224 | Update datasets to 2.21.0 | Update datasets to 2.21.0.
Fix #3024. | Update datasets to 2.21.0: Update datasets to 2.21.0.
Fix #3024. | closed | 2024-08-22T05:45:39Z | 2024-09-26T14:44:57Z | 2024-09-26T14:44:54Z | albertvillanova |
2,479,535,819 | More explicit test and comments about offset-naive datetimes read from mongo | I think it's enough to close #862 | More explicit test and comments about offset-naive datetimes read from mongo: I think it's enough to close #862 | closed | 2024-08-22T01:31:14Z | 2024-08-22T09:49:11Z | 2024-08-22T09:49:10Z | severo |
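For context on the pitfall: pymongo returns offset-naive datetimes (implicitly UTC) unless the client is created with `tz_aware=True`, so values read back must be normalized before being compared with timezone-aware ones. An illustrative helper, not the project's code:

```python
from datetime import datetime, timezone

def as_aware_utc(dt: datetime) -> datetime:
    # a naive datetime read from mongo is assumed to already be in UTC
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)
```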
2,479,450,462 | return 404 when accessing a renamed dataset, instead of unexpected error | fixes bug at https://github.com/huggingface/dataset-viewer/issues/2688#issuecomment-2303305529
| return 404 when accessing a renamed dataset, instead of unexpected error: fixes bug at https://github.com/huggingface/dataset-viewer/issues/2688#issuecomment-2303305529
| closed | 2024-08-22T00:07:40Z | 2024-08-22T00:11:13Z | 2024-08-22T00:11:11Z | severo |
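A hedged sketch of the behaviour this PR describes (the class and helper below are illustrative, not the service's actual API): if the Hub reports the repository under a different id than the one requested, the dataset was renamed, and the service should answer 404 instead of raising an unexpected error.

```python
class NotFoundError(Exception):
    """Mapped to an HTTP 404 response by the API layer (illustrative)."""

def raise_if_renamed(requested_dataset: str, hub_reported_id: str) -> None:
    # hub_reported_id would come from the Hub API answer for requested_dataset;
    # a mismatch means the repository was renamed or moved
    if hub_reported_id != requested_dataset:
        raise NotFoundError(f"Dataset '{requested_dataset}' does not exist on the Hub.")
```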
2,478,234,702 | import from the correct file | see
https://github.com/huggingface/huggingface_hub/pull/2474#issuecomment-2302132315 | import from the correct file: see
https://github.com/huggingface/huggingface_hub/pull/2474#issuecomment-2302132315 | closed | 2024-08-21T14:48:30Z | 2024-08-21T15:22:08Z | 2024-08-21T15:22:06Z | severo |
2,478,105,982 | Fix canonical dataset names | I replaced all the canonical datasets I could find in openapi.json and in the presidio list.
We may have more in the code (tests? docs?); to be handled in a next pass | Fix canonical dataset names: I replaced all the canonical datasets I could find in openapi.json and in the presidio list.
We may have more in the code (tests? docs?); to be handled in a next pass | closed | 2024-08-21T13:52:34Z | 2024-08-21T15:21:45Z | 2024-08-21T15:21:43Z | severo
2,477,650,983 | mediatype of opus is audio/ogg | I had set it to `audio/opus` which does not exist.
https://datatracker.ietf.org/doc/html/rfc7845.html#section-9
> An "Ogg Opus file" consists of one or more sequentially multiplexed
segments, each containing exactly one Ogg Opus stream. The
RECOMMENDED mime-type for Ogg Opus files is "audio/ogg".
> The RECOMMENDED filename extension for Ogg Opus files is '.opus'.
> When Opus is concurrently multiplexed with other streams in an Ogg
container, one SHOULD use one of the "audio/ogg", "video/ogg", or
"application/ogg" mime-types, as defined in [[RFC5334](https://datatracker.ietf.org/doc/html/rfc5334)].
https://github.com/user-attachments/assets/caed4ee5-975d-4233-9cf8-31fd755b2429
| mediatype of opus is audio/ogg: I had set it to `audio/opus` which does not exist.
https://datatracker.ietf.org/doc/html/rfc7845.html#section-9
> An "Ogg Opus file" consists of one or more sequentially multiplexed
segments, each containing exactly one Ogg Opus stream. The
RECOMMENDED mime-type for Ogg Opus files is "audio/ogg".
> The RECOMMENDED filename extension for Ogg Opus files is '.opus'.
> When Opus is concurrently multiplexed with other streams in an Ogg
container, one SHOULD use one of the "audio/ogg", "video/ogg", or
"application/ogg" mime-types, as defined in [[RFC5334](https://datatracker.ietf.org/doc/html/rfc5334)].
https://github.com/user-attachments/assets/caed4ee5-975d-4233-9cf8-31fd755b2429
| closed | 2024-08-21T10:17:43Z | 2024-08-21T13:18:01Z | 2024-08-21T10:39:11Z | severo |
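A toy version of the fix (the mapping is illustrative, not the project's actual table): serve `.opus` assets with `audio/ogg`, the RECOMMENDED mime-type quoted above from RFC 7845.

```python
AUDIO_MEDIATYPES = {
    ".wav": "audio/wav",
    ".mp3": "audio/mpeg",
    ".opus": "audio/ogg",  # RFC 7845 section 9: Ogg Opus files are audio/ogg
}

def mediatype_for(filename: str) -> str:
    suffix = "." + filename.rsplit(".", 1)[-1].lower()
    return AUDIO_MEDIATYPES.get(suffix, "application/octet-stream")
```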
2,477,650,534 | Update to new dataset ID in example | At the moment, the examples fail because rotten_tomatoes moved to https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes | Update to new dataset ID in example: At the moment, the examples fail because rotten_tomatoes moved to https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes | closed | 2024-08-21T10:17:31Z | 2024-08-21T13:31:54Z | 2024-08-21T13:31:34Z | davanstrien |
2,476,445,158 | [nit] admin and sse-api get requests from ALB on /healthcheck | <img width="680" alt="Capture d’écran 2024-08-20 à 22 01 39" src="https://github.com/user-attachments/assets/0a63090e-4fe0-4a26-9052-20cc07a844d3">
<img width="673" alt="Capture d’écran 2024-08-20 à 22 01 26" src="https://github.com/user-attachments/assets/aa4d2ad6-6721-435f-878c-a7d2abb08e7b">
| [nit] admin and sse-api get requests from ALB on /healthcheck: <img width="680" alt="Capture d’écran 2024-08-20 à 22 01 39" src="https://github.com/user-attachments/assets/0a63090e-4fe0-4a26-9052-20cc07a844d3">
<img width="673" alt="Capture d’écran 2024-08-20 à 22 01 26" src="https://github.com/user-attachments/assets/aa4d2ad6-6721-435f-878c-a7d2abb08e7b">
| closed | 2024-08-20T20:00:48Z | 2024-08-20T20:15:49Z | 2024-08-20T20:11:03Z | severo |
2,476,112,311 | add ingress for /sse | https://datasets-server.us.dev.moon.huggingface.tech/sse/healthcheck or https://datasets-server.us.dev.moon.huggingface.tech/sse/hub-cache are currently returning 404 not found | add ingress for /sse: https://datasets-server.us.dev.moon.huggingface.tech/sse/healthcheck or https://datasets-server.us.dev.moon.huggingface.tech/sse/hub-cache are currently returning 404 not found | closed | 2024-08-20T16:52:19Z | 2024-08-20T16:53:00Z | 2024-08-20T16:52:59Z | severo |
2,475,138,672 | Use huggingface_hub to access /auth-check | See https://github.com/huggingface/huggingface_hub/issues/2466
Occurrences: https://github.com/search?q=repo%3Ahuggingface%2Fdataset-viewer%20auth-check&type=code | Use huggingface_hub to access /auth-check: See https://github.com/huggingface/huggingface_hub/issues/2466
Occurrences: https://github.com/search?q=repo%3Ahuggingface%2Fdataset-viewer%20auth-check&type=code | open | 2024-08-20T09:17:33Z | 2024-09-18T09:09:36Z | null | severo |
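For reference, the raw HTTP call this issue proposes to replace looks roughly like the sketch below; the endpoint path mirrors the occurrences linked above, while the helper itself is illustrative.

```python
from typing import Optional

import requests

def is_dataset_authorized(
    dataset: str, token: Optional[str] = None, endpoint: str = "https://huggingface.co"
) -> bool:
    headers = {"authorization": f"Bearer {token}"} if token else {}
    response = requests.get(
        f"{endpoint}/api/datasets/{dataset}/auth-check", headers=headers, timeout=10
    )
    # 200 means the caller may access the dataset; 401/403 mean it may not
    return response.status_code == 200
```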
2,473,536,178 | Imagefolder: UnexpectedError with root cause: "[Errno 13] Permission denied: '/tmp/hf-datasets-cache/medium/datasets/....incomplete'" | We have some "UnexpectedError" with this kind of root cause:
https://huggingface.co/datasets/abhi1505/Drone_Data/discussions/9
```
[Errno 13] Permission denied: '/tmp/hf-datasets-cache/medium/datasets/59146511988270-config-parquet-and-info-abhi1505-Drone_Data-6928f977/downloads/a3d6d38d9159fb9d6429f28224c80ecc7a281b095c77183e4381751ea5f3ae72.incomplete'
```
https://huggingface.co/datasets/griffinbholt/augmented_waste_classification/discussions/1
```
[Errno 13] Permission denied: '/tmp/hf-datasets-cache/medium/datasets/26961854788134-config-parquet-and-info-griffinbholt-augmented_wa-56266047/downloads/28fcc0b914dfa1c8da84fbbad2f870ba1dcffb9ca308dab7c4347a9a2a930a5f.incomplete'
``` | Imagefolder: UnexpectedError with root cause: "[Errno 13] Permission denied: '/tmp/hf-datasets-cache/medium/datasets/....incomplete'": We have some "UnexpectedError" with this kind of root cause:
https://huggingface.co/datasets/abhi1505/Drone_Data/discussions/9
```
[Errno 13] Permission denied: '/tmp/hf-datasets-cache/medium/datasets/59146511988270-config-parquet-and-info-abhi1505-Drone_Data-6928f977/downloads/a3d6d38d9159fb9d6429f28224c80ecc7a281b095c77183e4381751ea5f3ae72.incomplete'
```
https://huggingface.co/datasets/griffinbholt/augmented_waste_classification/discussions/1
```
[Errno 13] Permission denied: '/tmp/hf-datasets-cache/medium/datasets/26961854788134-config-parquet-and-info-griffinbholt-augmented_wa-56266047/downloads/28fcc0b914dfa1c8da84fbbad2f870ba1dcffb9ca308dab7c4347a9a2a930a5f.incomplete'
``` | open | 2024-08-19T14:41:36Z | 2024-08-22T14:05:17Z | null | severo |
2,473,134,535 | Fix CI worker tests for gated datasets | Fix CI worker tests for gated datasets.
Fix #3025. | Fix CI worker tests for gated datasets: Fix CI worker tests for gated datasets.
Fix #3025. | closed | 2024-08-19T11:31:18Z | 2024-08-20T05:57:03Z | 2024-08-20T05:57:02Z | albertvillanova |
2,473,124,702 | CI worker tests are broken for gated datasets: ConnectionError | CI worker tests are broken for gated datasets: https://github.com/huggingface/dataset-viewer/actions/runs/10319430999/job/28567795990
```
FAILED tests/job_runners/config/test_split_names.py::test_compute_split_names_from_streaming_response[gated-False-SplitNamesFromStreamingError-DatasetNotFoundError] - AssertionError: assert 'ConnectionError' == 'DatasetNotFoundError'
- DatasetNotFoundError
+ ConnectionError
FAILED tests/job_runners/dataset/test_config_names.py::test_compute_splits_response[gated-False-ConfigNamesError-DatasetNotFoundError] - AssertionError: assert 'ConnectionError' == 'DatasetNotFoundError'
- DatasetNotFoundError
+ ConnectionError
```
| CI worker tests are broken for gated datasets: ConnectionError: CI worker tests are broken for gated datasets: https://github.com/huggingface/dataset-viewer/actions/runs/10319430999/job/28567795990
```
FAILED tests/job_runners/config/test_split_names.py::test_compute_split_names_from_streaming_response[gated-False-SplitNamesFromStreamingError-DatasetNotFoundError] - AssertionError: assert 'ConnectionError' == 'DatasetNotFoundError'
- DatasetNotFoundError
+ ConnectionError
FAILED tests/job_runners/dataset/test_config_names.py::test_compute_splits_response[gated-False-ConfigNamesError-DatasetNotFoundError] - AssertionError: assert 'ConnectionError' == 'DatasetNotFoundError'
- DatasetNotFoundError
+ ConnectionError
```
| closed | 2024-08-19T11:25:45Z | 2024-08-20T05:57:03Z | 2024-08-20T05:57:03Z | albertvillanova |
2,471,073,549 | Upgrade to [email protected] | https://github.com/huggingface/datasets/releases/tag/2.21.0
When done, we should refresh some datasets, like https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1#66bcd7e2f1685a3ade2e55f5 | Upgrade to [email protected]: https://github.com/huggingface/datasets/releases/tag/2.21.0
When done, we should refresh some datasets, like https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1#66bcd7e2f1685a3ade2e55f5 | closed | 2024-08-16T21:34:16Z | 2024-09-26T14:44:56Z | 2024-09-26T14:44:55Z | severo |
2,470,439,282 | fix anchor | found using https://github.com/raviqqe/muffet while checking for broken links
```
muffet --rate-limit 5 -i "https://huggingface.co/docs/dataset-viewer/*" https://huggingface.co/docs/dataset-viewer
``` | fix anchor: found using https://github.com/raviqqe/muffet while checking for broken links
```
muffet --rate-limit 5 -i "https://huggingface.co/docs/dataset-viewer/*" https://huggingface.co/docs/dataset-viewer
``` | closed | 2024-08-16T14:56:43Z | 2024-08-16T14:57:59Z | 2024-08-16T14:57:45Z | severo |
2,469,571,525 | Fix CI e2e admin test_metrics with missing dataset_status | Fix CI e2e admin test_metrics by adding missing `dataset_status="normal"` label.
This CI failure was introduced by:
- #3008
Fix #3021. | Fix CI e2e admin test_metrics with missing dataset_status: Fix CI e2e admin test_metrics by adding missing `dataset_status="normal"` label.
This CI failure was introduced by:
- #3008
Fix #3021. | closed | 2024-08-16T06:15:00Z | 2024-08-16T13:59:08Z | 2024-08-16T08:32:31Z | albertvillanova |
2,469,561,514 | CI e2e admin test_metrics is broken | CI e2e test_metrics is broken: https://github.com/huggingface/dataset-viewer/actions/runs/10319430994/job/28567795517
```
FAILED tests/test_31_admin_metrics.py::test_metrics - AssertionError: queue_jobs_total - queue=dataset-config-names found in {'starlette_requests_total{method="GET",path_template="/admin"}': 3.0, 'starlette_responses_total{method="GET",path_template="/admin",status_code="200"}': 2.0, 'starlette_requests_processing_time_seconds_sum{method="GET",path_template="/admin"}': 0.021941476999927545, 'starlette_requests_processing_time_seconds_bucket{le="0.005",method="GET",path_template="/admin"}': 1.0, 'starlette_requests_processing_time_seconds_bucket{le="0.01",method="GET",path_template="/admin"}': 1.0, 'starlette_requests_processing_time_seconds_bucket{le="0.025",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.05",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.075",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.1",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.25",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.5",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.75",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="1.0",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="2.5",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="5.0",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="7.5",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="10.0",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="+Inf",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_count{method="GET",path_template="/admin"}': 2.0, 'starlette_requests_in_progress{method="GET",path_template="/admin",pid="145"}': 1.0, 'starlette_requests_in_progress{method="GET",path_template="/admin",pid="23"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-config-names",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-filetypes",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-config-names",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-and-info",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-parquet",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-info",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-split-names",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-size",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-split-names",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-is-valid",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-opt-in-out-urls-count",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-duckdb-index-size",status="waiting"}': 1.0, 
'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-filetypes",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-modalities",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-duckdb-index-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-info",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-compatible-libraries",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-croissant-crumbs",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-is-valid",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-hub-cache",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-opt-in-out-urls-count",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-parquet",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-split-names",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-presidio-entities-count",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-split-names",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-first-rows",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-is-valid",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-opt-in-out-urls-count",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-and-info",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-info",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-size",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-modalities",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-compatible-libraries",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-croissant-crumbs",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-hub-cache",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-presidio-entities-count",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-is-valid",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-opt-in-out-urls-count",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-first-rows",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-is-valid",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-image-url-columns",status="waiting"}': 1.0, 
'queue_jobs_total{dataset_status="normal",pid="145",queue="config-info",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-metadata",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-is-valid",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-image-url-columns",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-scan",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-metadata",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-descriptive-statistics",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-presidio-scan",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-duckdb-index",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-scan",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-count",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-descriptive-statistics",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-duckdb-index",status="started"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-presidio-scan",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-count",status="started"}': 0.0, 'worker_size_jobs_count{pid="145",worker_size="medium"}': 1.0, 'worker_size_jobs_count{pid="145",worker_size="light"}': 8.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-config-names",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-filetypes",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-duckdb-index-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-info",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-is-valid",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-opt-in-out-urls-count",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-parquet",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-split-names",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-split-names",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-parquet-and-info",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-modalities",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-compatible-libraries",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-croissant-crumbs",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-hub-cache",pid="145"}': 7.0, 
'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-presidio-entities-count",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-duckdb-index-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-is-valid",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-opt-in-out-urls-count",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-first-rows",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-info",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-parquet",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-is-valid",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-image-url-columns",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-parquet-metadata",pid="145"}': 5.0, 'responses_in_cache_total{error_code="MissingSpawningTokenError",http_status="500",kind="split-opt-in-out-urls-scan",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-descriptive-statistics",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-duckdb-index",pid="145"}': 2.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-presidio-scan",pid="145"}': 2.0, 'responses_in_cache_total{error_code="MissingSpawningTokenError",http_status="500",kind="split-opt-in-out-urls-count",pid="145"}': 6.0, 'responses_in_cache_total{error_code="FileSystemError",http_status="500",kind="config-parquet-metadata",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="dataset-compatible-libraries",pid="145"}': 4.0, 'responses_in_cache_total{error_code="InfoError",http_status="500",kind="split-first-rows",pid="145"}': 2.0, 'responses_in_cache_total{error_code="FileSystemError",http_status="500",kind="split-descriptive-statistics",pid="145"}': 2.0, 'responses_in_cache_total{error_code="FileSystemError",http_status="500",kind="split-duckdb-index",pid="145"}': 2.0, 'responses_in_cache_total{error_code="PresidioScanNotEnabledForThisDataset",http_status="501",kind="split-presidio-scan",pid="145"}': 3.0, 'responses_in_cache_total{error_code="InfoError",http_status="500",kind="split-image-url-columns",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-first-rows",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-descriptive-statistics",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-duckdb-index",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-image-url-columns",pid="145"}': 2.0, 'responses_in_cache_total{error_code="StreamingRowsError",http_status="500",kind="split-first-rows",pid="145"}': 0.0, 'responses_in_cache_total{error_code="StreamingRowsError",http_status="500",kind="split-image-url-columns",pid="145"}': 0.0, 'responses_in_cache_total{error_code="NormalRowsError",http_status="500",kind="split-presidio-scan",pid="145"}': 1.0, 'parquet_metadata_disk_usage{pid="145",type="total"}': 77851254784.0, 
'parquet_metadata_disk_usage{pid="145",type="used"}': 76050628608.0, 'parquet_metadata_disk_usage{pid="145",type="free"}': 1783848960.0, 'parquet_metadata_disk_usage{pid="145",type="percent"}': 97.7}
assert False
+ where False = has_metric(name='queue_jobs_total', labels={'pid': '[0-9]*', 'queue': 'dataset-config-names', 'status': 'started'}, metric_names={'parquet_metadata_disk_usage{pid="145",type="free"}', 'parquet_metadata_disk_usage{pid="145",type="percent"}', 'parquet_metadata_disk_usage{pid="145",type="total"}', 'parquet_metadata_disk_usage{pid="145",type="used"}', 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="started"}', 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="waiting"}', ...})
```
I think this failure was introduced after merging: https://github.com/huggingface/dataset-viewer/actions/runs/10182877247/job/28845665481
- #3008 | CI e2e admin test_metrics is broken: CI e2e test_metrics is broken: https://github.com/huggingface/dataset-viewer/actions/runs/10319430994/job/28567795517
```
FAILED tests/test_31_admin_metrics.py::test_metrics - AssertionError: queue_jobs_total - queue=dataset-config-names found in {'starlette_requests_total{method="GET",path_template="/admin"}': 3.0, 'starlette_responses_total{method="GET",path_template="/admin",status_code="200"}': 2.0, 'starlette_requests_processing_time_seconds_sum{method="GET",path_template="/admin"}': 0.021941476999927545, 'starlette_requests_processing_time_seconds_bucket{le="0.005",method="GET",path_template="/admin"}': 1.0, 'starlette_requests_processing_time_seconds_bucket{le="0.01",method="GET",path_template="/admin"}': 1.0, 'starlette_requests_processing_time_seconds_bucket{le="0.025",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.05",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.075",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.1",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.25",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.5",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="0.75",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="1.0",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="2.5",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="5.0",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="7.5",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="10.0",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_bucket{le="+Inf",method="GET",path_template="/admin"}': 2.0, 'starlette_requests_processing_time_seconds_count{method="GET",path_template="/admin"}': 2.0, 'starlette_requests_in_progress{method="GET",path_template="/admin",pid="145"}': 1.0, 'starlette_requests_in_progress{method="GET",path_template="/admin",pid="23"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-config-names",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-filetypes",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-config-names",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-and-info",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-parquet",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-info",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-split-names",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-size",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-split-names",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-is-valid",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-opt-in-out-urls-count",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-duckdb-index-size",status="waiting"}': 1.0, 
'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-filetypes",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-modalities",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-duckdb-index-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-info",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-compatible-libraries",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-croissant-crumbs",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-is-valid",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-hub-cache",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-opt-in-out-urls-count",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-parquet",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-split-names",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-presidio-entities-count",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-split-names",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-first-rows",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-is-valid",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-opt-in-out-urls-count",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-and-info",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-info",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-size",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-modalities",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-compatible-libraries",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-croissant-crumbs",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-hub-cache",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="dataset-presidio-entities-count",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-is-valid",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-opt-in-out-urls-count",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-first-rows",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-is-valid",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-image-url-columns",status="waiting"}': 1.0, 
'queue_jobs_total{dataset_status="normal",pid="145",queue="config-info",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-metadata",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-size",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-is-valid",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-image-url-columns",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-scan",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-parquet-metadata",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-descriptive-statistics",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-presidio-scan",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-duckdb-index",status="waiting"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-scan",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-count",status="waiting"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-descriptive-statistics",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-duckdb-index",status="started"}': 1.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-presidio-scan",status="started"}': 0.0, 'queue_jobs_total{dataset_status="normal",pid="145",queue="split-opt-in-out-urls-count",status="started"}': 0.0, 'worker_size_jobs_count{pid="145",worker_size="medium"}': 1.0, 'worker_size_jobs_count{pid="145",worker_size="light"}': 8.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-config-names",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-filetypes",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-duckdb-index-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-info",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-is-valid",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-opt-in-out-urls-count",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-parquet",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-split-names",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-split-names",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-parquet-and-info",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-modalities",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-compatible-libraries",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-croissant-crumbs",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-hub-cache",pid="145"}': 7.0, 
'responses_in_cache_total{error_code="None",http_status="200",kind="dataset-presidio-entities-count",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-duckdb-index-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-is-valid",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-opt-in-out-urls-count",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-first-rows",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-info",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-parquet",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-size",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-is-valid",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-image-url-columns",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="config-parquet-metadata",pid="145"}': 5.0, 'responses_in_cache_total{error_code="MissingSpawningTokenError",http_status="500",kind="split-opt-in-out-urls-scan",pid="145"}': 7.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-descriptive-statistics",pid="145"}': 3.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-duckdb-index",pid="145"}': 2.0, 'responses_in_cache_total{error_code="None",http_status="200",kind="split-presidio-scan",pid="145"}': 2.0, 'responses_in_cache_total{error_code="MissingSpawningTokenError",http_status="500",kind="split-opt-in-out-urls-count",pid="145"}': 6.0, 'responses_in_cache_total{error_code="FileSystemError",http_status="500",kind="config-parquet-metadata",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="dataset-compatible-libraries",pid="145"}': 4.0, 'responses_in_cache_total{error_code="InfoError",http_status="500",kind="split-first-rows",pid="145"}': 2.0, 'responses_in_cache_total{error_code="FileSystemError",http_status="500",kind="split-descriptive-statistics",pid="145"}': 2.0, 'responses_in_cache_total{error_code="FileSystemError",http_status="500",kind="split-duckdb-index",pid="145"}': 2.0, 'responses_in_cache_total{error_code="PresidioScanNotEnabledForThisDataset",http_status="501",kind="split-presidio-scan",pid="145"}': 3.0, 'responses_in_cache_total{error_code="InfoError",http_status="500",kind="split-image-url-columns",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-first-rows",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-descriptive-statistics",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-duckdb-index",pid="145"}': 2.0, 'responses_in_cache_total{error_code="UnexpectedError",http_status="500",kind="split-image-url-columns",pid="145"}': 2.0, 'responses_in_cache_total{error_code="StreamingRowsError",http_status="500",kind="split-first-rows",pid="145"}': 0.0, 'responses_in_cache_total{error_code="StreamingRowsError",http_status="500",kind="split-image-url-columns",pid="145"}': 0.0, 'responses_in_cache_total{error_code="NormalRowsError",http_status="500",kind="split-presidio-scan",pid="145"}': 1.0, 'parquet_metadata_disk_usage{pid="145",type="total"}': 77851254784.0, 
'parquet_metadata_disk_usage{pid="145",type="used"}': 76050628608.0, 'parquet_metadata_disk_usage{pid="145",type="free"}': 1783848960.0, 'parquet_metadata_disk_usage{pid="145",type="percent"}': 97.7}
assert False
+ where False = has_metric(name='queue_jobs_total', labels={'pid': '[0-9]*', 'queue': 'dataset-config-names', 'status': 'started'}, metric_names={'parquet_metadata_disk_usage{pid="145",type="free"}', 'parquet_metadata_disk_usage{pid="145",type="percent"}', 'parquet_metadata_disk_usage{pid="145",type="total"}', 'parquet_metadata_disk_usage{pid="145",type="used"}', 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="started"}', 'queue_jobs_total{dataset_status="normal",pid="145",queue="config-duckdb-index-size",status="waiting"}', ...})
```
I think this failure was introduced after merging: https://github.com/huggingface/dataset-viewer/actions/runs/10182877247/job/28845665481
- #3008 | closed | 2024-08-16T06:07:49Z | 2024-08-16T08:32:32Z | 2024-08-16T08:32:32Z | albertvillanova |
2,467,918,813 | doc: Read parquet files with PySpark | Adding primary doc for reading parquet files with PySpark | doc: Read parquet files with PySpark : Adding primary doc for reading parquet files with PySpark | closed | 2024-08-15T12:00:40Z | 2024-08-19T12:05:18Z | 2024-08-19T12:05:17Z | AndreaFrancis |
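A minimal sketch of the workflow such a doc covers: download one of a dataset's auto-converted Parquet shards with `huggingface_hub`, then read it with PySpark. The repo id and shard filename below are illustrative, not taken from the PR.

```python
from huggingface_hub import hf_hub_download
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hf-parquet").getOrCreate()

# Fetch one Parquet shard from the auto-converted refs/convert/parquet branch.
# Repo id and filename are examples; list the real ones via the /parquet API.
path = hf_hub_download(
    repo_id="ibm/duorc",
    filename="ParaphraseRC/train/0000.parquet",
    repo_type="dataset",
    revision="refs/convert/parquet",
)
df = spark.read.parquet(path)
df.show(5)
```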
2,460,769,263 | Fix CI test StorageClient.url_preparator | Fix CI test StorageClient.url_preparator.
This failure in the CI tests was introduced after:
- #2966
See: https://github.com/huggingface/dataset-viewer/actions/runs/10350752694/job/28648303009?pr=3018
```
FAILED tests/test_response.py::test_create_response_with_image - AssertionError: assert [{'row': {'im...d_cells': []}] == [{'row': {'im...d_cells': []}]
At index 0 diff: {'row_idx': 0, 'row': {'image': {'src': 'http://localhost/cached-assets/ds_image/--/{dataset_git_revision}/--/default/train/0/image/image.jpg', 'height': 480, 'width': 640}}, 'truncated_cells': []} != {'row_idx': 0, 'row': {'image': {'src': 'http://localhost/cached-assets/ds_image/--/revision/--/default/train/0/image/image.jpg', 'height': 480, 'width': 640}}, 'truncated_cells': []}
Full diff:
[
{
'row': {
'image': {
'height': 480,
- 'src': 'http://localhost/cached-assets/ds_image/--/revision/--/default/train/0/image/image.jpg',
+ 'src': 'http://localhost/cached-assets/ds_image/--/{dataset_git_revision}/--/default/train/0/image/image.jpg',
? +++++++++++++ +
'width': 640,
},
},
'row_idx': 0,
'truncated_cells': [],
},
]
``` | Fix CI test StorageClient.url_preparator: Fix CI test StorageClient.url_preparator.
This failure in the CI tests was introduced after:
- #2966
See: https://github.com/huggingface/dataset-viewer/actions/runs/10350752694/job/28648303009?pr=3018
```
FAILED tests/test_response.py::test_create_response_with_image - AssertionError: assert [{'row': {'im...d_cells': []}] == [{'row': {'im...d_cells': []}]
At index 0 diff: {'row_idx': 0, 'row': {'image': {'src': 'http://localhost/cached-assets/ds_image/--/{dataset_git_revision}/--/default/train/0/image/image.jpg', 'height': 480, 'width': 640}}, 'truncated_cells': []} != {'row_idx': 0, 'row': {'image': {'src': 'http://localhost/cached-assets/ds_image/--/revision/--/default/train/0/image/image.jpg', 'height': 480, 'width': 640}}, 'truncated_cells': []}
Full diff:
[
{
'row': {
'image': {
'height': 480,
- 'src': 'http://localhost/cached-assets/ds_image/--/revision/--/default/train/0/image/image.jpg',
+ 'src': 'http://localhost/cached-assets/ds_image/--/{dataset_git_revision}/--/default/train/0/image/image.jpg',
? +++++++++++++ +
'width': 640,
},
},
'row_idx': 0,
'truncated_cells': [],
},
]
``` | closed | 2024-08-12T11:43:04Z | 2024-08-16T09:13:11Z | 2024-08-16T09:13:09Z | albertvillanova |
2,460,700,710 | Update aiohttp 3.10.2 min version to fix vulnerability | Update aiohttp 3.10.2 min version to fix vulnerability.
Fix 12 dependabot alerts. | Update aiohttp 3.10.2 min version to fix vulnerability: Update aiohttp 3.10.2 min version to fix vulnerability.
Fix 12 dependabot alerts. | closed | 2024-08-12T11:10:10Z | 2024-08-20T06:19:05Z | 2024-08-20T06:19:04Z | albertvillanova |
2,459,789,625 | Re-enable Polars as a supported library for jsonl with glob paths | Ref https://github.com/huggingface/dataset-viewer/pull/3006
Issue fixed by https://github.com/pola-rs/polars/pull/17958
| Re-enable Polars as a supported library for jsonl with glob paths: Ref https://github.com/huggingface/dataset-viewer/pull/3006
Issue fixed by https://github.com/pola-rs/polars/pull/17958
| open | 2024-08-11T22:32:25Z | 2024-08-15T01:06:50Z | null | nameexhaustion |
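A minimal sketch of the usage this re-enables, assuming a recent Polars release that includes the linked fix; the dataset path is hypothetical.

```python
import polars as pl

# Lazily scan newline-delimited JSON shards on the Hub via a glob path.
lf = pl.scan_ndjson("hf://datasets/username/my-jsonl-dataset/**/*.jsonl")
print(lf.head(5).collect())
```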
2,457,838,489 | fix docstring | null | fix docstring: | closed | 2024-08-09T12:43:19Z | 2024-08-09T12:43:28Z | 2024-08-09T12:43:26Z | severo |
2,457,830,514 | obsolete | null | obsolete: | closed | 2024-08-09T12:38:53Z | 2024-08-09T12:39:05Z | 2024-08-09T12:39:04Z | severo |
2,444,676,341 | Remove extra `label` column | In example dataset https://huggingface.co/datasets/datasets-examples/doc-audio-4, we have an "unexpected" label column with only `null` values.
<img width="676" alt="Capture dโeฬcran 2024-08-02 aฬ 12 33 10" src="https://github.com/user-attachments/assets/e8e8a4b9-4681-4bb0-b8b4-31e4b0373a0d">
I think it's due to a "collision" between the heuristics that define splits and/or classes based on the directories. There is a `drop_labels=True` option in the datasets library, if it helps.
Ideally, in this case, we should have two splits (train and test), and no additional `label` column.
I think the issue also exists with image datasets. | Remove extra `label` column: In example dataset https://huggingface.co/datasets/datasets-examples/doc-audio-4, we have an "unexpected" label column with only `null` values.
<img width="676" alt="Capture dโeฬcran 2024-08-02 aฬ 12 33 10" src="https://github.com/user-attachments/assets/e8e8a4b9-4681-4bb0-b8b4-31e4b0373a0d">
I think it's due to a "collision" between the heuristics that define splits and/or classes based on the directories. There is a `drop_labels=True` option in the datasets library, if it helps.
Ideally, in this case, we should have two splits (train and test), and no additional `label` column.
I think the issue also exists with image datasets. | open | 2024-08-02T10:37:48Z | 2024-08-02T10:38:19Z | null | severo |
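A minimal sketch of the `drop_labels` workaround mentioned above, assuming the repo resolves to the audiofolder builder (as the example dataset does).

```python
from datasets import load_dataset

# drop_labels=True tells the (audio|image)folder builder not to infer a
# "label" column from directory names.
ds = load_dataset("datasets-examples/doc-audio-4", drop_labels=True)
print(ds)  # expect train/test splits without the all-null "label" column
```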
2,442,778,523 | Add links to the Hub docs | fixes #2845 | Add links to the Hub docs: fixes #2845 | closed | 2024-08-01T15:35:32Z | 2024-08-01T21:50:02Z | 2024-08-01T21:50:00Z | severo |
2,442,267,210 | Update README.md | fixes #2668 | Update README.md: fixes #2668 | closed | 2024-08-01T11:58:35Z | 2024-08-01T15:11:02Z | 2024-08-01T15:11:00Z | severo |
2,442,175,079 | Replace `DatasetGenerationError` with the underlying error? | 50K cache entries (0.5%) have the `DatasetGenerationError`. It would be better to show the underlying error, and help the user debug their data files. | Replace `DatasetGenerationError` with the underlying error?: 50K cache entries (0.5%) have the `DatasetGenerationError`. It would be better to show the underlying error, and help the user debug their data files. | closed | 2024-08-01T11:10:29Z | 2024-08-22T14:38:13Z | 2024-08-22T14:38:13Z | severo |
2,442,173,496 | Replace ConfigNamesError with the underlying error | 100K cache entries (1%) have the `ConfigNamesError`. It would be better to show the underlying error, and help the user debug their data files. | Replace ConfigNamesError with the underlying error: 100K cache entries (1%) have the `ConfigNamesError`. It would be better to show the underlying error, and help the user debug their data files. | closed | 2024-08-01T11:09:46Z | 2024-08-23T09:38:17Z | 2024-08-22T14:34:51Z | severo |
2,441,056,538 | add a field to the index + it should fix the deployment issue | deployment (migration script) error:
```
INFO: 2024-07-31 22:33:14,880 - root - Start migrations
INFO: 2024-07-31 22:33:14,926 - root - 72 migrations have already been applied. They will be skipped.
INFO: 2024-07-31 22:33:14,927 - root - Migrate 20240731143600: add to the migrations collection
INFO: 2024-07-31 22:33:14,931 - root - Migrate 20240731143600: apply
INFO: 2024-07-31 22:33:14,931 - root - If missing, add the 'dataset_status' field with the default value 'normal' to the jobs metrics
INFO: 2024-07-31 22:33:14,941 - root - Migrate 20240731143600: validate
INFO: 2024-07-31 22:33:14,941 - root - Ensure that a random selection of jobs metrics have the 'dataset_status' field
ERROR: 2024-07-31 22:33:14,943 - root - Migration failed: An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }, full error: {'ok': 0.0, 'errmsg': 'An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }', 'code': 86, 'codeName': 'IndexKeySpecsConflict', '$clusterTime': {'clusterTime': Timestamp(1722465194, 566), 'signature': {'hash': b'\xa0C\xe2\xd5;Z\xfd:8\x8d\xfe\x0fX\x1a\xbd\x87\x94D\xe3\xd4', 'keyId': 7345684769667022921}}, 'operationTime': Timestamp(1722465194, 566)}
INFO: 2024-07-31 22:33:14,943 - root - Start rollback
INFO: 2024-07-31 22:33:14,943 - root - Rollback 20240731143600: roll back
INFO: 2024-07-31 22:33:14,943 - root - Remove the 'dataset_status' field from all the jobs metrics
INFO: 2024-07-31 22:33:14,952 - root - Rollback 20240731143600: removed from the migrations collection
INFO: 2024-07-31 22:33:14,956 - root - Rollback 20240731143600: done
INFO: 2024-07-31 22:33:14,956 - root - All executed migrations have been rolled back
ERROR: 2024-07-31 22:33:14,959 - root - An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }, full error: {'ok': 0.0, 'errmsg': 'An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }', 'code': 86, 'codeName': 'IndexKeySpecsConflict', '$clusterTime': {'clusterTime': Timestamp(1722465194, 566), 'signature': {'hash': b'\xa0C\xe2\xd5;Z\xfd:8\x8d\xfe\x0fX\x1a\xbd\x87\x94D\xe3\xd4', 'keyId': 7345684769667022921}}, 'operationTime': Timestamp(1722465194, 566)}
``` | add a field to the index + it should fix the deployment issue: deployment (migration script) error:
```
INFO: 2024-07-31 22:33:14,880 - root - Start migrations
INFO: 2024-07-31 22:33:14,926 - root - 72 migrations have already been applied. They will be skipped.
INFO: 2024-07-31 22:33:14,927 - root - Migrate 20240731143600: add to the migrations collection
INFO: 2024-07-31 22:33:14,931 - root - Migrate 20240731143600: apply
INFO: 2024-07-31 22:33:14,931 - root - If missing, add the 'dataset_status' field with the default value 'normal' to the jobs metrics
INFO: 2024-07-31 22:33:14,941 - root - Migrate 20240731143600: validate
INFO: 2024-07-31 22:33:14,941 - root - Ensure that a random selection of jobs metrics have the 'dataset_status' field
ERROR: 2024-07-31 22:33:14,943 - root - Migration failed: An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }, full error: {'ok': 0.0, 'errmsg': 'An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }', 'code': 86, 'codeName': 'IndexKeySpecsConflict', '$clusterTime': {'clusterTime': Timestamp(1722465194, 566), 'signature': {'hash': b'\xa0C\xe2\xd5;Z\xfd:8\x8d\xfe\x0fX\x1a\xbd\x87\x94D\xe3\xd4', 'keyId': 7345684769667022921}}, 'operationTime': Timestamp(1722465194, 566)}
INFO: 2024-07-31 22:33:14,943 - root - Start rollback
INFO: 2024-07-31 22:33:14,943 - root - Rollback 20240731143600: roll back
INFO: 2024-07-31 22:33:14,943 - root - Remove the 'dataset_status' field from all the jobs metrics
INFO: 2024-07-31 22:33:14,952 - root - Rollback 20240731143600: removed from the migrations collection
INFO: 2024-07-31 22:33:14,956 - root - Rollback 20240731143600: done
INFO: 2024-07-31 22:33:14,956 - root - All executed migrations have been rolled back
ERROR: 2024-07-31 22:33:14,959 - root - An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }, full error: {'ok': 0.0, 'errmsg': 'An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false }, existing index: { v: 2, unique: true, key: { job_type: 1, status: 1 }, name: "job_type_1_status_1", background: false, sparse: false }', 'code': 86, 'codeName': 'IndexKeySpecsConflict', '$clusterTime': {'clusterTime': Timestamp(1722465194, 566), 'signature': {'hash': b'\xa0C\xe2\xd5;Z\xfd:8\x8d\xfe\x0fX\x1a\xbd\x87\x94D\xe3\xd4', 'keyId': 7345684769667022921}}, 'operationTime': Timestamp(1722465194, 566)}
``` | closed | 2024-07-31T22:36:47Z | 2024-07-31T22:36:59Z | 2024-07-31T22:36:57Z | severo |
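The error boils down to creating an index whose auto-generated name collides with an existing index on the same keys but with different options (`unique=True`). A hedged sketch of one way to resolve it with pymongo; the connection string and collection name are assumptions.

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
metrics = client["queue"]["jobTotalMetric"]  # illustrative collection name

# Drop the conflicting auto-named index before recreating it with the new spec.
if "job_type_1_status_1" in metrics.index_information():
    metrics.drop_index("job_type_1_status_1")
metrics.create_index([("job_type", ASCENDING), ("status", ASCENDING)])
```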
2,439,978,856 | add dataset_status='normal|blocked' to job metrics | It will help show more precise graphs in Grafana, separating the jobs for blocked datasets from the rest of the jobs.
Two notes:
- the lookup for jobs in blocked datasets may not be optimal
- we only take this into account in the cron job that recomputes the metrics (i.e. every 10 minutes), not in every job creation/update/deletion. I think it's good enough | add dataset_status='normal|blocked' to job metrics: It will help show more precise graphs in Grafana, separating the jobs for blocked datasets from the rest of the jobs.
Two notes:
- the lookup for jobs in blocked datasets may not be optimal
- we only take this into account in the cron job that recomputes the metrics (i.e. every 10 minutes), not in every job creation/update/deletion. I think it's good enough | closed | 2024-07-31T12:48:01Z | 2024-07-31T15:12:10Z | 2024-07-31T15:12:08Z | severo |
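A sketch of how the extra label could surface in the Prometheus metrics that Grafana reads; the metric and label names mirror the `queue_jobs_total{dataset_status=...}` series visible earlier in this dataset, but the snippet itself is illustrative.

```python
from prometheus_client import Gauge

QUEUE_JOBS_TOTAL = Gauge(
    "queue_jobs_total",
    "Number of jobs per queue, status and dataset status",
    ["queue", "status", "dataset_status"],
)

# The cron job that recomputes metrics would set one series per combination.
QUEUE_JOBS_TOTAL.labels(
    queue="dataset-config-names", status="waiting", dataset_status="blocked"
).set(12)
```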
2,439,957,953 | Stats for datetimes | null | Stats for datetimes: | open | 2024-07-31T12:38:13Z | 2024-08-12T11:26:03Z | null | polinaeterna |
2,439,654,893 | fix 'FileNotFoundError' in polars detection function | We simply don't list Polars as a supported library when we can't detect whether the files are JSON or JSONL.
TODO: transform `**/*` to a list of files? In the case of `datasets/tcor0005/langchain-docs-400-chunksize`, it's a single file
---
The error we fix:
```
tests/job_runners/dataset/test_compatible_libraries.py:564:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/worker/job_runners/dataset/compatible_libraries.py:700: in get_polars_compatible_library
is_json_lines = ".jsonl" in first_file or HfFileSystem(token=hf_token).open(first_file, "r").read(1) != "["
.venv/lib/python3.9/site-packages/fsspec/spec.py:1281: in open
self.open(
.venv/lib/python3.9/site-packages/fsspec/spec.py:1293: in open
f = self._open(
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:236: in _open
return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:687: in __init__
self.details = fs.info(self.resolved_path.unresolve(), expand_info=False)
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:520: in info
self.ls(parent_path, expand_info=False)
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:294: in ls
_raise_file_not_found(path, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = 'datasets/tcor0005/langchain-docs-400-chunksize/**', err = None
def _raise_file_not_found(path: str, err: Optional[Exception]) -> NoReturn:
msg = path
if isinstance(err, RepositoryNotFoundError):
msg = f"{path} (repository not found)"
elif isinstance(err, RevisionNotFoundError):
msg = f"{path} (revision not found)"
elif isinstance(err, HFValidationError):
msg = f"{path} (invalid repository id)"
> raise FileNotFoundError(msg) from err
E FileNotFoundError: datasets/tcor0005/langchain-docs-400-chunksize/**
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:868: FileNotFoundError
``` | fix 'FileNotFoundError' in polars detection function: We simply don't list Polars as a supported library when we can't detect whether the files are JSON or JSONL.
TODO: transform `**/*` to a list of files? In the case of `datasets/tcor0005/langchain-docs-400-chunksize`, it's a single file
---
The error we fix:
```
tests/job_runners/dataset/test_compatible_libraries.py:564:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/worker/job_runners/dataset/compatible_libraries.py:700: in get_polars_compatible_library
is_json_lines = ".jsonl" in first_file or HfFileSystem(token=hf_token).open(first_file, "r").read(1) != "["
.venv/lib/python3.9/site-packages/fsspec/spec.py:1281: in open
self.open(
.venv/lib/python3.9/site-packages/fsspec/spec.py:1293: in open
f = self._open(
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:236: in _open
return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:687: in __init__
self.details = fs.info(self.resolved_path.unresolve(), expand_info=False)
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:520: in info
self.ls(parent_path, expand_info=False)
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:294: in ls
_raise_file_not_found(path, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = 'datasets/tcor0005/langchain-docs-400-chunksize/**', err = None
def _raise_file_not_found(path: str, err: Optional[Exception]) -> NoReturn:
msg = path
if isinstance(err, RepositoryNotFoundError):
msg = f"{path} (repository not found)"
elif isinstance(err, RevisionNotFoundError):
msg = f"{path} (revision not found)"
elif isinstance(err, HFValidationError):
msg = f"{path} (invalid repository id)"
> raise FileNotFoundError(msg) from err
E FileNotFoundError: datasets/tcor0005/langchain-docs-400-chunksize/**
.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py:868: FileNotFoundError
``` | closed | 2024-07-31T10:05:52Z | 2024-07-31T11:44:47Z | 2024-07-31T10:40:12Z | severo |
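A sketch of the guard described above (not the exact patch), based on the detection line shown in the traceback: fall back to "unknown" instead of raising when the glob pattern can't be resolved, so Polars is simply omitted from the compatible libraries.

```python
from typing import Optional

from huggingface_hub import HfFileSystem


def is_json_lines(first_file: str, hf_token: Optional[str] = None) -> Optional[bool]:
    """Return True/False when detectable, None when we can't tell."""
    if ".jsonl" in first_file:
        return True
    try:
        with HfFileSystem(token=hf_token).open(first_file, "r") as f:
            return f.read(1) != "["
    except FileNotFoundError:
        # e.g. an unresolved glob like "datasets/.../**" — skip Polars support
        return None
```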
2,438,220,970 | remove code for 'manual download' script datasets | fixes #2478 | remove code for 'manual download' script datasets: fixes #2478 | closed | 2024-07-30T16:23:17Z | 2024-07-31T08:23:01Z | 2024-07-31T08:22:59Z | severo |
2,438,216,759 | Remove allow list for script-based datasets | https://github.com/huggingface/dataset-viewer/blob/5c5be128c205ae5c1f55440178497db564d81868/chart/values.yaml#L83-L86
Then remove the code that allows script-based datasets in this repo + exception `DatasetModuleNotInstalledError`. Related to #2478.
| Remove allow list for script-based datasets: https://github.com/huggingface/dataset-viewer/blob/5c5be128c205ae5c1f55440178497db564d81868/chart/values.yaml#L83-L86
Then remove the code that allows script-based datasets in this repo + exception `DatasetModuleNotInstalledError`. Related to #2478.
| closed | 2024-07-30T16:20:49Z | 2024-08-26T08:10:05Z | 2024-08-26T08:10:05Z | severo |
2,437,703,019 | Fix json compatible libraries | json datasets currently don't have a "Use this dataset" button because the compatible-libraries job fails
note that this doesn't impact datasets with .jsonl files | Fix json compatible libraries: json datasets currently don't have a "Use this dataset" button because the compatible-libraries job fails
note that this doesn't impact datasets with .jsonl files | closed | 2024-07-30T12:19:46Z | 2024-07-30T12:28:51Z | 2024-07-30T12:28:50Z | lhoestq |
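For reference, the kind of snippet the compatible-libraries job produces for plain JSON files; the dataset path is hypothetical, and pandas resolves `hf://` URLs through `huggingface_hub`.

```python
import pandas as pd

# Read a JSON data file straight from the Hub (requires huggingface_hub installed).
df = pd.read_json("hf://datasets/username/my-json-dataset/data.json")
print(df.head())
```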
2,432,448,352 | increment version to recompute dataset-compatible-libraries | null | increment version to recompute dataset-compatible-libraries: | closed | 2024-07-26T15:12:47Z | 2024-07-26T15:13:01Z | 2024-07-26T15:12:59Z | severo |
2,430,004,954 | otherwise we have dataset_name: 'parquet' | Not sure what to do with the existing entries (we don't want to recompute all the entries).
Alternative: remove the field? | otherwise we have dataset_name: 'parquet': Not sure what to do with the existing entries (we don't want to recompute all the entries).
Alternative: remove the field? | closed | 2024-07-25T13:39:49Z | 2024-07-25T16:01:38Z | 2024-07-25T16:01:37Z | severo |
2,429,632,916 | publish dataset-hub-cache at /hub-cache?dataset=... | useful for local Hub development | publish dataset-hub-cache at /hub-cache?dataset=...: useful for local Hub development | closed | 2024-07-25T10:40:36Z | 2024-07-25T10:43:41Z | 2024-07-25T10:43:39Z | severo |
2,429,466,797 | Catch another UnexpectedException | https://discuss.huggingface.co/t/strange-problems-with-datasets-server/43871/8
https://huggingface.co/datasets/rippleripple/ProbMed
The error was in step `config-parquet-and-info`:
```
{
"error": "(ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2529cb6c-46ac-4c17-8e7a-44488f47d04e)')",
"cause_exception": "ConnectionError",
"cause_message": "(ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2529cb6c-46ac-4c17-8e7a-44488f47d04e)')",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 467, in _make_request\n six.raise_from(e, None)\n",
" File \"<string>\", line 3, in raise_from\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 462, in _make_request\n httplib_response = conn.getresponse()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 1377, in getresponse\n response.begin()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 320, in begin\n version, status, reason = self._read_status()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 289, in _read_status\n raise RemoteDisconnected(\"Remote end closed connection without\"\n",
"http.client.RemoteDisconnected: Remote end closed connection without response\n",
"\nDuring handling of the above exception, another exception occurred:\n\n",
"Traceback (most recent call last):\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py\", line 589, in send\n resp = conn.urlopen(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 801, in urlopen\n retries = retries.increment(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/retry.py\", line 552, in increment\n raise six.reraise(type(error), error, _stacktrace)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/packages/six.py\", line 769, in reraise\n raise value.with_traceback(tb)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 467, in _make_request\n six.raise_from(e, None)\n",
" File \"<string>\", line 3, in raise_from\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 462, in _make_request\n httplib_response = conn.getresponse()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 1377, in getresponse\n response.begin()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 320, in begin\n version, status, reason = self._read_status()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 289, in _read_status\n raise RemoteDisconnected(\"Remote end closed connection without\"\n",
"urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))\n",
"\nDuring handling of the above exception, another exception occurred:\n\n",
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 127, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1671, in compute\n compute_config_parquet_and_info_response(\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1577, in compute_config_parquet_and_info_response\n parquet_operations = convert_to_parquet(builder)\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1191, in convert_to_parquet\n builder.download_and_prepare(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1027, in download_and_prepare\n self._download_and_prepare(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1789, in _download_and_prepare\n super()._download_and_prepare(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py\", line 114, in _split_generators\n downloaded_files = dl_manager.download(files)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/download_manager.py\", line 257, in download\n downloaded_path_or_paths = map_nested(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 511, in map_nested\n mapped = [\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 512, in <listcomp>\n _single_map_nested((function, obj, batched, batch_size, types, None, True, None))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 380, in _single_map_nested\n return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 380, in <listcomp>\n return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/download_manager.py\", line 300, in _download_batched\n return thread_map(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1181, in __iter__\n for obj in iterable:\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/_base.py\", line 609, in result_iterator\n yield fs.pop().result()\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/_base.py\", line 446, in result\n return self.__get_result()\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/_base.py\", line 391, in __get_result\n raise self._exception\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/thread.py\", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/download_manager.py\", line 323, in _download_single\n out = cached_path(url_or_filename, download_config=download_config)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 201, in cached_path\n output_path = get_from_cache(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 676, in get_from_cache\n fsspec_get(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 385, in fsspec_get\n fs.get_file(path, temp_file.name, callback=callback)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py\", line 636, in get_file\n http_get(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 456, in http_get\n r = _request_wrapper(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 392, in _request_wrapper\n response = get_session().request(method=method, url=url, **params)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 724, in send\n history = [resp for resp in gen]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 724, in <listcomp>\n history = [resp for resp in gen]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 265, in resolve_redirects\n resp = self.send(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_http.py\", line 66, in send\n return super().send(request, *args, **kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py\", line 604, in send\n raise ConnectionError(err, request=request)\n",
"requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2529cb6c-46ac-4c17-8e7a-44488f47d04e)')\n"
]
}
``` | Catch another UnexpectedException: https://discuss.huggingface.co/t/strange-problems-with-datasets-server/43871/8
https://huggingface.co/datasets/rippleripple/ProbMed
The error was in step `config-parquet-and-info`:
```
{
"error": "(ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2529cb6c-46ac-4c17-8e7a-44488f47d04e)')",
"cause_exception": "ConnectionError",
"cause_message": "(ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2529cb6c-46ac-4c17-8e7a-44488f47d04e)')",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 467, in _make_request\n six.raise_from(e, None)\n",
" File \"<string>\", line 3, in raise_from\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 462, in _make_request\n httplib_response = conn.getresponse()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 1377, in getresponse\n response.begin()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 320, in begin\n version, status, reason = self._read_status()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 289, in _read_status\n raise RemoteDisconnected(\"Remote end closed connection without\"\n",
"http.client.RemoteDisconnected: Remote end closed connection without response\n",
"\nDuring handling of the above exception, another exception occurred:\n\n",
"Traceback (most recent call last):\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py\", line 589, in send\n resp = conn.urlopen(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 801, in urlopen\n retries = retries.increment(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/util/retry.py\", line 552, in increment\n raise six.reraise(type(error), error, _stacktrace)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/packages/six.py\", line 769, in reraise\n raise value.with_traceback(tb)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 467, in _make_request\n six.raise_from(e, None)\n",
" File \"<string>\", line 3, in raise_from\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 462, in _make_request\n httplib_response = conn.getresponse()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 1377, in getresponse\n response.begin()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 320, in begin\n version, status, reason = self._read_status()\n",
" File \"/usr/local/lib/python3.9/http/client.py\", line 289, in _read_status\n raise RemoteDisconnected(\"Remote end closed connection without\"\n",
"urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))\n",
"\nDuring handling of the above exception, another exception occurred:\n\n",
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_manager.py\", line 127, in process\n job_result = self.job_runner.compute()\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1671, in compute\n compute_config_parquet_and_info_response(\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1577, in compute_config_parquet_and_info_response\n parquet_operations = convert_to_parquet(builder)\n",
" File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1191, in convert_to_parquet\n builder.download_and_prepare(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1027, in download_and_prepare\n self._download_and_prepare(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1789, in _download_and_prepare\n super()._download_and_prepare(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py\", line 114, in _split_generators\n downloaded_files = dl_manager.download(files)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/download_manager.py\", line 257, in download\n downloaded_path_or_paths = map_nested(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 511, in map_nested\n mapped = [\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 512, in <listcomp>\n _single_map_nested((function, obj, batched, batch_size, types, None, True, None))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 380, in _single_map_nested\n return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 380, in <listcomp>\n return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/download_manager.py\", line 300, in _download_batched\n return thread_map(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1181, in __iter__\n for obj in iterable:\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/_base.py\", line 609, in result_iterator\n yield fs.pop().result()\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/_base.py\", line 446, in result\n return self.__get_result()\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/_base.py\", line 391, in __get_result\n raise self._exception\n",
" File \"/usr/local/lib/python3.9/concurrent/futures/thread.py\", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/download_manager.py\", line 323, in _download_single\n out = cached_path(url_or_filename, download_config=download_config)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 201, in cached_path\n output_path = get_from_cache(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 676, in get_from_cache\n fsspec_get(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 385, in fsspec_get\n fs.get_file(path, temp_file.name, callback=callback)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/hf_file_system.py\", line 636, in get_file\n http_get(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 456, in http_get\n r = _request_wrapper(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 392, in _request_wrapper\n response = get_session().request(method=method, url=url, **params)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 724, in send\n history = [resp for resp in gen]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 724, in <listcomp>\n history = [resp for resp in gen]\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 265, in resolve_redirects\n resp = self.send(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_http.py\", line 66, in send\n return super().send(request, *args, **kwargs)\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/requests/adapters.py\", line 604, in send\n raise ConnectionError(err, request=request)\n",
"requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2529cb6c-46ac-4c17-8e7a-44488f47d04e)')\n"
]
}
``` | open | 2024-07-25T09:20:17Z | 2024-07-25T09:23:17Z | null | severo |
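A self-contained sketch of the kind of guard that would turn this `UnexpectedError` into something actionable: retry dropped connections a few times before giving up. This is an illustration, not the job runner's actual code.

```python
import time

from requests.exceptions import ConnectionError as RequestsConnectionError


def download_with_retries(download, max_retries: int = 3, wait: float = 2.0):
    """Call `download()` again on dropped connections, with linear backoff."""
    for attempt in range(max_retries):
        try:
            return download()
        except RequestsConnectionError:
            if attempt == max_retries - 1:
                raise  # surface as a named, cacheable error instead of UnexpectedError
            time.sleep(wait * (attempt + 1))
```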
2,428,053,115 | bump croissant job version | recompute croissant files after https://github.com/huggingface/dataset-viewer/pull/2943
cc @marcenacp | bump croissant job version: recompute croissant files after https://github.com/huggingface/dataset-viewer/pull/2943
cc @marcenacp | closed | 2024-07-24T16:51:21Z | 2024-07-25T09:27:12Z | 2024-07-25T09:27:11Z | lhoestq |
2,427,760,387 | Add Polars loading code | Hello from the Polars team! We've recently added support for scanning Hugging Face datasets, and to make it easier for users we're hoping that Polars code snippets could be added under the "Use this dataset" section on the dataset viewer webpage, next to the other libraries.
Specifically, here's where we'd like to add Polars as an option:
<img width="247" alt="image" src="https://github.com/user-attachments/assets/53868c96-7e70-4489-a785-2e4287a69f52">
| Add Polars loading code: Hello from the Polars team! We've recently added support for scanning Hugging Face datasets, and to make it easier for users we're hoping that Polars code snippets could be added under the "Use this dataset" section on the dataset viewer webpage, next to the other libraries.
Specifically, here's where we'd like to add Polars as an option:
<img width="247" alt="image" src="https://github.com/user-attachments/assets/53868c96-7e70-4489-a785-2e4287a69f52">
| closed | 2024-07-24T14:34:14Z | 2024-07-26T14:07:14Z | 2024-07-26T13:42:24Z | nameexhaustion |
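The kind of snippet being requested, assuming a recent Polars release with `hf://` support; the dataset id and file layout are examples.

```python
import polars as pl

# Scan Parquet shards on the Hub lazily, then collect only what is needed.
lf = pl.scan_parquet("hf://datasets/roneneldan/TinyStories/data/train-*.parquet")
print(lf.head(5).collect())
```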
2,425,473,680 | Modalities not detected for some datasets using the Webdatasets format | I have found 2 examples of the modality detection code failing to recognize modalities in text and image datasets using the Webdataset format:
* https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions
* https://huggingface.co/datasets/CaptionEmporium/midjourney-niji-1m-llavanext
I'm not sure where in the modality detection code things are failing: https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/dataset/modalities.py | Modalities not detected for some datasets using the Webdatasets format: I have found 2 examples of the modality detection code failing to recognize modalities in text and image datasets using the Webdataset format:
* https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions
* https://huggingface.co/datasets/CaptionEmporium/midjourney-niji-1m-llavanext
I'm not sure where in the modality detection code things are failing: https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/dataset/modalities.py | open | 2024-07-23T15:15:38Z | 2024-08-23T15:04:07Z | null | ProGamerGov |
2,425,350,169 | move the docs page | This change should move https://huggingface.co/docs/datasets-server to https://huggingface.co/docs/dataset-viewer.
cc @mishig25: we also need to redirect https://huggingface.co/docs/datasets-server to https://huggingface.co/docs/dataset-viewer. Where do we do it? | move the docs page: This change should move https://huggingface.co/docs/datasets-server to https://huggingface.co/docs/dataset-viewer.
cc @mishig25: we also need to redirect https://huggingface.co/docs/datasets-server to https://huggingface.co/docs/dataset-viewer. Where do we do it? | closed | 2024-07-23T14:22:25Z | 2024-07-23T15:49:18Z | 2024-07-23T15:49:16Z | severo |
2,425,160,398 | Compute leaks between splits? | See https://huggingface.co/blog/lbourdois/lle
Also: should we find the duplicate rows? | Compute leaks between splits?: See https://huggingface.co/blog/lbourdois/lle
Also: should we find the duplicate rows? | open | 2024-07-23T13:00:39Z | 2024-07-23T15:19:13Z | null | severo |
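A naive sketch of a cross-split leak check (hash each row, intersect the sets); the dataset id is an example, and a real check would need normalization and a scalable join rather than in-memory sets.

```python
from datasets import load_dataset

ds = load_dataset("nyu-mll/glue", "mrpc")


def row_key(example) -> int:
    # Hash the stringified, column-sorted row as a crude row identity.
    return hash(tuple(str(example[c]) for c in sorted(example)))


train_keys = {row_key(ex) for ex in ds["train"]}
leaks = sum(row_key(ex) in train_keys for ex in ds["validation"])
print(f"{leaks} validation rows also appear in train")
```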
2,422,729,998 | replace configuration with subset where appropriate | see also https://github.com/huggingface/hub-docs/pull/1347. | replace configuration with subset where appropriate: see also https://github.com/huggingface/hub-docs/pull/1347. | closed | 2024-07-22T12:16:42Z | 2024-07-22T14:11:36Z | 2024-07-22T14:11:34Z | severo |
2,422,651,235 | Unescaped config names with special characters in the URL | When playing with mlcroissant, we observed the following issue:
[bigcode/commitpackft](https://huggingface.co/datasets/bigcode/commitpackft) has both the configs `c` and `c#`. When going to https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/c#/train/0.parquet, it lists https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/c/train/0.parquet (instead of https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/c%23/train/0.parquet).
Should dataset names / config names be escaped in the URLs?
cc @severo @lhoestq | Unescaped config names with special characters in the URL: When playing with mlcroissant, we observed the following issue:
[bigcode/commitpackft](https://huggingface.co/datasets/bigcode/commitpackft) has both the configs `c` and `c#`. When going to https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/c#/train/0.parquet, it lists https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/c/train/0.parquet (instead of https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/c%23/train/0.parquet).
Should dataset names / config names be escaped in the URLs?
cc @severo @lhoestq | open | 2024-07-22T11:37:09Z | 2024-07-29T08:30:36Z | null | marcenacp |
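For illustration, clients can percent-encode the config name themselves; everything after an unescaped `#` is treated as a URL fragment and is never sent to the server, which is why `c#` resolves as `c`.

```python
from urllib.parse import quote

config = "c#"
url = (
    "https://huggingface.co/api/datasets/bigcode/commitpackft/parquet/"
    f"{quote(config, safe='')}/train/0.parquet"
)
print(url)  # .../parquet/c%23/train/0.parquet
```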
2,422,430,142 | Create a special column type when it contains PDF bytes or PDF URL | In that case, we would generate an image (a thumbnail of the first page), store it as an asset, and use it to populate /first-rows and /rows for display in the dataset viewer.
asked internally on Slack: https://huggingface.slack.com/archives/C064HCHEJ2H/p1721215883166569 cc @Pleias | Create a special column type when it contains PDF bytes or PDF URL: In that case, we would generate an image (a thumbnail of the first page), store it as an asset, and use it to populate /first-rows and /rows for display in the dataset viewer.
asked internally on Slack: https://huggingface.slack.com/archives/C064HCHEJ2H/p1721215883166569 cc @Pleias | open | 2024-07-22T09:45:00Z | 2024-07-22T10:49:42Z | null | severo |
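A hedged sketch of generating the first-page thumbnail, using PyMuPDF as one possible backend; the viewer's actual asset pipeline would differ.

```python
import fitz  # PyMuPDF


def pdf_thumbnail(pdf_bytes: bytes, out_path: str = "thumb.png") -> str:
    """Render page 1 of a PDF to a PNG thumbnail to store as an asset."""
    doc = fitz.open(stream=pdf_bytes, filetype="pdf")
    pix = doc[0].get_pixmap(matrix=fitz.Matrix(0.5, 0.5))  # scale down 2x
    pix.save(out_path)
    return out_path
```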
2,418,023,119 | Inquiry about the Frontend Technologies Used in Dataset Viewer | Hi team,
I'm currently researching the Dataset Viewer project and would like to understand more about the frontend technologies used. Specifically, I'm interested in knowing:
1. Which frontend framework is being utilized (e.g., React, Vue, etc.)?
2. Are there any specific libraries or components being used for UI (e.g., Material-UI, Ant Design)?
3. Any other notable frontend tools or technologies that are part of this project?
Your assistance in providing these details would be greatly appreciated. Thank you for your time and effort!
Best regards | Inquiry about the Frontend Technologies Used in Dataset Viewer: Hi team,
I'm currently researching the Dataset Viewer project and would like to understand more about the frontend technologies used. Specifically, I'm interested in knowing:
1. Which frontend framework is being utilized (e.g., React, Vue, etc.)?
2. Are there any specific libraries or components being used for UI (e.g., Material-UI, Ant Design)?
3. Any other notable frontend tools or technologies that are part of this project?
Your assistance in providing these details would be greatly appreciated. Thank you for your time and effort!
Best regards | closed | 2024-07-19T06:06:20Z | 2024-08-19T13:42:45Z | 2024-08-19T13:42:44Z | jacob-rodgers-max |
2,410,308,970 | Update setuptools to 70.3.0 to fix vulnerability | Update setuptools to 70.3.0 to fix vulnerability.
It will close 1 Dependabot alert. | Update setuptools to 70.3.0 to fix vulnerability: Update setuptools to 70.3.0 to fix vulnerability.
It will close 1 Dependabot alert. | closed | 2024-07-16T06:20:29Z | 2024-07-17T09:18:47Z | 2024-07-17T09:18:45Z | albertvillanova |
2,405,926,121 | Count image urls as image modality | close https://github.com/huggingface/dataset-viewer/issues/2970 for images (audio can be taken care of later) | Count image urls as image modality: close https://github.com/huggingface/dataset-viewer/issues/2970 for images (audio can be taken care of later) | closed | 2024-07-12T16:07:30Z | 2024-07-15T16:48:12Z | 2024-07-15T16:48:10Z | lhoestq |
2,405,821,839 | fix text and arrow format | fix "0 dataset in text format" reported at https://x.com/jedmaczan/status/1809280782588158452
(+ detect arrow format as well)
![image](https://github.com/user-attachments/assets/fa328156-7650-4aa2-a828-41d5ab13c37c)
I'll re-launch the job for text datasets | fix text and arrow format: fix "0 dataset in text format" reported at https://x.com/jedmaczan/status/1809280782588158452
(+ detect arrow format as well)
![image](https://github.com/user-attachments/assets/fa328156-7650-4aa2-a828-41d5ab13c37c)
I'll re-launch the job for text datasets | closed | 2024-07-12T15:05:54Z | 2024-07-16T09:32:17Z | 2024-07-12T15:06:45Z | lhoestq |
2,405,408,924 | Include code snippets for other libraries? | For example, in https://github.com/huggingface/huggingface.js/pull/797, we add `distilabel`, `fiftyone` and `argilla` to the list of libraries the Hub knows. However, the aim is only to handle the user-defined tags better, not to show code snippets.
In this issue, I propose to discuss whether we should expand the list of dataset libraries for which we show code snippets. For now, we support pandas, HF datasets, webdatasets, mlcroissant and dask.
We already mentioned polars as a potential new lib, I think. Maybe duckdb too? | Include code snippets for other libraries?: For example, in https://github.com/huggingface/huggingface.js/pull/797, we add `distilabel`, `fiftyone` and `argilla` to the list of libraries the Hub knows. However, the aim is only to handle the user-defined tags better, not to show code snippets.
In this issue, I propose to discuss whether we should expand the list of dataset libraries for which we show code snippets. For now, we support pandas, HF datasets, webdatasets, mlcroissant and dask.
We already mentioned polars as a potential new lib, I think. Maybe duckdb too? | open | 2024-07-12T11:57:43Z | 2024-07-12T14:39:59Z | null | severo |
2,403,541,322 | Skip smart update when language tag is updated | null | Skip smart update when language tag is updated: | closed | 2024-07-11T15:56:59Z | 2024-07-11T16:11:39Z | 2024-07-11T16:11:37Z | AndreaFrancis |
2,399,927,837 | Update zipp to 3.19.2 to fix vulnerability | Update zipp to 3.19.2 to fix vulnerability.
It will close 1 Dependabot alert. | Update zipp to 3.19.2 to fix vulnerability: Update zipp to 3.19.2 to fix vulnerability.
It will close 1 Dependabot alert. | closed | 2024-07-10T07:04:00Z | 2024-07-10T07:06:44Z | 2024-07-10T07:06:42Z | albertvillanova |