id
int64
959M
2.55B
title
stringlengths
3
133
body
stringlengths
1
65.5k
description
stringlengths
5
65.6k
state
stringclasses
2 values
created_at
stringlengths
20
20
updated_at
stringlengths
20
20
closed_at
stringlengths
20
20
user
stringclasses
174 values
2,391,405,227
Fix dataset name when decreasing metrics
null
Fix dataset name when decreasing metrics:
closed
2024-07-04T19:58:29Z
2024-07-08T12:04:37Z
2024-07-04T22:40:16Z
AndreaFrancis
2,391,185,983
[Modalities] Account for image URLs dataset for Image modality
right now datasets like https://huggingface.co/datasets/CaptionEmporium/coyo-hd-11m-llavanext are missing the Image modality even though they have image URLs
[Modalities] Account for image URLs dataset for Image modality: right now datasets like https://huggingface.co/datasets/CaptionEmporium/coyo-hd-11m-llavanext are missing the Image modality even though they have image URLs
closed
2024-07-04T16:28:28Z
2024-07-15T16:48:11Z
2024-07-15T16:48:10Z
lhoestq
2,390,869,164
Add threshold to modalities from filetypes
Fix modalities false positives for - https://huggingface.co/datasets/chenxx1/jia - https://huggingface.co/datasets/proj-persona/PersonaHub - https://huggingface.co/datasets/BAAI/Infinity-Instruct - https://huggingface.co/datasets/m-a-p/COIG-CQIA - https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered I added two thresholds to get less false positives: - one general threshold to ignore file types that are <10% of the files - one additional threshold specific to images in presence of csv/json/parquet, since images are often used in README as figures cc @severo This should take care of most false positives, and we can refine later if needed
Add threshold to modalities from filetypes: Fix modalities false positives for - https://huggingface.co/datasets/chenxx1/jia - https://huggingface.co/datasets/proj-persona/PersonaHub - https://huggingface.co/datasets/BAAI/Infinity-Instruct - https://huggingface.co/datasets/m-a-p/COIG-CQIA - https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered I added two thresholds to get less false positives: - one general threshold to ignore file types that are <10% of the files - one additional threshold specific to images in presence of csv/json/parquet, since images are often used in README as figures cc @severo This should take care of most false positives, and we can refine later if needed
closed
2024-07-04T13:23:59Z
2024-07-04T15:19:47Z
2024-07-04T15:19:45Z
lhoestq
2,389,086,231
WIP: Try to get languages from librarian bot PR for FTS
This PR is still in progress, but it is a suggestion about how to get the language from open PRs from the librarian bot. Please let me know what you think. Pending: tests, refactor. For https://huggingface.co/datasets/Osumansan/data-poison/commit/22432ba97e6c559891bd82ca084496a7f8a6699f.diff , it was able to identify 'ko' language, but since it is not supported by DuckDB, it assigns 'none' as stemmer. cc. @davanstrien
WIP: Try to get languages from librarian bot PR for FTS : This PR is still in progress, but it is a suggestion about how to get the language from open PRs from the librarian bot. Please let me know what you think. Pending: tests, refactor. For https://huggingface.co/datasets/Osumansan/data-poison/commit/22432ba97e6c559891bd82ca084496a7f8a6699f.diff , it was able to identify 'ko' language, but since it is not supported by DuckDB, it assigns 'none' as stemmer. cc. @davanstrien
closed
2024-07-03T17:05:20Z
2024-07-05T12:49:45Z
2024-07-05T12:49:45Z
AndreaFrancis
2,388,763,873
Add duration to cached steps
Will close https://github.com/huggingface/dataset-viewer/issues/2892 As suggested in https://github.com/huggingface/dataset-viewer/pull/2908#pullrequestreview-2126425919, this adds duration field to cached responses. Duration is computed using `started_at` field of a corresponding job. Note that this PR adds new fields to major libcommon dtos `JobInfo` and `JobResult`.
Add duration to cached steps: Will close https://github.com/huggingface/dataset-viewer/issues/2892 As suggested in https://github.com/huggingface/dataset-viewer/pull/2908#pullrequestreview-2126425919, this adds duration field to cached responses. Duration is computed using `started_at` field of a corresponding job. Note that this PR adds new fields to major libcommon dtos `JobInfo` and `JobResult`.
closed
2024-07-03T14:17:47Z
2024-07-09T13:06:37Z
2024-07-09T13:06:35Z
polinaeterna
2,388,514,107
Use placeholder revision in urls in cached responses
Previously, we were using the dataset revision in the URLs of image/audio files of cached responses of /first-rows. However when a dataset gets its README updated, we update the `dataset_git_revision` of the cache entries and the location of the image/audio files on S3 but we don't modify the revision in the URLs in the cached response. This resulted in the Viewer not showing the images after modifying the readme of a dataset. <img width="1020" alt="image" src="https://github.com/huggingface/dataset-viewer/assets/42851186/6242d9d6-bc2e-42b1-9edd-9f01173b6cd8"> I fixed that for future datasets by not using the revision in the URLs anymore and use a placeholder that is replaced by `dataset_git_revision` when the cached response is accessed ## Implementation details I modified the URL Signer logic to also insert the revision in the URL and renamed it to a URL Preparator. It takes care of inserting the revision and signing the URLs. close https://github.com/huggingface/dataset-viewer/issues/2965
Use placeholder revision in urls in cached responses: Previously, we were using the dataset revision in the URLs of image/audio files of cached responses of /first-rows. However when a dataset gets its README updated, we update the `dataset_git_revision` of the cache entries and the location of the image/audio files on S3 but we don't modify the revision in the URLs in the cached response. This resulted in the Viewer not showing the images after modifying the readme of a dataset. <img width="1020" alt="image" src="https://github.com/huggingface/dataset-viewer/assets/42851186/6242d9d6-bc2e-42b1-9edd-9f01173b6cd8"> I fixed that for future datasets by not using the revision in the URLs anymore and use a placeholder that is replaced by `dataset_git_revision` when the cached response is accessed ## Implementation details I modified the URL Signer logic to also insert the revision in the URL and renamed it to a URL Preparator. It takes care of inserting the revision and signing the URLs. close https://github.com/huggingface/dataset-viewer/issues/2965
closed
2024-07-03T12:33:20Z
2024-07-15T17:27:48Z
2024-07-15T17:27:46Z
lhoestq
2,388,513,656
Viewer doesn't show images properly after a smart update
we move the images on s3 in case of modified readme, but we don't update the location of the images in the first-rows responses
Viewer doesn't show images properly after a smart update: we move the images on s3 in case of modified readme, but we don't update the location of the images in the first-rows responses
closed
2024-07-03T12:33:07Z
2024-07-15T17:27:47Z
2024-07-15T17:27:47Z
lhoestq
2,385,415,052
Viewer shows outdated cache after renaming a repo and creating a new one with the old name
Reported by @lewtun (internal link: https://huggingface.slack.com/archives/C02EMARJ65P/p1719818961944059): > If I rename a dataset via the UI from D -> D' and then create a new dataset with the same name D, I seem to get a copy instead of an empty dataset > Indeed it was the dataset viewer showing a cached result - the git history is clean and there's no files in the new dataset repo
Viewer shows outdated cache after renaming a repo and creating a new one with the old name: Reported by @lewtun (internal link: https://huggingface.slack.com/archives/C02EMARJ65P/p1719818961944059): > If I rename a dataset via the UI from D -> D' and then create a new dataset with the same name D, I seem to get a copy instead of an empty dataset > Indeed it was the dataset viewer showing a cached result - the git history is clean and there's no files in the new dataset repo
open
2024-07-02T07:05:53Z
2024-08-23T14:16:39Z
null
albertvillanova
2,384,098,328
Fix ISO 639-1 mapping for stemming
null
Fix ISO 639-1 mapping for stemming:
closed
2024-07-01T15:08:12Z
2024-07-01T15:33:48Z
2024-07-01T15:33:46Z
AndreaFrancis
2,378,565,820
Removing has_fts field from split-duckdb-index
Context: https://github.com/huggingface/dataset-viewer/pull/2928#discussion_r1652733919 I'm removing the `has_fts` field from `split-duckdb-index`, given that now, the `stemmer` field will indicate if the split supports the feature. `stemmer`=None means no FTS; otherwise, it does.
Removing has_fts field from split-duckdb-index: Context: https://github.com/huggingface/dataset-viewer/pull/2928#discussion_r1652733919 I'm removing the `has_fts` field from `split-duckdb-index`, given that now, the `stemmer` field will indicate if the split supports the feature. `stemmer`=None means no FTS; otherwise, it does.
closed
2024-06-27T16:08:56Z
2024-07-01T15:58:18Z
2024-07-01T15:58:17Z
AndreaFrancis
2,378,431,229
update test_plan_job_creation_and_termination
...to fix the CI after https://github.com/huggingface/dataset-viewer/pull/2958
update test_plan_job_creation_and_termination: ...to fix the CI after https://github.com/huggingface/dataset-viewer/pull/2958
closed
2024-06-27T15:11:13Z
2024-06-28T08:17:24Z
2024-06-27T15:26:52Z
lhoestq
2,378,383,575
Detect rename in smart update
reported in https://huggingface.co/datasets/crossingminds/shopping-queries-image-dataset/discussions/2
Detect rename in smart update: reported in https://huggingface.co/datasets/crossingminds/shopping-queries-image-dataset/discussions/2
closed
2024-06-27T14:50:49Z
2024-06-27T15:38:41Z
2024-06-27T15:38:39Z
lhoestq
2,378,123,069
add diagram to docs
fixes #2956
add diagram to docs: fixes #2956
closed
2024-06-27T13:09:52Z
2024-06-27T15:22:21Z
2024-06-27T15:22:18Z
severo
2,377,961,711
Remove blocked only job types
fixes #2957
Remove blocked only job types: fixes #2957
closed
2024-06-27T11:59:46Z
2024-06-27T13:17:06Z
2024-06-27T13:17:04Z
severo
2,377,929,968
Remove logic for `WORKER_JOB_TYPES_BLOCKED` and `WORKER_JOB_TYPES_ONLY`
they are not used anymore (empty lists).
Remove logic for `WORKER_JOB_TYPES_BLOCKED` and `WORKER_JOB_TYPES_ONLY`: they are not used anymore (empty lists).
closed
2024-06-27T11:47:27Z
2024-06-27T13:17:05Z
2024-06-27T13:17:05Z
severo
2,377,862,917
Elaborate a diagram that describes the queues/prioritization logic
it would be useful to discuss issues like https://github.com/huggingface/dataset-viewer/issues/2955
Elaborate a diagram that describes the queues/prioritization logic: it would be useful to discuss issues like https://github.com/huggingface/dataset-viewer/issues/2955
closed
2024-06-27T11:11:22Z
2024-06-27T15:22:19Z
2024-06-27T15:22:19Z
severo
2,377,860,215
prioritize jobs from trendy/important datasets
internal Slack discussion: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1719482620323589?thread_ts=1719418419.785649&cid=C04L6P8KNQ5 > prioritization of datasets should probably be based on some popularity signal like number of likes or traffic to the dataset page in the future
prioritize jobs from trendy/important datasets: internal Slack discussion: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1719482620323589?thread_ts=1719418419.785649&cid=C04L6P8KNQ5 > prioritization of datasets should probably be based on some popularity signal like number of likes or traffic to the dataset page in the future
open
2024-06-27T11:09:55Z
2024-07-30T16:28:58Z
null
severo
2,375,845,547
Smart update on all datasets
I did a bunch of test on https://huggingface.co/datasets/datasets-maintainers/test-smart-update and it works fine: edits to README.md unrelated to the `configs` don't trigger a full recomputation of the viewer :) I also tested with image data and they are correctly handled.
Smart update on all datasets: I did a bunch of test on https://huggingface.co/datasets/datasets-maintainers/test-smart-update and it works fine: edits to README.md unrelated to the `configs` don't trigger a full recomputation of the viewer :) I also tested with image data and they are correctly handled.
closed
2024-06-26T16:57:18Z
2024-06-26T17:00:02Z
2024-06-26T17:00:00Z
lhoestq
2,375,826,127
add /admin/blocked-datasets endpoint
I will use it in https://observablehq.com/@huggingface/datasets-server-jobs-queue to be able to understand better the current jobs.
add /admin/blocked-datasets endpoint: I will use it in https://observablehq.com/@huggingface/datasets-server-jobs-queue to be able to understand better the current jobs.
closed
2024-06-26T16:46:35Z
2024-06-26T19:20:12Z
2024-06-26T19:20:10Z
severo
2,375,558,903
No cache in smart update
...since it was causing ``` smart_update_dataset failed with PermissionError: [Errno 13] Permission denied: '/.cache' ``` I used the HfFileSystem instead to read the README.md files, since it doesn't use caching
No cache in smart update: ...since it was causing ``` smart_update_dataset failed with PermissionError: [Errno 13] Permission denied: '/.cache' ``` I used the HfFileSystem instead to read the README.md files, since it doesn't use caching
closed
2024-06-26T14:43:27Z
2024-06-26T16:05:23Z
2024-06-26T16:05:21Z
lhoestq
2,373,152,372
Add estimated_num_rows in openapi
null
Add estimated_num_rows in openapi:
closed
2024-06-25T16:46:42Z
2024-07-25T12:02:34Z
2024-07-25T12:02:33Z
lhoestq
2,373,113,430
add missing migration for estimated_num_rows
null
add missing migration for estimated_num_rows:
closed
2024-06-25T16:25:31Z
2024-06-25T16:28:44Z
2024-06-25T16:28:44Z
lhoestq
2,372,983,582
Ignore blocked datasets in WorkerSize metrics for auto scaling
Fix for https://github.com/huggingface/dataset-viewer/issues/2945
Ignore blocked datasets in WorkerSize metrics for auto scaling: Fix for https://github.com/huggingface/dataset-viewer/issues/2945
closed
2024-06-25T15:26:26Z
2024-06-26T16:23:57Z
2024-06-26T15:15:52Z
AndreaFrancis
2,371,365,933
Exclude blocked datasets from Job metrics
Fix for https://github.com/huggingface/dataset-viewer/issues/2945
Exclude blocked datasets from Job metrics: Fix for https://github.com/huggingface/dataset-viewer/issues/2945
closed
2024-06-25T00:38:56Z
2024-06-25T15:11:57Z
2024-06-25T15:11:56Z
AndreaFrancis
2,371,192,358
update indexes
I think the old ones will remain. I'll remove them manually... These two indexes have been proposed by mongo cloud. The reason is: https://github.com/huggingface/dataset-viewer/pull/2933/files#diff-4c951d0a5e21ef5c719bc392169f41e726461028dfd8e049778fedff37ba38c8R422
update indexes: I think the old ones will remain. I'll remove them manually... These two indexes have been proposed by mongo cloud. The reason is: https://github.com/huggingface/dataset-viewer/pull/2933/files#diff-4c951d0a5e21ef5c719bc392169f41e726461028dfd8e049778fedff37ba38c8R422
closed
2024-06-24T22:08:29Z
2024-06-24T22:11:13Z
2024-06-24T22:11:12Z
severo
2,370,640,346
add cudf to toctree
Follow up to https://github.com/huggingface/dataset-viewer/pull/2941 I realized the docs were not building in the CI: https://github.com/huggingface/dataset-viewer/actions/runs/9648374360/job/26609396615 Apologies for not checking this in my prior PR. ![Screenshot 2024-06-24 at 12 21 45 PM](https://github.com/huggingface/dataset-viewer/assets/17162724/9d7278b6-d00f-4795-93dc-259a08fb856b)
add cudf to toctree: Follow up to https://github.com/huggingface/dataset-viewer/pull/2941 I realized the docs were not building in the CI: https://github.com/huggingface/dataset-viewer/actions/runs/9648374360/job/26609396615 Apologies for not checking this in my prior PR. ![Screenshot 2024-06-24 at 12 21 45 PM](https://github.com/huggingface/dataset-viewer/assets/17162724/9d7278b6-d00f-4795-93dc-259a08fb856b)
closed
2024-06-24T16:22:08Z
2024-06-24T18:33:22Z
2024-06-24T18:30:42Z
raybellwaves
2,370,568,664
Add "blocked/not blocked" in job count metrics
Now that we block datasets, the job count metrics are a bit misleading, because they still include the jobs of blocked datasets. We need to be able to filter them out, because they are outside of the queue during the blockage.
Add "blocked/not blocked" in job count metrics: Now that we block datasets, the job count metrics are a bit misleading, because they still include the jobs of blocked datasets. We need to be able to filter them out, because they are outside of the queue during the blockage.
open
2024-06-24T15:44:48Z
2024-07-30T15:48:54Z
null
severo
2,370,259,585
Remove old get_df code
null
Remove old get_df code:
closed
2024-06-24T13:27:36Z
2024-06-24T14:38:04Z
2024-06-24T14:38:03Z
AndreaFrancis
2,370,217,564
Support `Sequence()` features in Croissant crumbs.
WIP: still checking that it works as intended on mlcroissant side.
Support `Sequence()` features in Croissant crumbs.: WIP: still checking that it works as intended on mlcroissant side.
closed
2024-06-24T13:08:51Z
2024-07-22T11:01:13Z
2024-07-22T11:00:38Z
marcenacp
2,369,842,186
Increase blockage duration
blocked for 1h -> 6h based on the last 6h -> 1h The effect should be to block more datasets, more quickly, and for a longer time. Currently, all the resources are still dedicated to datasets that are updated every xxx minutes.
Increase blockage duration: blocked for 1h -> 6h based on the last 6h -> 1h The effect should be to block more datasets, more quickly, and for a longer time. Currently, all the resources are still dedicated to datasets that are updated every xxx minutes.
closed
2024-06-24T10:20:46Z
2024-06-24T10:46:13Z
2024-06-24T10:46:12Z
severo
2,369,033,982
add cudf example
Firstly, thanks a lot for this project and hosting so many datasets! This PR add an example for how to read in data using cudf. This can be useful if you have access to a GPU and want to use the GPU to accelerate any ETL. The code works and can be testing in this google colab notebook: https://colab.research.google.com/drive/1KrhARlJMPoaIpHqE9X2fMQFcYTLffrem?usp=sharing
add cudf example: Firstly, thanks a lot for this project and hosting so many datasets! This PR add an example for how to read in data using cudf. This can be useful if you have access to a GPU and want to use the GPU to accelerate any ETL. The code works and can be testing in this google colab notebook: https://colab.research.google.com/drive/1KrhARlJMPoaIpHqE9X2fMQFcYTLffrem?usp=sharing
closed
2024-06-24T01:54:48Z
2024-06-24T15:45:57Z
2024-06-24T15:45:57Z
raybellwaves
2,366,797,233
Add num_rows estimate in hub_cache
Added estimated_num_rows to config-size, dataset-size, and I also updated `hub_cache` to use estimated_num_rows if possible as `num_rows` (this way no need to modify anything in moon) TODO - [x] mongodb migration of size jobs - [x] update tests - [x] support mix of partial and exact num_rows in config-size and dataset-size - [x] revert sylvain's suggestion to add "num_rows_source" that is actually not needed
Add num_rows estimate in hub_cache: Added estimated_num_rows to config-size, dataset-size, and I also updated `hub_cache` to use estimated_num_rows if possible as `num_rows` (this way no need to modify anything in moon) TODO - [x] mongodb migration of size jobs - [x] update tests - [x] support mix of partial and exact num_rows in config-size and dataset-size - [x] revert sylvain's suggestion to add "num_rows_source" that is actually not needed
closed
2024-06-21T15:41:31Z
2024-06-25T16:18:28Z
2024-06-25T16:12:54Z
lhoestq
2,366,734,798
fix flaky test gen_kwargs
null
fix flaky test gen_kwargs:
closed
2024-06-21T15:05:48Z
2024-06-21T15:08:18Z
2024-06-21T15:08:17Z
lhoestq
2,366,647,717
Do not keep DataFrames in memory in orchestrator classes
Do not keep unnecessary DataFrames in memory in orchestrator classes and instead forward them for use only in class instantiation or reset them after being forwarded. This PR is related to: - #2921
Do not keep DataFrames in memory in orchestrator classes: Do not keep unnecessary DataFrames in memory in orchestrator classes and instead forward them for use only in class instantiation or reset them after being forwarded. This PR is related to: - #2921
closed
2024-06-21T14:18:17Z
2024-07-29T15:05:03Z
2024-07-29T15:05:02Z
albertvillanova
2,366,247,400
Enable estimate info (size) on all datasets
TODO before merging: - [x] merge https://github.com/huggingface/dataset-viewer/pull/2932 - [x] test json in prod (allenai/c4 en.noblocklist: correct relative error of only 0.02%) - [x] test webdataset in prod (datasets-maintainers/small-publaynet-wds-10x: correct with relative error of only 0.07%)
Enable estimate info (size) on all datasets: TODO before merging: - [x] merge https://github.com/huggingface/dataset-viewer/pull/2932 - [x] test json in prod (allenai/c4 en.noblocklist: correct relative error of only 0.02%) - [x] test webdataset in prod (datasets-maintainers/small-publaynet-wds-10x: correct with relative error of only 0.07%)
closed
2024-06-21T10:33:27Z
2024-06-24T13:14:51Z
2024-06-21T13:25:22Z
lhoestq
2,366,097,076
Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability
Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability. This PR will close 14 Dependabot alerts.
Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability: Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability. This PR will close 14 Dependabot alerts.
closed
2024-06-21T09:14:26Z
2024-06-25T11:50:49Z
2024-06-25T11:50:49Z
albertvillanova
2,365,937,805
divide the rate-limit budget by 5
null
divide the rate-limit budget by 5:
closed
2024-06-21T07:46:31Z
2024-06-21T07:51:35Z
2024-06-21T07:51:34Z
severo
2,365,913,782
Update scikit-learn to 1.5.0 to fix vulnerability
Update scikit-learn to 1.5.0 to fix vulnerability. This will close 12 Dependabot alerts.
Update scikit-learn to 1.5.0 to fix vulnerability: Update scikit-learn to 1.5.0 to fix vulnerability. This will close 12 Dependabot alerts.
closed
2024-06-21T07:31:32Z
2024-06-21T08:21:29Z
2024-06-21T08:21:28Z
albertvillanova
2,364,810,195
create datasetBlockages collection + block datasets
We apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627). Follows #2931
create datasetBlockages collection + block datasets: We apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627). Follows #2931
closed
2024-06-20T16:12:52Z
2024-06-20T20:48:38Z
2024-06-20T20:48:37Z
severo
2,364,670,609
Fix estimate info for zip datasets
I simply had to track metadata reads only once to fix the estimator. (otherwise every file opened in a zip archive triggers an additional read of the metadata with the central directory of the zip file that prevents the estimator from converging) ex: locally and on only 100MB of parquet conversion (prod is 5GB), it estimates 47871 examples on https://huggingface.co/datasets/datasets-maintainers/test_many_zip_with_images_MSCOCO_zip (true value is exactly 50k) Unrelared, but I noticed that parquet conversion of zip files is super slooooooowww, we'll have to improve that at one point because it can surely take more than 40min to run. When tested in prod, an error happens before that though (JobManagerCrashedError) which suggests a termination by kubernetes.
Fix estimate info for zip datasets: I simply had to track metadata reads only once to fix the estimator. (otherwise every file opened in a zip archive triggers an additional read of the metadata with the central directory of the zip file that prevents the estimator from converging) ex: locally and on only 100MB of parquet conversion (prod is 5GB), it estimates 47871 examples on https://huggingface.co/datasets/datasets-maintainers/test_many_zip_with_images_MSCOCO_zip (true value is exactly 50k) Unrelared, but I noticed that parquet conversion of zip files is super slooooooowww, we'll have to improve that at one point because it can surely take more than 40min to run. When tested in prod, an error happens before that though (JobManagerCrashedError) which suggests a termination by kubernetes.
closed
2024-06-20T15:00:39Z
2024-06-21T13:13:05Z
2024-06-21T13:13:05Z
lhoestq
2,364,486,343
Create pastJobs collection
It will be used to apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627).
Create pastJobs collection: It will be used to apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627).
closed
2024-06-20T13:41:00Z
2024-06-20T20:24:49Z
2024-06-20T20:24:48Z
severo
2,364,286,392
[refactoring] split queue.py in 3 modules
To prepare https://github.com/huggingface/dataset-viewer/issues/2279
[refactoring] split queue.py in 3 modules: To prepare https://github.com/huggingface/dataset-viewer/issues/2279
closed
2024-06-20T12:11:09Z
2024-06-20T12:39:17Z
2024-06-20T12:39:16Z
severo
2,363,928,494
Use current priority for children jobs
When we change the priority of a job manually after it has started, we want the children jobs to use the same priority
Use current priority for children jobs: When we change the priority of a job manually after it has started, we want the children jobs to use the same priority
closed
2024-06-20T09:07:05Z
2024-06-20T09:18:09Z
2024-06-20T09:18:08Z
severo
2,363,156,688
FTS: Add specific stemmer for monolingual datasets
null
FTS: Add specific stemmer for monolingual datasets:
closed
2024-06-19T21:32:29Z
2024-06-26T14:14:54Z
2024-06-26T14:14:52Z
AndreaFrancis
2,363,092,971
Separate expected errors from unexpected ones in Grafana
We should never have: `UnexpectedError`, `PreviousStepFormatError` Also: some error should be temporary: `PreviousStepNotReady`, etc. Instead of one chart with the three kinds of errors, we should have: - one with the expected errors - one with the transitory errors - one with the unexpected errors By the way, I think we could create intermediary classes (`ExpectedError`, `UnexpectedError`, `TransitoryError`), so that it's clear which error pertains to which category
Separate expected errors from unexpected ones in Grafana: We should never have: `UnexpectedError`, `PreviousStepFormatError` Also: some error should be temporary: `PreviousStepNotReady`, etc. Instead of one chart with the three kinds of errors, we should have: - one with the expected errors - one with the transitory errors - one with the unexpected errors By the way, I think we could create intermediary classes (`ExpectedError`, `UnexpectedError`, `TransitoryError`), so that it's clear which error pertains to which category
open
2024-06-19T20:40:19Z
2024-06-19T20:40:28Z
null
severo
2,363,079,584
only raise error in config-is-valid if format is bad
Should remove the remaining PreviousStepFormatError: https://github.com/huggingface/dataset-viewer/issues/2433#issuecomment-2179014134
only raise error in config-is-valid if format is bad: Should remove the remaining PreviousStepFormatError: https://github.com/huggingface/dataset-viewer/issues/2433#issuecomment-2179014134
closed
2024-06-19T20:27:44Z
2024-06-20T09:17:53Z
2024-06-20T09:17:53Z
severo
2,362,863,359
Reorder and hide columns within dataset viewer
# Problem When doing some basic vibe checks for datasets, I realized that the order in which columns are shown might not always be useful for viewing and exploring the data. I might want to quickly show `chosen_model` next to `chosen_response` and `chosen_avg_rating` and continue exploring based on that. Also, this representation might different based on personal preference and search session. # Related I noticed an earlier discussion here https://github.com/huggingface/dataset-viewer/issues/2639 (thanks @davanstrien) but it does not fully align with that ideation. # Implementation I see it more as an on-the-fly UI thing than a configurable Yaml thing (proposed in #2639), similar to re-ordering a tab in a browser. Based on this, I can also see an option to potentially hide useless columns. # Example Based on [UltraFeedback](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences/viewer/default/train) <img width="1465" alt="image" src="https://github.com/huggingface/dataset-viewer/assets/25269220/aac80172-969f-45b6-8b9a-513ace626320">
Reorder and hide columns within dataset viewer : # Problem When doing some basic vibe checks for datasets, I realized that the order in which columns are shown might not always be useful for viewing and exploring the data. I might want to quickly show `chosen_model` next to `chosen_response` and `chosen_avg_rating` and continue exploring based on that. Also, this representation might different based on personal preference and search session. # Related I noticed an earlier discussion here https://github.com/huggingface/dataset-viewer/issues/2639 (thanks @davanstrien) but it does not fully align with that ideation. # Implementation I see it more as an on-the-fly UI thing than a configurable Yaml thing (proposed in #2639), similar to re-ordering a tab in a browser. Based on this, I can also see an option to potentially hide useless columns. # Example Based on [UltraFeedback](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences/viewer/default/train) <img width="1465" alt="image" src="https://github.com/huggingface/dataset-viewer/assets/25269220/aac80172-969f-45b6-8b9a-513ace626320">
open
2024-06-19T17:32:33Z
2024-06-19T20:43:39Z
null
davidberenstein1957
2,362,699,032
Delete canonical datasets
The cache still contains entries for canonical datasets that have been moved to their own namespace (see https://github.com/huggingface/dataset-viewer/issues/2478#issuecomment-2179018465 for a list). We must delete the cache entries (+ assets/cached assets/jobs)
Delete canonical datasets: The cache still contains entries for canonical datasets that have been moved to their own namespace (see https://github.com/huggingface/dataset-viewer/issues/2478#issuecomment-2179018465 for a list). We must delete the cache entries (+ assets/cached assets/jobs)
closed
2024-06-19T15:51:11Z
2024-07-30T16:29:59Z
2024-07-30T16:29:59Z
severo
2,362,660,660
Fix estimate info allow_list
null
Fix estimate info allow_list:
closed
2024-06-19T15:29:58Z
2024-06-19T15:30:07Z
2024-06-19T15:30:06Z
lhoestq
2,360,869,190
admin-ui: Do not mark gated datasets as error
In the admin-ui, some datasets have all the features working (is-valid) but are shown as errors. This is due to gated datasets; I just added another sign to identify those and let us know that the issue is unrelated to jobs not being correctly processed. Another question is if we should consider this dataset as part of the coverage metric since we need to know the value. Or maybe there is another way to automatically subscribe to the admin-UI user token for those datasets?
admin-ui: Do not mark gated datasets as error: In the admin-ui, some datasets have all the features working (is-valid) but are shown as errors. This is due to gated datasets; I just added another sign to identify those and let us know that the issue is unrelated to jobs not being correctly processed. Another question is if we should consider this dataset as part of the coverage metric since we need to know the value. Or maybe there is another way to automatically subscribe to the admin-UI user token for those datasets?
closed
2024-06-18T22:48:21Z
2024-06-19T14:43:09Z
2024-06-19T14:43:08Z
AndreaFrancis
2,360,258,601
Do not keep DataFrames in memory in State classes
Do not keep unnecessary DataFrames in memory in State classes and instead forward them for use only in class instantiation. This PR reduces memory use by avoiding keeping unnecessary DataFrames in all State classes. This PR supersedes (neither copies nor views are longer necessary): - #2903
Do not keep DataFrames in memory in State classes: Do not keep unnecessary DataFrames in memory in State classes and instead forward them for use only in class instantiation. This PR reduces memory use by avoiding keeping unnecessary DataFrames in all State classes. This PR supersedes (neither copies nor views are longer necessary): - #2903
closed
2024-06-18T16:27:41Z
2024-06-19T05:46:09Z
2024-06-19T05:46:08Z
albertvillanova
2,359,917,370
order the steps alphabetically
fixes #2917
order the steps alphabetically: fixes #2917
closed
2024-06-18T13:48:37Z
2024-06-18T13:48:53Z
2024-06-18T13:48:52Z
severo
2,359,907,706
Do not propagate error for is valid and hub cache
We always want to have a "status", even if some of the previous steps are not available.
Do not propagate error for is valid and hub cache: We always want to have a "status", even if some of the previous steps are not available.
closed
2024-06-18T13:44:32Z
2024-06-19T15:09:01Z
2024-06-19T15:09:00Z
severo
2,359,600,864
The "dataset-hub-cache" and "dataset-is-valid" steps should always return a value
For example, we detect that the dataset `nkp37/OpenVid-1M` has Image and Video modalities (steps `dataset-filetypes` and `dataset-modalities`), but because the datasets library fails to list the configs, the following steps also return an error.
The "dataset-hub-cache" and "dataset-is-valid" steps should always return a value: For example, we detect that the dataset `nkp37/OpenVid-1M` has Image and Video modalities (steps `dataset-filetypes` and `dataset-modalities`), but because the datasets library fails to list the configs, the following steps also return an error.
closed
2024-06-18T11:13:41Z
2024-06-19T15:09:01Z
2024-06-19T15:09:01Z
severo
2,359,593,459
admin UI: automatically fill the steps list
Currently, the step `dataset-filetypes` is absent from the job types list <img width="521" alt="Capture d’écran 2024-06-18 à 13 09 01" src="https://github.com/huggingface/dataset-viewer/assets/1676121/2fc31de2-63d9-46d2-81f8-cc79dc78e868"> The list should be created automatically from the processing graph + sorted alphabetically
admin UI: automatically fill the steps list: Currently, the step `dataset-filetypes` is absent from the job types list <img width="521" alt="Capture d’écran 2024-06-18 à 13 09 01" src="https://github.com/huggingface/dataset-viewer/assets/1676121/2fc31de2-63d9-46d2-81f8-cc79dc78e868"> The list should be created automatically from the processing graph + sorted alphabetically
closed
2024-06-18T11:09:31Z
2024-06-18T13:49:31Z
2024-06-18T13:48:53Z
severo
2,359,570,130
[modality detection] One image in the repo -> Image modality
See https://huggingface.co/datasets/BAAI/Infinity-Instruct/tree/main <img width="465" alt="Capture d’écran 2024-06-18 à 12 55 35" src="https://github.com/huggingface/dataset-viewer/assets/1676121/c00b63a2-f1bd-4957-b56e-425c0a99f149"> <img width="469" alt="Capture d’écran 2024-06-18 à 12 55 49" src="https://github.com/huggingface/dataset-viewer/assets/1676121/4397834e-893f-4c28-b212-948504430c40"> It contains 5 diagram images unrelated to the dataset itself, which is a JSONL file. But the presence of these images triggers the Image modality. <img width="614" alt="Capture d’écran 2024-06-18 à 12 56 55" src="https://github.com/huggingface/dataset-viewer/assets/1676121/312181d9-334f-45af-a4e9-c5dc9371fd64">
[modality detection] One image in the repo -> Image modality: See https://huggingface.co/datasets/BAAI/Infinity-Instruct/tree/main <img width="465" alt="Capture d’écran 2024-06-18 à 12 55 35" src="https://github.com/huggingface/dataset-viewer/assets/1676121/c00b63a2-f1bd-4957-b56e-425c0a99f149"> <img width="469" alt="Capture d’écran 2024-06-18 à 12 55 49" src="https://github.com/huggingface/dataset-viewer/assets/1676121/4397834e-893f-4c28-b212-948504430c40"> It contains 5 diagram images unrelated to the dataset itself, which is a JSONL file. But the presence of these images triggers the Image modality. <img width="614" alt="Capture d’écran 2024-06-18 à 12 56 55" src="https://github.com/huggingface/dataset-viewer/assets/1676121/312181d9-334f-45af-a4e9-c5dc9371fd64">
open
2024-06-18T10:57:14Z
2024-06-18T10:57:21Z
null
severo
2,358,366,100
Bump urllib3 from 2.0.7 to 2.2.2 in /docs in the pip group across 1 directory
Bumps the pip group with 1 update in the /docs directory: [urllib3](https://github.com/urllib3/urllib3). Updates `urllib3` from 2.0.7 to 2.2.2 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p> <blockquote> <h2>2.2.2</h2> <h2>🚀 urllib3 is fundraising for HTTP/2 support</h2> <p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p> <p>Thank you for your support.</p> <h2>Changes</h2> <ul> <li>Added the <code>Proxy-Authorization</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>.</li> <li>Allowed passing negative integers as <code>amt</code> to read methods of <code>http.client.HTTPResponse</code> as an alternative to <code>None</code>. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3122">#3122</a>)</li> <li>Fixed return types representing copying actions to use <code>typing.Self</code>. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3363">#3363</a>)</li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/urllib3/urllib3/compare/2.2.1...2.2.2">https://github.com/urllib3/urllib3/compare/2.2.1...2.2.2</a></p> <h2>2.2.1</h2> <h2>🚀 urllib3 is fundraising for HTTP/2 support</h2> <p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p> <p>Thank you for your support.</p> <h2>Changes</h2> <ul> <li>Fixed issue where <code>InsecureRequestWarning</code> was emitted for HTTPS connections when using Emscripten. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3331">#3331</a>)</li> <li>Fixed <code>HTTPConnectionPool.urlopen</code> to stop automatically casting non-proxy headers to <code>HTTPHeaderDict</code>. This change was premature as it did not apply to proxy headers and <code>HTTPHeaderDict</code> does not handle byte header values correctly yet. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3343">#3343</a>)</li> <li>Changed <code>ProtocolError</code> to <code>InvalidChunkLength</code> when response terminates before the chunk length is sent. (<a href="https://redirect.github.com/urllib3/urllib3/issues/2860">#2860</a>)</li> <li>Changed <code>ProtocolError</code> to be more verbose on incomplete reads with excess content. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3261">#3261</a>)</li> </ul> <h2>2.2.0</h2> <h2>🖥️ urllib3 now works in the browser</h2> <p>:tada: <strong>This release adds experimental support for <a href="https://urllib3.readthedocs.io/en/stable/reference/contrib/emscripten.html">using urllib3 in the browser with Pyodide</a>!</strong> :tada:</p> <p>Thanks to Joe Marshall (<a href="https://github.com/joemarshall"><code>@​joemarshall</code></a>) for contributing this feature. This change was possible thanks to work done in urllib3 v2.0 to detach our API from <code>http.client</code>. Please report all bugs to the <a href="https://github.com/urllib3/urllib3/issues">urllib3 issue tracker</a>.</p> <h2>🚀 urllib3 is fundraising for HTTP/2 support</h2> <p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p> <p>Thank you for your support.</p> <h2>Changes</h2> <ul> <li>Added support for <a href="https://urllib3.readthedocs.io/en/latest/reference/contrib/emscripten.html">Emscripten and Pyodide</a>, including streaming support in cross-origin isolated browser environments where threading is enabled. (<a href="https://redirect.github.com/urllib3/urllib3/issues/2951">#2951</a>)</li> <li>Added support for <code>HTTPResponse.read1()</code> method. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3186">#3186</a>)</li> <li>Added rudimentary support for HTTP/2. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3284">#3284</a>)</li> <li>Fixed issue where requests against urls with trailing dots were failing due to SSL errors when using proxy. (<a href="https://redirect.github.com/urllib3/urllib3/issues/2244">#2244</a>)</li> <li>Fixed <code>HTTPConnection.proxy_is_verified</code> and <code>HTTPSConnection.proxy_is_verified</code> to be always set to a boolean after connecting to a proxy. It could be <code>None</code> in some cases previously. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3130">#3130</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p> <blockquote> <h1>2.2.2 (2024-06-17)</h1> <ul> <li>Added the <code>Proxy-Authorization</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>.</li> <li>Allowed passing negative integers as <code>amt</code> to read methods of <code>http.client.HTTPResponse</code> as an alternative to <code>None</code>. (<code>[#3122](https://github.com/urllib3/urllib3/issues/3122) &lt;https://github.com/urllib3/urllib3/issues/3122&gt;</code>__)</li> <li>Fixed return types representing copying actions to use <code>typing.Self</code>. (<code>[#3363](https://github.com/urllib3/urllib3/issues/3363) &lt;https://github.com/urllib3/urllib3/issues/3363&gt;</code>__)</li> </ul> <h1>2.2.1 (2024-02-16)</h1> <ul> <li>Fixed issue where <code>InsecureRequestWarning</code> was emitted for HTTPS connections when using Emscripten. (<code>[#3331](https://github.com/urllib3/urllib3/issues/3331) &lt;https://github.com/urllib3/urllib3/issues/3331&gt;</code>__)</li> <li>Fixed <code>HTTPConnectionPool.urlopen</code> to stop automatically casting non-proxy headers to <code>HTTPHeaderDict</code>. This change was premature as it did not apply to proxy headers and <code>HTTPHeaderDict</code> does not handle byte header values correctly yet. (<code>[#3343](https://github.com/urllib3/urllib3/issues/3343) &lt;https://github.com/urllib3/urllib3/issues/3343&gt;</code>__)</li> <li>Changed <code>InvalidChunkLength</code> to <code>ProtocolError</code> when response terminates before the chunk length is sent. (<code>[#2860](https://github.com/urllib3/urllib3/issues/2860) &lt;https://github.com/urllib3/urllib3/issues/2860&gt;</code>__)</li> <li>Changed <code>ProtocolError</code> to be more verbose on incomplete reads with excess content. (<code>[#3261](https://github.com/urllib3/urllib3/issues/3261) &lt;https://github.com/urllib3/urllib3/issues/3261&gt;</code>__)</li> </ul> <h1>2.2.0 (2024-01-30)</h1> <ul> <li>Added support for <code>Emscripten and Pyodide &lt;https://urllib3.readthedocs.io/en/latest/reference/contrib/emscripten.html&gt;</code><strong>, including streaming support in cross-origin isolated browser environments where threading is enabled. (<code>[#2951](https://github.com/urllib3/urllib3/issues/2951) &lt;https://github.com/urllib3/urllib3/issues/2951&gt;</code></strong>)</li> <li>Added support for <code>HTTPResponse.read1()</code> method. (<code>[#3186](https://github.com/urllib3/urllib3/issues/3186) &lt;https://github.com/urllib3/urllib3/issues/3186&gt;</code>__)</li> <li>Added rudimentary support for HTTP/2. (<code>[#3284](https://github.com/urllib3/urllib3/issues/3284) &lt;https://github.com/urllib3/urllib3/issues/3284&gt;</code>__)</li> <li>Fixed issue where requests against urls with trailing dots were failing due to SSL errors when using proxy. (<code>[#2244](https://github.com/urllib3/urllib3/issues/2244) &lt;https://github.com/urllib3/urllib3/issues/2244&gt;</code>__)</li> <li>Fixed <code>HTTPConnection.proxy_is_verified</code> and <code>HTTPSConnection.proxy_is_verified</code> to be always set to a boolean after connecting to a proxy. It could be <code>None</code> in some cases previously. (<code>[#3130](https://github.com/urllib3/urllib3/issues/3130) &lt;https://github.com/urllib3/urllib3/issues/3130&gt;</code>__)</li> <li>Fixed an issue where <code>headers</code> passed in a request with <code>json=</code> would be mutated (<code>[#3203](https://github.com/urllib3/urllib3/issues/3203) &lt;https://github.com/urllib3/urllib3/issues/3203&gt;</code>__)</li> <li>Fixed <code>HTTPSConnection.is_verified</code> to be set to <code>False</code> when connecting from a HTTPS proxy to an HTTP target. It was set to <code>True</code> previously. (<code>[#3267](https://github.com/urllib3/urllib3/issues/3267) &lt;https://github.com/urllib3/urllib3/issues/3267&gt;</code>__)</li> <li>Fixed handling of new error message from OpenSSL 3.2.0 when configuring an HTTP proxy as HTTPS (<code>[#3268](https://github.com/urllib3/urllib3/issues/3268) &lt;https://github.com/urllib3/urllib3/issues/3268&gt;</code>__)</li> <li>Fixed TLS 1.3 post-handshake auth when the server certificate validation is disabled (<code>[#3325](https://github.com/urllib3/urllib3/issues/3325) &lt;https://github.com/urllib3/urllib3/issues/3325&gt;</code>__)</li> <li>Note for downstream distributors: To run integration tests, you now need to run the tests a second time with the <code>--integration</code> pytest flag. (<code>[#3181](https://github.com/urllib3/urllib3/issues/3181) &lt;https://github.com/urllib3/urllib3/issues/3181&gt;</code>__)</li> </ul> <h1>2.1.0 (2023-11-13)</h1> <ul> <li>Removed support for the deprecated urllib3[secure] extra. (<code>[#2680](https://github.com/urllib3/urllib3/issues/2680) &lt;https://github.com/urllib3/urllib3/issues/2680&gt;</code>__)</li> <li>Removed support for the deprecated SecureTransport TLS implementation. (<code>[#2681](https://github.com/urllib3/urllib3/issues/2681) &lt;https://github.com/urllib3/urllib3/issues/2681&gt;</code>__)</li> <li>Removed support for the end-of-life Python 3.7. (<code>[#3143](https://github.com/urllib3/urllib3/issues/3143) &lt;https://github.com/urllib3/urllib3/issues/3143&gt;</code>__)</li> <li>Allowed loading CA certificates from memory for proxies. (<code>[#3065](https://github.com/urllib3/urllib3/issues/3065) &lt;https://github.com/urllib3/urllib3/issues/3065&gt;</code>__)</li> <li>Fixed decoding Gzip-encoded responses which specified <code>x-gzip</code> content-encoding. (<code>[#3174](https://github.com/urllib3/urllib3/issues/3174) &lt;https://github.com/urllib3/urllib3/issues/3174&gt;</code>__)</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/urllib3/urllib3/commit/27e2a5c5a7ab6a517252cc8dcef3ffa6ffb8f61a"><code>27e2a5c</code></a> Release 2.2.2 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3406">#3406</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/accff72ecc2f6cf5a76d9570198a93ac7c90270e"><code>accff72</code></a> Merge pull request from GHSA-34jh-p97f-mpxf</li> <li><a href="https://github.com/urllib3/urllib3/commit/34be4a57e59eb7365bcc37d52e9f8271b5b8d0d3"><code>34be4a5</code></a> Pin CFFI to a new release candidate instead of a Git commit (<a href="https://redirect.github.com/urllib3/urllib3/issues/3398">#3398</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/da410581b6b3df73da976b5ce5eb20a4bd030437"><code>da41058</code></a> Bump browser-actions/setup-chrome from 1.6.0 to 1.7.1 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3399">#3399</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/b07a669bd970d69847801148286b726f0570b625"><code>b07a669</code></a> Bump github/codeql-action from 2.13.4 to 3.25.6 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3396">#3396</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/b8589ec9f8c4da91511e601b632ac06af7e7c10e"><code>b8589ec</code></a> Measure coverage with v4 of artifact actions (<a href="https://redirect.github.com/urllib3/urllib3/issues/3394">#3394</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/f3bdc5585111429e22c81b5fb26c3ec164d98b81"><code>f3bdc55</code></a> Allow triggering CI manually (<a href="https://redirect.github.com/urllib3/urllib3/issues/3391">#3391</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/52392654b30183129cf3ec06010306f517d9c146"><code>5239265</code></a> Fix HTTP version in debug log (<a href="https://redirect.github.com/urllib3/urllib3/issues/3316">#3316</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/b34619f94ece0c40e691a5aaf1304953d88089de"><code>b34619f</code></a> Bump actions/checkout to 4.1.4 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3387">#3387</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/9961d14de7c920091d42d42ed76d5d479b80064d"><code>9961d14</code></a> Bump browser-actions/setup-chrome from 1.5.0 to 1.6.0 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3386">#3386</a>)</li> <li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/2.0.7...2.2.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=2.0.7&new-version=2.2.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/dataset-viewer/network/alerts). </details>
Bump urllib3 from 2.0.7 to 2.2.2 in /docs in the pip group across 1 directory: Bumps the pip group with 1 update in the /docs directory: [urllib3](https://github.com/urllib3/urllib3). Updates `urllib3` from 2.0.7 to 2.2.2 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p> <blockquote> <h2>2.2.2</h2> <h2>🚀 urllib3 is fundraising for HTTP/2 support</h2> <p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p> <p>Thank you for your support.</p> <h2>Changes</h2> <ul> <li>Added the <code>Proxy-Authorization</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>.</li> <li>Allowed passing negative integers as <code>amt</code> to read methods of <code>http.client.HTTPResponse</code> as an alternative to <code>None</code>. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3122">#3122</a>)</li> <li>Fixed return types representing copying actions to use <code>typing.Self</code>. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3363">#3363</a>)</li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/urllib3/urllib3/compare/2.2.1...2.2.2">https://github.com/urllib3/urllib3/compare/2.2.1...2.2.2</a></p> <h2>2.2.1</h2> <h2>🚀 urllib3 is fundraising for HTTP/2 support</h2> <p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p> <p>Thank you for your support.</p> <h2>Changes</h2> <ul> <li>Fixed issue where <code>InsecureRequestWarning</code> was emitted for HTTPS connections when using Emscripten. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3331">#3331</a>)</li> <li>Fixed <code>HTTPConnectionPool.urlopen</code> to stop automatically casting non-proxy headers to <code>HTTPHeaderDict</code>. This change was premature as it did not apply to proxy headers and <code>HTTPHeaderDict</code> does not handle byte header values correctly yet. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3343">#3343</a>)</li> <li>Changed <code>ProtocolError</code> to <code>InvalidChunkLength</code> when response terminates before the chunk length is sent. (<a href="https://redirect.github.com/urllib3/urllib3/issues/2860">#2860</a>)</li> <li>Changed <code>ProtocolError</code> to be more verbose on incomplete reads with excess content. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3261">#3261</a>)</li> </ul> <h2>2.2.0</h2> <h2>🖥️ urllib3 now works in the browser</h2> <p>:tada: <strong>This release adds experimental support for <a href="https://urllib3.readthedocs.io/en/stable/reference/contrib/emscripten.html">using urllib3 in the browser with Pyodide</a>!</strong> :tada:</p> <p>Thanks to Joe Marshall (<a href="https://github.com/joemarshall"><code>@​joemarshall</code></a>) for contributing this feature. This change was possible thanks to work done in urllib3 v2.0 to detach our API from <code>http.client</code>. Please report all bugs to the <a href="https://github.com/urllib3/urllib3/issues">urllib3 issue tracker</a>.</p> <h2>🚀 urllib3 is fundraising for HTTP/2 support</h2> <p><a href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3 is raising ~$40,000 USD</a> to release HTTP/2 support and ensure long-term sustainable maintenance of the project after a sharp decline in financial support for 2023. If your company or organization uses Python and would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and thousands of other projects <a href="https://opencollective.com/urllib3">please consider contributing financially</a> to ensure HTTP/2 support is developed sustainably and maintained for the long-haul.</p> <p>Thank you for your support.</p> <h2>Changes</h2> <ul> <li>Added support for <a href="https://urllib3.readthedocs.io/en/latest/reference/contrib/emscripten.html">Emscripten and Pyodide</a>, including streaming support in cross-origin isolated browser environments where threading is enabled. (<a href="https://redirect.github.com/urllib3/urllib3/issues/2951">#2951</a>)</li> <li>Added support for <code>HTTPResponse.read1()</code> method. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3186">#3186</a>)</li> <li>Added rudimentary support for HTTP/2. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3284">#3284</a>)</li> <li>Fixed issue where requests against urls with trailing dots were failing due to SSL errors when using proxy. (<a href="https://redirect.github.com/urllib3/urllib3/issues/2244">#2244</a>)</li> <li>Fixed <code>HTTPConnection.proxy_is_verified</code> and <code>HTTPSConnection.proxy_is_verified</code> to be always set to a boolean after connecting to a proxy. It could be <code>None</code> in some cases previously. (<a href="https://redirect.github.com/urllib3/urllib3/issues/3130">#3130</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p> <blockquote> <h1>2.2.2 (2024-06-17)</h1> <ul> <li>Added the <code>Proxy-Authorization</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>.</li> <li>Allowed passing negative integers as <code>amt</code> to read methods of <code>http.client.HTTPResponse</code> as an alternative to <code>None</code>. (<code>[#3122](https://github.com/urllib3/urllib3/issues/3122) &lt;https://github.com/urllib3/urllib3/issues/3122&gt;</code>__)</li> <li>Fixed return types representing copying actions to use <code>typing.Self</code>. (<code>[#3363](https://github.com/urllib3/urllib3/issues/3363) &lt;https://github.com/urllib3/urllib3/issues/3363&gt;</code>__)</li> </ul> <h1>2.2.1 (2024-02-16)</h1> <ul> <li>Fixed issue where <code>InsecureRequestWarning</code> was emitted for HTTPS connections when using Emscripten. (<code>[#3331](https://github.com/urllib3/urllib3/issues/3331) &lt;https://github.com/urllib3/urllib3/issues/3331&gt;</code>__)</li> <li>Fixed <code>HTTPConnectionPool.urlopen</code> to stop automatically casting non-proxy headers to <code>HTTPHeaderDict</code>. This change was premature as it did not apply to proxy headers and <code>HTTPHeaderDict</code> does not handle byte header values correctly yet. (<code>[#3343](https://github.com/urllib3/urllib3/issues/3343) &lt;https://github.com/urllib3/urllib3/issues/3343&gt;</code>__)</li> <li>Changed <code>InvalidChunkLength</code> to <code>ProtocolError</code> when response terminates before the chunk length is sent. (<code>[#2860](https://github.com/urllib3/urllib3/issues/2860) &lt;https://github.com/urllib3/urllib3/issues/2860&gt;</code>__)</li> <li>Changed <code>ProtocolError</code> to be more verbose on incomplete reads with excess content. (<code>[#3261](https://github.com/urllib3/urllib3/issues/3261) &lt;https://github.com/urllib3/urllib3/issues/3261&gt;</code>__)</li> </ul> <h1>2.2.0 (2024-01-30)</h1> <ul> <li>Added support for <code>Emscripten and Pyodide &lt;https://urllib3.readthedocs.io/en/latest/reference/contrib/emscripten.html&gt;</code><strong>, including streaming support in cross-origin isolated browser environments where threading is enabled. (<code>[#2951](https://github.com/urllib3/urllib3/issues/2951) &lt;https://github.com/urllib3/urllib3/issues/2951&gt;</code></strong>)</li> <li>Added support for <code>HTTPResponse.read1()</code> method. (<code>[#3186](https://github.com/urllib3/urllib3/issues/3186) &lt;https://github.com/urllib3/urllib3/issues/3186&gt;</code>__)</li> <li>Added rudimentary support for HTTP/2. (<code>[#3284](https://github.com/urllib3/urllib3/issues/3284) &lt;https://github.com/urllib3/urllib3/issues/3284&gt;</code>__)</li> <li>Fixed issue where requests against urls with trailing dots were failing due to SSL errors when using proxy. (<code>[#2244](https://github.com/urllib3/urllib3/issues/2244) &lt;https://github.com/urllib3/urllib3/issues/2244&gt;</code>__)</li> <li>Fixed <code>HTTPConnection.proxy_is_verified</code> and <code>HTTPSConnection.proxy_is_verified</code> to be always set to a boolean after connecting to a proxy. It could be <code>None</code> in some cases previously. (<code>[#3130](https://github.com/urllib3/urllib3/issues/3130) &lt;https://github.com/urllib3/urllib3/issues/3130&gt;</code>__)</li> <li>Fixed an issue where <code>headers</code> passed in a request with <code>json=</code> would be mutated (<code>[#3203](https://github.com/urllib3/urllib3/issues/3203) &lt;https://github.com/urllib3/urllib3/issues/3203&gt;</code>__)</li> <li>Fixed <code>HTTPSConnection.is_verified</code> to be set to <code>False</code> when connecting from a HTTPS proxy to an HTTP target. It was set to <code>True</code> previously. (<code>[#3267](https://github.com/urllib3/urllib3/issues/3267) &lt;https://github.com/urllib3/urllib3/issues/3267&gt;</code>__)</li> <li>Fixed handling of new error message from OpenSSL 3.2.0 when configuring an HTTP proxy as HTTPS (<code>[#3268](https://github.com/urllib3/urllib3/issues/3268) &lt;https://github.com/urllib3/urllib3/issues/3268&gt;</code>__)</li> <li>Fixed TLS 1.3 post-handshake auth when the server certificate validation is disabled (<code>[#3325](https://github.com/urllib3/urllib3/issues/3325) &lt;https://github.com/urllib3/urllib3/issues/3325&gt;</code>__)</li> <li>Note for downstream distributors: To run integration tests, you now need to run the tests a second time with the <code>--integration</code> pytest flag. (<code>[#3181](https://github.com/urllib3/urllib3/issues/3181) &lt;https://github.com/urllib3/urllib3/issues/3181&gt;</code>__)</li> </ul> <h1>2.1.0 (2023-11-13)</h1> <ul> <li>Removed support for the deprecated urllib3[secure] extra. (<code>[#2680](https://github.com/urllib3/urllib3/issues/2680) &lt;https://github.com/urllib3/urllib3/issues/2680&gt;</code>__)</li> <li>Removed support for the deprecated SecureTransport TLS implementation. (<code>[#2681](https://github.com/urllib3/urllib3/issues/2681) &lt;https://github.com/urllib3/urllib3/issues/2681&gt;</code>__)</li> <li>Removed support for the end-of-life Python 3.7. (<code>[#3143](https://github.com/urllib3/urllib3/issues/3143) &lt;https://github.com/urllib3/urllib3/issues/3143&gt;</code>__)</li> <li>Allowed loading CA certificates from memory for proxies. (<code>[#3065](https://github.com/urllib3/urllib3/issues/3065) &lt;https://github.com/urllib3/urllib3/issues/3065&gt;</code>__)</li> <li>Fixed decoding Gzip-encoded responses which specified <code>x-gzip</code> content-encoding. (<code>[#3174](https://github.com/urllib3/urllib3/issues/3174) &lt;https://github.com/urllib3/urllib3/issues/3174&gt;</code>__)</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/urllib3/urllib3/commit/27e2a5c5a7ab6a517252cc8dcef3ffa6ffb8f61a"><code>27e2a5c</code></a> Release 2.2.2 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3406">#3406</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/accff72ecc2f6cf5a76d9570198a93ac7c90270e"><code>accff72</code></a> Merge pull request from GHSA-34jh-p97f-mpxf</li> <li><a href="https://github.com/urllib3/urllib3/commit/34be4a57e59eb7365bcc37d52e9f8271b5b8d0d3"><code>34be4a5</code></a> Pin CFFI to a new release candidate instead of a Git commit (<a href="https://redirect.github.com/urllib3/urllib3/issues/3398">#3398</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/da410581b6b3df73da976b5ce5eb20a4bd030437"><code>da41058</code></a> Bump browser-actions/setup-chrome from 1.6.0 to 1.7.1 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3399">#3399</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/b07a669bd970d69847801148286b726f0570b625"><code>b07a669</code></a> Bump github/codeql-action from 2.13.4 to 3.25.6 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3396">#3396</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/b8589ec9f8c4da91511e601b632ac06af7e7c10e"><code>b8589ec</code></a> Measure coverage with v4 of artifact actions (<a href="https://redirect.github.com/urllib3/urllib3/issues/3394">#3394</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/f3bdc5585111429e22c81b5fb26c3ec164d98b81"><code>f3bdc55</code></a> Allow triggering CI manually (<a href="https://redirect.github.com/urllib3/urllib3/issues/3391">#3391</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/52392654b30183129cf3ec06010306f517d9c146"><code>5239265</code></a> Fix HTTP version in debug log (<a href="https://redirect.github.com/urllib3/urllib3/issues/3316">#3316</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/b34619f94ece0c40e691a5aaf1304953d88089de"><code>b34619f</code></a> Bump actions/checkout to 4.1.4 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3387">#3387</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/9961d14de7c920091d42d42ed76d5d479b80064d"><code>9961d14</code></a> Bump browser-actions/setup-chrome from 1.5.0 to 1.6.0 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3386">#3386</a>)</li> <li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/2.0.7...2.2.2">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=2.0.7&new-version=2.2.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/dataset-viewer/network/alerts). </details>
closed
2024-06-17T22:17:57Z
2024-07-27T15:04:09Z
2024-07-27T15:04:01Z
dependabot[bot]
2,356,806,606
Prevents viewer from being pinged for the datasets on both leaderboard orgs
null
Prevents viewer from being pinged for the datasets on both leaderboard orgs:
closed
2024-06-17T09:05:22Z
2024-06-17T09:06:18Z
2024-06-17T09:06:17Z
clefourrier
2,353,738,090
dataset-filetypes is not a small step
It has to do HTTP requests, it can manage big objects, and it depends on `datasets`. So... it might be better to process it with "medium" workers, not "light" ones that generally only transform cached entries.
dataset-filetypes is not a small step: It has to do HTTP requests, it can manage big objects, and it depends on `datasets`. So... it might be better to process it with "medium" workers, not "light" ones that generally only transform cached entries.
closed
2024-06-14T16:43:33Z
2024-06-14T17:05:45Z
2024-06-14T17:05:44Z
severo
2,353,447,269
Additional modalities detection
- [x] detect tabular - [x] detect timeseries
Additional modalities detection: - [x] detect tabular - [x] detect timeseries
closed
2024-06-14T13:58:17Z
2024-06-14T17:05:56Z
2024-06-14T17:05:56Z
severo
2,352,814,842
feat(chart): auto deploy when secrets change
Will automatically redeploy applications when secrets are changed in Infisical (max 1 min after the change).
feat(chart): auto deploy when secrets change: Will automatically redeploy applications when secrets are changed in Infisical (max 1 min after the change).
closed
2024-06-14T08:16:54Z
2024-06-26T08:20:17Z
2024-06-26T08:20:16Z
rtrompier
2,351,595,836
fix: extensions are always lowercase
followup to #2905
fix: extensions are always lowercase: followup to #2905
closed
2024-06-13T16:41:28Z
2024-06-13T20:48:49Z
2024-06-13T20:48:48Z
severo
2,351,457,901
Detect dataset modalities using dataset-filetypes
See #2898
Detect dataset modalities using dataset-filetypes: See #2898
closed
2024-06-13T15:34:21Z
2024-06-14T17:18:51Z
2024-06-14T17:18:50Z
severo
2,351,217,562
Add `started_at` field to cached response documents
Will close https://github.com/huggingface/dataset-viewer/issues/2892 To pass `started_at` field from Job to CachedResponse, I updated `JobInfo` class so that it stores `started_at` info too. It is None by default (and set to actual time by `Queue._start_newest_job_and_delete_others()`). Maybe there is better way to pass this information? I think alternatively I can get a new timestamp inside worker's `Loop.process_next_job` and pass it to `finish` to avoid changing dtos, but I don't know if it's better.
Add `started_at` field to cached response documents: Will close https://github.com/huggingface/dataset-viewer/issues/2892 To pass `started_at` field from Job to CachedResponse, I updated `JobInfo` class so that it stores `started_at` info too. It is None by default (and set to actual time by `Queue._start_newest_job_and_delete_others()`). Maybe there is better way to pass this information? I think alternatively I can get a new timestamp inside worker's `Loop.process_next_job` and pass it to `finish` to avoid changing dtos, but I don't know if it's better.
closed
2024-06-13T13:51:53Z
2024-07-04T15:29:08Z
2024-07-04T15:29:08Z
polinaeterna
2,349,106,855
Move secrets to Infisical
null
Move secrets to Infisical:
closed
2024-06-12T15:44:59Z
2024-06-13T18:17:14Z
2024-06-13T18:17:14Z
rtrompier
2,349,078,160
[Config-parquet-and-info] Compute estimated dataset info
This will be useful to show the estimate number of rows of datasets that are partially converted to Parquet I added `estimated_dataset_info` to the` parquet-and-info` response. It contains estimations of: - download_size - num_bytes - num_examples Then we'll be able to propagate this info to the `size` jobs and then to `hub-cache`. I'll run it on some datasets to check if it works fine and when it's ok I'll re-run the jobs for datasets for which we need to estimate the number of rows. TODO: - [x] compute sizes of non-read files - [x] test with compressed files - [x] test with zip archives - [x] add track_read() test (will add before merging)
[Config-parquet-and-info] Compute estimated dataset info: This will be useful to show the estimate number of rows of datasets that are partially converted to Parquet I added `estimated_dataset_info` to the` parquet-and-info` response. It contains estimations of: - download_size - num_bytes - num_examples Then we'll be able to propagate this info to the `size` jobs and then to `hub-cache`. I'll run it on some datasets to check if it works fine and when it's ok I'll re-run the jobs for datasets for which we need to estimate the number of rows. TODO: - [x] compute sizes of non-read files - [x] test with compressed files - [x] test with zip archives - [x] add track_read() test (will add before merging)
closed
2024-06-12T15:31:57Z
2024-06-19T13:57:05Z
2024-06-19T13:57:04Z
lhoestq
2,348,299,148
Add step dataset-filetypes
needed for https://github.com/huggingface/dataset-viewer/issues/2898#issuecomment-2162460762
Add step dataset-filetypes: needed for https://github.com/huggingface/dataset-viewer/issues/2898#issuecomment-2162460762
closed
2024-06-12T09:37:27Z
2024-06-13T15:13:07Z
2024-06-13T15:13:06Z
severo
2,348,186,694
Fix skipped async tests caused by pytest-memray
Fix skipped async tests caused by pytest-memray: do not pass memray argument to pytest. Fix #2901. Reported underlying issue in pytest-memray: - https://github.com/bloomberg/pytest-memray/issues/119
Fix skipped async tests caused by pytest-memray: Fix skipped async tests caused by pytest-memray: do not pass memray argument to pytest. Fix #2901. Reported underlying issue in pytest-memray: - https://github.com/bloomberg/pytest-memray/issues/119
closed
2024-06-12T08:46:37Z
2024-06-17T13:19:44Z
2024-06-12T11:25:18Z
albertvillanova
2,348,166,719
Pass copies of DataFrames instead of views
As the memory leak may be caused by improperly de-referenced objects, better pass copies of DataFrames instead of views. In a subsequent PR I could try to optimize memory usage by not storing unnecessary data.
Pass copies of DataFrames instead of views: As the memory leak may be caused by improperly de-referenced objects, better pass copies of DataFrames instead of views. In a subsequent PR I could try to optimize memory usage by not storing unnecessary data.
closed
2024-06-12T08:37:00Z
2024-06-20T07:59:38Z
2024-06-20T07:59:38Z
albertvillanova
2,347,919,987
Minor fix id with length of str dataset name
This is a minor fix of some Tasks ids containing the length of of the string dataset name. I discovered this while investigating the memory leak issue.
Minor fix id with length of str dataset name: This is a minor fix of some Tasks ids containing the length of of the string dataset name. I discovered this while investigating the memory leak issue.
closed
2024-06-12T06:30:18Z
2024-06-12T08:04:27Z
2024-06-12T08:03:44Z
albertvillanova
2,347,790,772
Async tests using anyio are skipped after including pytest-memray
Async tests using anyio are currently skipped. See: https://github.com/huggingface/dataset-viewer/actions/runs/9464138411/job/26070809625 ``` tests/test_authentication.py ssssssssssssssssssssssssssssssssssssssssssssss =============================== warnings summary =============================== /home/runner/.cache/pypoetry/virtualenvs/libapi-QfqNi3gs-py3.9/lib/python3.9/site-packages/_pytest/python.py:151: PytestUnhandledCoroutineWarning: async def functions are not natively supported and have been skipped. You need to install a suitable plugin for your async framework, for example: - anyio - pytest-asyncio - pytest-tornasync - pytest-trio - pytest-twisted warnings.warn(PytestUnhandledCoroutineWarning(msg.format(nodeid))) tests/test_authentication.py::test_no_external_auth_check /home/runner/.cache/pypoetry/virtualenvs/libapi-QfqNi3gs-py3.9/lib/python3.9/site-packages/pymongo/topology.py:498: RuntimeWarning: coroutine 'test_no_external_auth_check' was never awaited ... ``` After investigation, I found that the issue arose after including pytest-memray by PR: - https://github.com/huggingface/dataset-viewer/pull/2863 - The issue is caused when passing `--memray` to `pytest`: https://github.com/huggingface/dataset-viewer/blob/04f89d8a76e73874c9198c940ad2e61fe71f6f76/tools/PythonTest.mk#L7 I have reported the issue to pytest-memray team: - https://github.com/bloomberg/pytest-memray/discussions/101#discussioncomment-9738673 - https://github.com/bloomberg/pytest-memray/issues/119
Async tests using anyio are skipped after including pytest-memray: Async tests using anyio are currently skipped. See: https://github.com/huggingface/dataset-viewer/actions/runs/9464138411/job/26070809625 ``` tests/test_authentication.py ssssssssssssssssssssssssssssssssssssssssssssss =============================== warnings summary =============================== /home/runner/.cache/pypoetry/virtualenvs/libapi-QfqNi3gs-py3.9/lib/python3.9/site-packages/_pytest/python.py:151: PytestUnhandledCoroutineWarning: async def functions are not natively supported and have been skipped. You need to install a suitable plugin for your async framework, for example: - anyio - pytest-asyncio - pytest-tornasync - pytest-trio - pytest-twisted warnings.warn(PytestUnhandledCoroutineWarning(msg.format(nodeid))) tests/test_authentication.py::test_no_external_auth_check /home/runner/.cache/pypoetry/virtualenvs/libapi-QfqNi3gs-py3.9/lib/python3.9/site-packages/pymongo/topology.py:498: RuntimeWarning: coroutine 'test_no_external_auth_check' was never awaited ... ``` After investigation, I found that the issue arose after including pytest-memray by PR: - https://github.com/huggingface/dataset-viewer/pull/2863 - The issue is caused when passing `--memray` to `pytest`: https://github.com/huggingface/dataset-viewer/blob/04f89d8a76e73874c9198c940ad2e61fe71f6f76/tools/PythonTest.mk#L7 I have reported the issue to pytest-memray team: - https://github.com/bloomberg/pytest-memray/discussions/101#discussioncomment-9738673 - https://github.com/bloomberg/pytest-memray/issues/119
closed
2024-06-12T04:41:05Z
2024-06-12T11:25:19Z
2024-06-12T11:25:18Z
albertvillanova
2,346,612,106
2754 partial instead of error
fix #2754
2754 partial instead of error: fix #2754
closed
2024-06-11T14:40:02Z
2024-06-14T13:00:31Z
2024-06-13T13:57:19Z
severo
2,346,610,285
Standardize access to metrics and healthcheck
In some apps, the metrics and healthcheck are public: - https://datasets-server.huggingface.co/admin/metrics - https://datasets-server.huggingface.co/sse/metrics - https://datasets-server.huggingface.co/sse/healthcheck - https://datasets-server.huggingface.co/healthcheck - On others, it’s forbidden or not found: - https://datasets-server.huggingface.co/metrics - https://datasets-server.huggingface.co/filter/metrics As @severo suggests, it should be coherent among all the services. (Do we want the metrics to be public, or not?)
Standardize access to metrics and healthcheck: In some apps, the metrics and healthcheck are public: - https://datasets-server.huggingface.co/admin/metrics - https://datasets-server.huggingface.co/sse/metrics - https://datasets-server.huggingface.co/sse/healthcheck - https://datasets-server.huggingface.co/healthcheck - On others, it’s forbidden or not found: - https://datasets-server.huggingface.co/metrics - https://datasets-server.huggingface.co/filter/metrics As @severo suggests, it should be coherent among all the services. (Do we want the metrics to be public, or not?)
open
2024-06-11T14:39:10Z
2024-07-11T15:38:17Z
null
AndreaFrancis
2,346,536,771
detect more modalities
Currently, we only detect and report "audio", "image" and "text". Ideally, we would have: <img width="318" alt="Capture d’écran 2024-06-11 à 16 07 20" src="https://github.com/huggingface/dataset-viewer/assets/1676121/1a21dfff-5c78-45bd-8baf-de1f4b203b6a"> See https://github.com/huggingface/moon-landing/pull/9352#discussion_r1634909052 (internal) --- TODO: - [x] text - [x] image - [x] audio - [ ] video: detect extensions in the repo files: mp4, avi, mkv, etc. - [ ] 3d: detect extensions in the repo files: ply, glb, gltf, etc. - [ ] tabular <- I'm not sure what it means. Only numbers? - [ ] geospatial. Let's include both raster (e.g., satellite images) and vector (e.g. latitude/longitud points) even if they have few things in common and are generally in different datasets. Several strategies: extensions (geojson, geoparquet, geotiff, etc), column types (see geoarrow), column names? (longitude, latitude, geom), column contents (see wkt/wkb - but I don't think we want to look inside the data itself) - [ ] time-series: columns with type `Sequence(Value(float32))` which are not embeddings (filter against the column name: `emb`?) --- Currently, we only check the features given by the datasets library (`dataset-info` step). For the strategy based on looking at the file extensions, it's a bit imprecise (eg: if the YAML contains `data_files: data.csv` and the repo also contains a video, the dataset will be marked as video, while the video is ignored). We can bear with it for now, I think. Should we also detect repos that contain images or audio files but are not supported by the datasets library / the viewer?
detect more modalities: Currently, we only detect and report "audio", "image" and "text". Ideally, we would have: <img width="318" alt="Capture d’écran 2024-06-11 à 16 07 20" src="https://github.com/huggingface/dataset-viewer/assets/1676121/1a21dfff-5c78-45bd-8baf-de1f4b203b6a"> See https://github.com/huggingface/moon-landing/pull/9352#discussion_r1634909052 (internal) --- TODO: - [x] text - [x] image - [x] audio - [ ] video: detect extensions in the repo files: mp4, avi, mkv, etc. - [ ] 3d: detect extensions in the repo files: ply, glb, gltf, etc. - [ ] tabular <- I'm not sure what it means. Only numbers? - [ ] geospatial. Let's include both raster (e.g., satellite images) and vector (e.g. latitude/longitud points) even if they have few things in common and are generally in different datasets. Several strategies: extensions (geojson, geoparquet, geotiff, etc), column types (see geoarrow), column names? (longitude, latitude, geom), column contents (see wkt/wkb - but I don't think we want to look inside the data itself) - [ ] time-series: columns with type `Sequence(Value(float32))` which are not embeddings (filter against the column name: `emb`?) --- Currently, we only check the features given by the datasets library (`dataset-info` step). For the strategy based on looking at the file extensions, it's a bit imprecise (eg: if the YAML contains `data_files: data.csv` and the repo also contains a video, the dataset will be marked as video, while the video is ignored). We can bear with it for now, I think. Should we also detect repos that contain images or audio files but are not supported by the datasets library / the viewer?
closed
2024-06-11T14:07:26Z
2024-06-14T17:18:52Z
2024-06-14T17:18:51Z
severo
2,346,351,897
Use `HfFileSystem` in config-parquet-metadata step instead of `HttpFileSystem`
`config-parquet-metadata` step is failing again for [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) with errors like ``` "Could not read the parquet files: 504, message='Gateway Time-out', url=URL('https://huggingface.co/datasets/HuggingFaceFW/fineweb/resolve/refs%2Fconvert%2Fparquet/default/train-part1/4089.parquet')" ``` Maybe this would help (the same is used in `config-parquet-and-info` step which works). Previous fix was https://github.com/huggingface/dataset-viewer/pull/2884
Use `HfFileSystem` in config-parquet-metadata step instead of `HttpFileSystem`: `config-parquet-metadata` step is failing again for [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) with errors like ``` "Could not read the parquet files: 504, message='Gateway Time-out', url=URL('https://huggingface.co/datasets/HuggingFaceFW/fineweb/resolve/refs%2Fconvert%2Fparquet/default/train-part1/4089.parquet')" ``` Maybe this would help (the same is used in `config-parquet-and-info` step which works). Previous fix was https://github.com/huggingface/dataset-viewer/pull/2884
closed
2024-06-11T12:50:24Z
2024-06-13T16:09:14Z
2024-06-13T16:09:13Z
polinaeterna
2,345,865,728
Remove Prometheus context label
Remove Prometheus context label. Fix #2895.
Remove Prometheus context label: Remove Prometheus context label. Fix #2895.
closed
2024-06-11T09:18:13Z
2024-06-11T11:23:11Z
2024-06-11T10:44:09Z
albertvillanova
2,345,427,383
Too high label cardinality metrics in Prometheus
As discussed privately (CC: @McPatate), we may have too high label cardinality metrics in Prometheus. As stated in Prometheus docs: https://prometheus.io/docs/practices/naming/#labels > CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values. Currently, our `StepProfiler` uses 3 labels: `method`, `step` and `context`. Recently, we fixed an issue with object memory addresses as values in the `context` label: - #2893 Note that sometimes we pass the dataset name in the `context` label, for example in `DatasetBackfillPlan`: https://github.com/huggingface/dataset-viewer/blob/19c049499af1b37cf4d9dfa93b58657d66b75337/libs/libcommon/src/libcommon/orchestrator.py#L490 Other times we pass the dataset and the config names in the `context` label, for example in `ConfigState`: https://github.com/huggingface/dataset-viewer/blob/19c049499af1b37cf4d9dfa93b58657d66b75337/libs/libcommon/src/libcommon/state.py#L202 For the (`method`, `step`, `context`) labels, this creates (number_of_method_values * number_of_step_values * **number_of_datasets**) or (number_of_method_values * number_of_step_values * **number_of_dataset_configs**) unique label values! This might be a too large cardinality. One possible solution to reduce cardinality would be to suppress the `context` label. What do you think? Other proposals? CC: @huggingface/dataset-viewer
Too high label cardinality metrics in Prometheus: As discussed privately (CC: @McPatate), we may have too high label cardinality metrics in Prometheus. As stated in Prometheus docs: https://prometheus.io/docs/practices/naming/#labels > CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values. Currently, our `StepProfiler` uses 3 labels: `method`, `step` and `context`. Recently, we fixed an issue with object memory addresses as values in the `context` label: - #2893 Note that sometimes we pass the dataset name in the `context` label, for example in `DatasetBackfillPlan`: https://github.com/huggingface/dataset-viewer/blob/19c049499af1b37cf4d9dfa93b58657d66b75337/libs/libcommon/src/libcommon/orchestrator.py#L490 Other times we pass the dataset and the config names in the `context` label, for example in `ConfigState`: https://github.com/huggingface/dataset-viewer/blob/19c049499af1b37cf4d9dfa93b58657d66b75337/libs/libcommon/src/libcommon/state.py#L202 For the (`method`, `step`, `context`) labels, this creates (number_of_method_values * number_of_step_values * **number_of_datasets**) or (number_of_method_values * number_of_step_values * **number_of_dataset_configs**) unique label values! This might be a too large cardinality. One possible solution to reduce cardinality would be to suppress the `context` label. What do you think? Other proposals? CC: @huggingface/dataset-viewer
closed
2024-06-11T05:29:14Z
2024-06-11T10:44:10Z
2024-06-11T10:44:10Z
albertvillanova
2,343,312,605
feat(ci): add trufflehog secrets detection
### What does this PR do? Adding a GH action to scan for leaked secrets on each commit. ### Context `trufflehog` will scan the commit that triggered the CI for any token leak. `trufflehog` works with a large number of what they call "detectors", each of which will read the text from the commit to see if there is match for a token. For example, the hugging face detector will check for hf tokens and then query our `/api/whoami{-v2}` endpoint to check if the token is valid. If it detects a valid token, the CI will fail, informing you that you need to rotate the token given it leaked. ### References - https://github.com/trufflesecurity/trufflehog - https://github.com/marketplace/actions/trufflehog-oss
feat(ci): add trufflehog secrets detection: ### What does this PR do? Adding a GH action to scan for leaked secrets on each commit. ### Context `trufflehog` will scan the commit that triggered the CI for any token leak. `trufflehog` works with a large number of what they call "detectors", each of which will read the text from the commit to see if there is match for a token. For example, the hugging face detector will check for hf tokens and then query our `/api/whoami{-v2}` endpoint to check if the token is valid. If it detects a valid token, the CI will fail, informing you that you need to rotate the token given it leaked. ### References - https://github.com/trufflesecurity/trufflehog - https://github.com/marketplace/actions/trufflehog-oss
closed
2024-06-10T08:57:46Z
2024-06-10T09:18:37Z
2024-06-10T09:18:36Z
McPatate
2,343,269,192
Fix string representation of storage client
Fix string representation of storage client. The string representation of the storage client is used by the orchestrator `DeleteDatasetStorageTask` in both the task `id` attribute and as a `label` for prometheus_client Histogram: - https://github.com/huggingface/dataset-viewer/blob/1ea458cb22396f8af955bf5b1ebbb92d89f0a707/libs/libcommon/src/libcommon/orchestrator.py#L221 - https://github.com/huggingface/dataset-viewer/blob/1ea458cb22396f8af955bf5b1ebbb92d89f0a707/libs/libcommon/src/libcommon/orchestrator.py#L234 Since the inclusion of the URL signer to the storage client by PR: - #2298 the string representation of the storage client is something like: ``` StorageClient(protocol=s3, storage_root=hf-datasets-server-statics-test/assets, base_url=https://datasets-server-test.us.dev.moon.huggingface.tech/assets, overwrite=True), url_signer=<libcommon.cloudfront.CloudFront object at 0x55f27a209110>) ``` - See the url_signer: `url_signer=<libcommon.cloudfront.CloudFront object at 0x55f27a209110>` - This is a different value for each `CloudFront` instantiation This PR fixes the string representation of the URLSigner to be its class name, and therefore it fixes as well the string representation of the storage client to be something like: ``` StorageClient(protocol=s3, storage_root=hf-datasets-server-statics-test/assets, base_url=https://datasets-server-test.us.dev.moon.huggingface.tech/assets, overwrite=True), url_signer=CloudFront) ``` - See the url_signer is now: `url_signer=CloudFront`
Fix string representation of storage client: Fix string representation of storage client. The string representation of the storage client is used by the orchestrator `DeleteDatasetStorageTask` in both the task `id` attribute and as a `label` for prometheus_client Histogram: - https://github.com/huggingface/dataset-viewer/blob/1ea458cb22396f8af955bf5b1ebbb92d89f0a707/libs/libcommon/src/libcommon/orchestrator.py#L221 - https://github.com/huggingface/dataset-viewer/blob/1ea458cb22396f8af955bf5b1ebbb92d89f0a707/libs/libcommon/src/libcommon/orchestrator.py#L234 Since the inclusion of the URL signer to the storage client by PR: - #2298 the string representation of the storage client is something like: ``` StorageClient(protocol=s3, storage_root=hf-datasets-server-statics-test/assets, base_url=https://datasets-server-test.us.dev.moon.huggingface.tech/assets, overwrite=True), url_signer=<libcommon.cloudfront.CloudFront object at 0x55f27a209110>) ``` - See the url_signer: `url_signer=<libcommon.cloudfront.CloudFront object at 0x55f27a209110>` - This is a different value for each `CloudFront` instantiation This PR fixes the string representation of the URLSigner to be its class name, and therefore it fixes as well the string representation of the storage client to be something like: ``` StorageClient(protocol=s3, storage_root=hf-datasets-server-statics-test/assets, base_url=https://datasets-server-test.us.dev.moon.huggingface.tech/assets, overwrite=True), url_signer=CloudFront) ``` - See the url_signer is now: `url_signer=CloudFront`
closed
2024-06-10T08:40:59Z
2024-06-10T09:41:23Z
2024-06-10T09:41:22Z
albertvillanova
2,340,448,231
Store `started_at` or duration info in cached steps too
This would help to better monitor how much time a step take, especially depending on dataset size, so that we can see how certain changes influence processing speed and make more informed decisions about these changes and our size limits (see https://github.com/huggingface/dataset-viewer/issues/2878). Especially useful for duckdb and statistics step. I can work on this if you think it's useful (I think it is) and don't break any logic.
Store `started_at` or duration info in cached steps too: This would help to better monitor how much time a step take, especially depending on dataset size, so that we can see how certain changes influence processing speed and make more informed decisions about these changes and our size limits (see https://github.com/huggingface/dataset-viewer/issues/2878). Especially useful for duckdb and statistics step. I can work on this if you think it's useful (I think it is) and don't break any logic.
closed
2024-06-07T13:22:50Z
2024-07-09T13:06:36Z
2024-07-09T13:06:36Z
polinaeterna
2,339,758,443
Make StorageClient not warn when deleting a non-existing directory
Make StorageClient not warn when trying to delete a non-existing directory. I think we should only log a warning if the directory exists and could not be deleted.
Make StorageClient not warn when deleting a non-existing directory: Make StorageClient not warn when trying to delete a non-existing directory. I think we should only log a warning if the directory exists and could not be deleted.
closed
2024-06-07T07:12:26Z
2024-06-10T13:54:00Z
2024-06-10T13:53:59Z
albertvillanova
2,338,816,983
No mongo cache in DatasetRemovalPlan
don't keep the full query result in memory (as mongoengine does by default) this should reduce the frequency of memory spikes and could have an effect of the memory issues we're having
No mongo cache in DatasetRemovalPlan: don't keep the full query result in memory (as mongoengine does by default) this should reduce the frequency of memory spikes and could have an effect of the memory issues we're having
closed
2024-06-06T17:33:46Z
2024-06-07T11:17:46Z
2024-06-07T11:17:45Z
lhoestq
2,338,519,538
No auto backfill on most nfaa datasets
they correlate with memory spikes, I'm just hardcoding this to see the impact on memory and further investigate
No auto backfill on most nfaa datasets: they correlate with memory spikes, I'm just hardcoding this to see the impact on memory and further investigate
closed
2024-06-06T15:20:27Z
2024-06-06T15:59:12Z
2024-06-06T15:59:11Z
lhoestq
2,338,451,407
Fix get_shape in statistics when argument is bytes, not dict
Will fix duckdb-index step for [common-canvas/commoncatalog-cc-by](https://huggingface.co/datasets/common-canvas/commoncatalog-cc-by)
Fix get_shape in statistics when argument is bytes, not dict : Will fix duckdb-index step for [common-canvas/commoncatalog-cc-by](https://huggingface.co/datasets/common-canvas/commoncatalog-cc-by)
closed
2024-06-06T14:47:47Z
2024-06-06T16:58:07Z
2024-06-06T16:58:06Z
polinaeterna
2,337,295,236
Update ruff to 0.4.8
Update ruff to 0.4.8: https://github.com/astral-sh/ruff/releases/tag/v0.4.8 > Linter performance has been improved by around 10% on some microbenchmarks
Update ruff to 0.4.8: Update ruff to 0.4.8: https://github.com/astral-sh/ruff/releases/tag/v0.4.8 > Linter performance has been improved by around 10% on some microbenchmarks
closed
2024-06-06T04:41:04Z
2024-06-06T13:27:24Z
2024-06-06T10:16:06Z
albertvillanova
2,336,241,023
Update uvicorn (restart expired workers)
null
Update uvicorn (restart expired workers):
closed
2024-06-05T15:38:54Z
2024-06-06T10:27:10Z
2024-06-06T10:02:46Z
lhoestq
2,336,103,264
add missing deps to dev images
otherwise I can't build the images on mac m2 I didn't touch the prod images
add missing deps to dev images: otherwise I can't build the images on mac m2 I didn't touch the prod images
closed
2024-06-05T14:36:42Z
2024-06-06T13:28:48Z
2024-06-06T13:28:47Z
lhoestq
2,335,801,984
Add retry mechanism to get_parquet_file in parquet metadata step
...to see if it helps with [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) `config-parquet-metadata` issue. Currently the error says just `Server disconnected` which seems to be a `aiohttp.ServerDisconnectedError` error. If that works, a more fundamental solution would be to completely switch to `HfFyleSystem` instead of `HTTPFileSystem` and remove retries.
Add retry mechanism to get_parquet_file in parquet metadata step: ...to see if it helps with [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) `config-parquet-metadata` issue. Currently the error says just `Server disconnected` which seems to be a `aiohttp.ServerDisconnectedError` error. If that works, a more fundamental solution would be to completely switch to `HfFyleSystem` instead of `HTTPFileSystem` and remove retries.
closed
2024-06-05T12:41:23Z
2024-06-05T15:06:59Z
2024-06-05T13:25:30Z
polinaeterna
2,334,890,871
Update pytest to 8.2.2 and pytest-asyncio to 0.23.7
Update pytest to 8.2.2: https://github.com/pytest-dev/pytest/releases/tag/8.2.2 The update of pytest requires the update of pytest-asyncio as well. See: - https://github.com/pytest-dev/pytest-asyncio/pull/823 Otherwise, we get an AttributeError: https://github.com/huggingface/dataset-viewer/actions/runs/9378353611/job/25821435791 ``` AttributeError: 'FixtureDef' object has no attribute 'unittest' ``` ``` ==================================== ERRORS ==================================== ____________________ ERROR at setup of test_get_healthcheck ____________________ event_loop = <_UnixSelectorEventLoop running=False closed=False debug=False> request = <SubRequest 'app_test' for <Function test_get_healthcheck>> kwargs = {'app_config': AppConfig(api=ApiConfig(external_auth_url='http://localhost:8888/api/datasets/%s/auth-check', hf_auth_p...onfig(level=20), queue=QueueConfig(mongo_database='dataset_viewer_queue_test', mongo_url='mongodb://localhost:27017'))} @functools.wraps(fixture) def _asyncgen_fixture_wrapper( event_loop: asyncio.AbstractEventLoop, request: SubRequest, **kwargs: Any ): func = _perhaps_rebind_fixture_func( > fixture, request.instance, fixturedef.unittest ) E AttributeError: 'FixtureDef' object has no attribute 'unittest' /home/runner/.cache/pypoetry/virtualenvs/sse-api-Z9Qw0xoN-py3.9/lib/python3.9/site-packages/pytest_asyncio/plugin.py:276: AttributeError =========================== short test summary info ============================ ERROR tests/test_app.py::test_get_healthcheck - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_metrics - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_updates - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_initialization[?all=true-expected_events0] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_initialization[-expected_events1] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_initialization[?all=false-expected_events2] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_initialization_and_updates[?all=true-expected_events0] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_initialization_and_updates[?all=false-expected_events1] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_initialization_and_updates[-expected_events2] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ========================= 1 passed, 9 errors in 1.09s ========================== ```
Update pytest to 8.2.2 and pytest-asyncio to 0.23.7: Update pytest to 8.2.2: https://github.com/pytest-dev/pytest/releases/tag/8.2.2 The update of pytest requires the update of pytest-asyncio as well. See: - https://github.com/pytest-dev/pytest-asyncio/pull/823 Otherwise, we get an AttributeError: https://github.com/huggingface/dataset-viewer/actions/runs/9378353611/job/25821435791 ``` AttributeError: 'FixtureDef' object has no attribute 'unittest' ``` ``` ==================================== ERRORS ==================================== ____________________ ERROR at setup of test_get_healthcheck ____________________ event_loop = <_UnixSelectorEventLoop running=False closed=False debug=False> request = <SubRequest 'app_test' for <Function test_get_healthcheck>> kwargs = {'app_config': AppConfig(api=ApiConfig(external_auth_url='http://localhost:8888/api/datasets/%s/auth-check', hf_auth_p...onfig(level=20), queue=QueueConfig(mongo_database='dataset_viewer_queue_test', mongo_url='mongodb://localhost:27017'))} @functools.wraps(fixture) def _asyncgen_fixture_wrapper( event_loop: asyncio.AbstractEventLoop, request: SubRequest, **kwargs: Any ): func = _perhaps_rebind_fixture_func( > fixture, request.instance, fixturedef.unittest ) E AttributeError: 'FixtureDef' object has no attribute 'unittest' /home/runner/.cache/pypoetry/virtualenvs/sse-api-Z9Qw0xoN-py3.9/lib/python3.9/site-packages/pytest_asyncio/plugin.py:276: AttributeError =========================== short test summary info ============================ ERROR tests/test_app.py::test_get_healthcheck - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_metrics - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_updates - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_initialization[?all=true-expected_events0] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_initialization[-expected_events1] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_only_initialization[?all=false-expected_events2] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_initialization_and_updates[?all=true-expected_events0] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_initialization_and_updates[?all=false-expected_events1] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ERROR tests/test_app.py::test_hub_cache_initialization_and_updates[-expected_events2] - AttributeError: 'FixtureDef' object has no attribute 'unittest' ========================= 1 passed, 9 errors in 1.09s ========================== ```
closed
2024-06-05T04:48:39Z
2024-06-06T10:42:21Z
2024-06-06T10:42:20Z
albertvillanova
2,333,534,259
Apply recommendations from duckdb to improve speed
DuckDB has a dedicated page called "My Workload Is Slow" https://duckdb.org/docs/guides/performance/my_workload_is_slow and more generally all the https://duckdb.org/docs/guides/performance/overview section. It could be good to review if some recommendations apply to our usage of duckdb.
Apply recommendations from duckdb to improve speed: DuckDB has a dedicated page called "My Workload Is Slow" https://duckdb.org/docs/guides/performance/my_workload_is_slow and more generally all the https://duckdb.org/docs/guides/performance/overview section. It could be good to review if some recommendations apply to our usage of duckdb.
open
2024-06-04T13:24:33Z
2024-06-04T13:48:48Z
null
severo
2,333,396,475
Remove canonical datasets from docs
Remove canonical datasets from docs, now that we no longer have canonical datasets.
Remove canonical datasets from docs: Remove canonical datasets from docs, now that we no longer have canonical datasets.
closed
2024-06-04T12:22:43Z
2024-07-08T06:34:01Z
2024-07-08T06:34:01Z
albertvillanova
2,333,376,108
Allow mnist and fashion mnist + remove canonical dataset logic
null
Allow mnist and fashion mnist + remove canonical dataset logic:
closed
2024-06-04T12:13:35Z
2024-06-04T13:16:32Z
2024-06-04T13:16:32Z
lhoestq
2,331,996,540
Use pymongoarrow to get dataset results as dataframe
Fix for https://github.com/huggingface/dataset-viewer/issues/2868
Use pymongoarrow to get dataset results as dataframe: Fix for https://github.com/huggingface/dataset-viewer/issues/2868
closed
2024-06-03T20:29:48Z
2024-06-05T13:32:08Z
2024-06-05T13:32:07Z
AndreaFrancis
2,330,565,896
Remove or increase the 5GB limit?
The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits. Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination). Should we provide a way to increase or remove this limit?
Remove or increase the 5GB limit?: The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits. Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination). Should we provide a way to increase or remove this limit?
closed
2024-06-03T08:55:08Z
2024-07-22T11:32:49Z
2024-07-11T15:04:04Z
severo
2,330,348,030
Update ruff to 0.4.7
Update ruff to 0.4.7.
Update ruff to 0.4.7: Update ruff to 0.4.7.
closed
2024-06-03T07:08:23Z
2024-06-03T08:46:25Z
2024-06-03T08:46:24Z
albertvillanova
2,326,204,531
Feature Request: Freeze/Restart/Log Viewer Option for Users.
### Description I've noticed a couple of things that could make using the dataset-viewer better. Here are three simple suggestions based on what I've experienced: ### Suggestions 1. **Keeping Parquet Viewer Steady** - Issue: Every time I tweak something like the content in the README, the Parquet viewer resets and tries to rebuild it. - Solution Idea: Add a "freeze" button for users so it stays where it is. 2. **Easier Viewer Restart** - Issue: It's definitely annoying to have to restart the viewer manually all the time for the dataset-viewer maintainers. - Solution Idea: Create a "restart" button for users so they can quickly reset the viewer. 3. **Log** - Issue: Just having access to the log of dataset-viewer errors would be very helpful. But the current log report is too abstract! - Solution Idea: Improve the error log report for better understanding. Thanks for considering these suggestions!
Feature Request: Freeze/Restart/Log Viewer Option for Users.: ### Description I've noticed a couple of things that could make using the dataset-viewer better. Here are three simple suggestions based on what I've experienced: ### Suggestions 1. **Keeping Parquet Viewer Steady** - Issue: Every time I tweak something like the content in the README, the Parquet viewer resets and tries to rebuild it. - Solution Idea: Add a "freeze" button for users so it stays where it is. 2. **Easier Viewer Restart** - Issue: It's definitely annoying to have to restart the viewer manually all the time for the dataset-viewer maintainers. - Solution Idea: Create a "restart" button for users so they can quickly reset the viewer. 3. **Log** - Issue: Just having access to the log of dataset-viewer errors would be very helpful. But the current log report is too abstract! - Solution Idea: Improve the error log report for better understanding. Thanks for considering these suggestions!
closed
2024-05-30T17:27:23Z
2024-05-31T08:51:34Z
2024-05-31T08:51:33Z
kargaranamir
2,326,109,839
Create a new error code (retryable) for "Consistency check failed"
See https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData/viewer/mal_Mlym It currently gives this error: ``` Consistency check failed: file should be of size 13393376 but has size 11730856 ((…)a9b50f83/mal_Mlym/taxi1500/dataset.arrow). We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument. If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub. ``` discussion here: https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData/discussions/1 We should catch the exception and raise a new error, that should be a "retryable" error.
Create a new error code (retryable) for "Consistency check failed": See https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData/viewer/mal_Mlym It currently gives this error: ``` Consistency check failed: file should be of size 13393376 but has size 11730856 ((…)a9b50f83/mal_Mlym/taxi1500/dataset.arrow). We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument. If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub. ``` discussion here: https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData/discussions/1 We should catch the exception and raise a new error, that should be a "retryable" error.
closed
2024-05-30T16:42:22Z
2024-05-31T08:18:47Z
2024-05-31T08:18:47Z
severo
2,325,808,452
Re-add torch dependency
needed for webdataset with .pth files in them
Re-add torch dependency: needed for webdataset with .pth files in them
closed
2024-05-30T14:22:00Z
2024-05-30T16:15:46Z
2024-05-30T16:15:45Z
lhoestq
2,325,626,169
add "duration" field to audio cells
As it was done with width/height for image cells: https://github.com/huggingface/dataset-viewer/pull/600 It will help to show additional information in the dataset viewer, in particular: highlighting a row will select the appropriate bar in the durations histogram.
add "duration" field to audio cells: As it was done with width/height for image cells: https://github.com/huggingface/dataset-viewer/pull/600 It will help to show additional information in the dataset viewer, in particular: highlighting a row will select the appropriate bar in the durations histogram.
open
2024-05-30T13:00:16Z
2024-05-30T16:26:52Z
null
severo
2,325,300,313
BFF endpoint to replace multiple parallel requests
WIP
BFF endpoint to replace multiple parallel requests: WIP
closed
2024-05-30T10:22:45Z
2024-07-29T11:37:24Z
2024-07-07T15:04:11Z
severo