Columns:
- id: int64, values from 959M to 2.55B
- title: string, lengths 3 to 133
- body: string, lengths 1 to 65.5k, nullable
- description: string, lengths 5 to 65.6k (the title and body joined as "title: body")
- state: string, 2 distinct values
- created_at: string, length 20
- updated_at: string, length 20
- closed_at: string, length 20, nullable
- user: string, 174 distinct values
1,150,279,700
fix: 🐛 add random jitter to the duration the workers sleep
It helps prevent concurrent workers from picking a job at the exact same time
closed
2022-02-25T10:21:13Z
2022-02-25T10:21:21Z
2022-02-25T10:21:21Z
severo
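A minimal sketch of the jitter idea described in this entry, assuming a worker loop that sleeps between polls; the function name and the base duration are illustrative, not the project's actual code.

```
import random
import time

def sleep_with_jitter(base_seconds: float = 5.0, jitter_ratio: float = 0.5) -> None:
    # Sleep the base duration plus a random fraction of it, so concurrent
    # workers wake up at slightly different times and are less likely to
    # pick the same job at the exact same moment.
    duration = base_seconds * (1.0 + jitter_ratio * random.random())
    time.sleep(duration)
```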
1,150,261,339
fix: 🐛 fix incoherencies due to concurrency in the queue
Just log and ignore if queue jobs are in an unexpected state.
closed
2022-02-25T10:02:45Z
2022-02-25T10:03:58Z
2022-02-25T10:03:58Z
severo
1,150,229,037
fix: 🐛 fix concurrency between workers
null
closed
2022-02-25T09:28:17Z
2022-02-25T09:28:24Z
2022-02-25T09:28:24Z
severo
1,150,204,269
feat: 🎸 add MAX_JOBS_PER_DATASET to improve queue availability
Some datasets (https://huggingface.co/datasets/lvwerra/github-code) have more than 100 splits. To avoid congesting the queue when such a dataset is refreshed, the number of concurrent jobs for the same dataset is now limited (to 2 jobs by default, using the MAX_JOBS_PER_DATASET environment variable).
closed
2022-02-25T09:02:28Z
2022-02-25T09:02:37Z
2022-02-25T09:02:37Z
severo
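For illustration, a hedged sketch of how a MAX_JOBS_PER_DATASET environment variable (defaulting to 2, as described above) could cap the number of concurrent jobs per dataset; the helper name and the counting structure are hypothetical, not the project's actual code.

```
import os

MAX_JOBS_PER_DATASET = int(os.environ.get("MAX_JOBS_PER_DATASET", 2))

def can_start_job(dataset_name: str, started_jobs_per_dataset: dict) -> bool:
    # started_jobs_per_dataset maps a dataset name to the number of its jobs
    # currently running; skip the dataset when the per-dataset limit is
    # reached, so one dataset with 100+ splits cannot congest the queue.
    return started_jobs_per_dataset.get(dataset_name, 0) < MAX_JOBS_PER_DATASET
```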
1,149,509,529
Double limit on size
null
closed
2022-02-24T16:42:23Z
2022-02-24T16:49:42Z
2022-02-24T16:49:42Z
severo
1,149,237,889
Add max size
null
closed
2022-02-24T12:37:43Z
2022-02-24T12:46:57Z
2022-02-24T12:46:56Z
severo
1,148,326,406
limit the size of /rows response
See https://github.com/huggingface/moon-landing/pull/2147, and https://github.com/huggingface/moon-landing/pull/2147#issuecomment-1048976581 in particular: > BTW, I'm thinking I should have implemented this on the backend, not here (or only as a safeguard). Indeed, we already artificially limit to 100 rows on the backend, so why not by the size too. > > Also: it does not make sense here for three reasons, for problematic datasets: > > - why fetch a very large JSON of 100 rows if we will slice it to 0 or 1 row? > - requesting 100 rows might generate a lot of load to the server to create the JSON > - if the JSON is really big, the fetch might timeout, which is silly, since it would have been sliced afterward.
closed
2022-02-23T16:42:38Z
2022-02-24T16:55:01Z
2022-02-24T12:46:57Z
severo
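A sketch, under assumptions, of truncating the rows by serialized size rather than only by row count, as argued in the quoted comment; the byte budget and the function name are illustrative.

```
import json

MAX_RESPONSE_BYTES = 1_000_000  # illustrative budget, not the actual configuration value

def truncate_rows_by_size(rows: list) -> list:
    # Keep at least one row, then add rows only while the JSON payload stays
    # under the byte budget, so a huge payload is never built only to be
    # sliced away (or to time out) on the client side.
    kept = []
    size = 2  # the surrounding "[]"
    for row in rows:
        encoded = len(json.dumps(row).encode("utf-8")) + 1  # +1 for the separator
        if kept and size + encoded > MAX_RESPONSE_BYTES:
            break
        kept.append(row)
        size += encoded
    return kept
```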
1,147,921,090
feat: 🎸 remove lvwerra/github-code from the blocklist
null
closed
2022-02-23T10:36:26Z
2022-02-23T10:36:31Z
2022-02-23T10:36:31Z
severo
1,146,943,006
refactor: 💡 Use datasets' DownloadMode enum
It solves the type error, and simplifies the code. `datasets` is upgraded
closed
2022-02-22T13:55:51Z
2022-02-22T13:55:58Z
2022-02-22T13:55:57Z
severo
1,146,731,391
[refactor] use `datasets` enum instead of strings for download_mode
Once https://github.com/huggingface/datasets/pull/3759 has been merged
closed
2022-02-22T10:31:57Z
2022-02-22T13:56:28Z
2022-02-22T13:56:22Z
severo
1,146,722,424
feat: 🎸 upgrade py7zr and update the safety checks
Removes two exceptions in "safety" check, since py7zr and numpy (via datasets) have been upgraded. https://github.com/huggingface/datasets-preview-backend/issues/132 is still not fixed, and it still depends on the upstream datasets library.
closed
2022-02-22T10:23:57Z
2022-02-22T14:12:24Z
2022-02-22T10:26:02Z
severo
1,146,708,278
feat: 🎸 remove direct dependency to pandas
Also: upgrade other dependencies to minor versions. Fixes https://github.com/huggingface/datasets-preview-backend/issues/143
closed
2022-02-22T10:11:39Z
2022-02-22T10:14:00Z
2022-02-22T10:13:59Z
severo
1,146,700,846
fix common_voice
See https://github.com/huggingface/datasets-preview-backend/blob/f54543709237215db38dc4064ce6710f6e974d79/tests/models/test_row.py#L65 and https://github.com/huggingface/datasets-preview-backend/blob/f54543709237215db38dc4064ce6710f6e974d79/tests/models/test_typed_row.py#L63
closed
2022-02-22T10:05:11Z
2022-05-11T15:39:43Z
2022-05-11T15:39:43Z
severo
1,146,697,577
refactor: 💡 use datasets' get_dataset_config_info() function
We also refactor to use the info as a DatasetInfo object, instead of serializing it to a dict (we historically did it to send it as a JSON in a dedicated endpoint). It impacts all the column related code. Also: disable two tests since common_voice has changed in some way that breaks the tests. Also: upgrade datasets to get the "get_dataset_config_info" function.
closed
2022-02-22T10:02:36Z
2022-02-22T10:02:45Z
2022-02-22T10:02:45Z
severo
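For reference, a minimal example of the `get_dataset_config_info()` usage this entry refers to, reading the `DatasetInfo` object directly instead of a serialized dict; the dataset and config names are only examples.

```
from datasets import get_dataset_config_info

# DatasetInfo is used as an object: features and splits are attributes,
# not keys of a serialized dict.
info = get_dataset_config_info("glue", config_name="cola")
print(info.features)
for split_name, split_info in info.splits.items():
    print(split_name, split_info.num_examples, split_info.num_bytes)
```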
1,145,636,136
test: 💍 remove unused import
null
closed
2022-02-21T10:55:00Z
2022-02-21T10:55:11Z
2022-02-21T10:55:10Z
severo
1,145,632,037
test: 💍 fix default value for headers
If None, create a new dict. Previously, the default dictionary was created at function definition time, then mutated on every call. Also: add traceback to exception raising. (All are suggestions by sourcery.ai in vscode.)
closed
2022-02-21T10:51:02Z
2022-02-21T10:51:08Z
2022-02-21T10:51:08Z
severo
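The pitfall fixed here is the classic Python mutable-default-argument issue; a generic before/after sketch (not the project's actual function):

```
from typing import Optional

def build_headers_buggy(headers: dict = {}) -> dict:
    # Anti-pattern: this dict is created once, at function definition time,
    # and is then shared and mutated across every call.
    headers.setdefault("Accept", "application/json")
    return headers

def build_headers(headers: Optional[dict] = None) -> dict:
    # Fix: if None, create a new dict on each call.
    if headers is None:
        headers = {}
    headers.setdefault("Accept", "application/json")
    return headers
```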
1,145,608,830
feat: 🎸 force all the rows to have the same set of columns
It reverts https://github.com/huggingface/datasets-preview-backend/pull/145, as discussed in https://github.com/huggingface/datasets/issues/3738 and https://github.com/huggingface/datasets-preview-backend/issues/144. This means that https://huggingface.co/datasets/huggingface/transformers-metadata is expected to fail in the dataset viewer, since the files are concatenated by the `datasets` library but have different sets of columns (or fields).
closed
2022-02-21T10:29:01Z
2022-02-21T10:29:09Z
2022-02-21T10:29:08Z
severo
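A minimal sketch, under assumptions, of the consistency check this entry describes: every row must expose exactly the same set of columns, otherwise the split is rejected.

```
def check_rows_have_same_columns(rows: list, column_names: list) -> None:
    # Reject heterogeneous rows instead of silently tolerating missing or
    # extra columns (the behavior that is reverted here).
    expected = set(column_names)
    for index, row in enumerate(rows):
        if set(row.keys()) != expected:
            raise ValueError(f"row {index} does not have the expected set of columns")
```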
1,143,680,235
test: 💍 fix test on head_qa
Due to https://github.com/huggingface/datasets/issues/3758, head_qa, config "en" is not available and makes the tests fail. The "es" is working, so, let's use it instead. Fixes #147
closed
2022-02-18T19:56:54Z
2022-02-18T20:26:41Z
2022-02-18T20:26:41Z
severo
1,143,099,831
feat: 🎸 Add an endpoint to get only the current queue jobs
i.e. started and waiting. The queue can contain a lot of past jobs we're not interested in looking at, and they take a lot of time and weight. The implementation is a bit ugly.
closed
2022-02-18T13:51:33Z
2022-02-18T13:54:44Z
2022-02-18T13:54:43Z
severo
1,139,024,954
Fix CI
https://github.com/huggingface/datasets-preview-backend/runs/5204605847?check_suite_focus=true
closed
2022-02-15T17:53:18Z
2022-02-18T20:26:40Z
2022-02-18T20:26:40Z
severo
1,139,013,635
fix: 🐛 preserve order of the columns when inferred from the rows
null
closed
2022-02-15T17:41:29Z
2022-02-15T17:41:53Z
2022-02-15T17:41:52Z
severo
1,138,997,777
Allow missing columns
fixes https://github.com/huggingface/datasets-preview-backend/issues/144
closed
2022-02-15T17:29:10Z
2022-02-15T17:31:05Z
2022-02-15T17:31:04Z
severo
1,138,893,218
weird error when fetching the rows
https://huggingface.co/datasets/huggingface/transformers-metadata ``` Message: could not get the config name for this dataset ``` while https://huggingface.co/datasets/huggingface/transformers-metadata already has the config and split names <img width="899" alt="Capture d’écran 2022-02-15 aΜ€ 17 03 22" src="https://user-images.githubusercontent.com/1676121/154100608-57ec042f-1ee1-4a52-88a3-abbf3cae4eed.png">
closed
2022-02-15T16:03:32Z
2022-02-21T10:29:25Z
2022-02-21T10:29:25Z
severo
1,138,880,331
upgrade datasets and remove pandas dependencies
Once https://github.com/huggingface/datasets/pull/3726 is merged, we'll be able to remove the dependency on pandas we introduced in https://github.com/huggingface/datasets-preview-backend/commit/0f83b5cbb415175034cc0a93fa7cbb4774e11d68
closed
2022-02-15T15:53:10Z
2022-02-22T10:14:00Z
2022-02-22T10:14:00Z
severo
1,138,854,087
feat: 🎸 upgrade datasets and pin pandas to <1.4
Related to https://github.com/huggingface/datasets/issues/3724. Fixes "lvwerra/red-wine".
closed
2022-02-15T15:34:19Z
2022-02-15T15:34:31Z
2022-02-15T15:34:31Z
severo
1,132,835,823
ci: 🎡 apt update before installing packages
null
closed
2022-02-11T16:37:55Z
2022-02-11T16:47:36Z
2022-02-11T16:47:36Z
severo
1,132,818,772
feat: 🎸 change the meaning of a "valid" dataset
Now: a dataset is considered valid if "at least one split" is valid. BREAKING CHANGE: 🧨 /valid and /is-valid consider as valid the datasets with at least one valid split (before: all the splits were required to be valid) Fixes https://github.com/huggingface/datasets-preview-backend/issues/139
closed
2022-02-11T16:23:41Z
2022-02-11T16:23:58Z
2022-02-11T16:23:57Z
severo
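The semantic change amounts to replacing an all() check with an any() check over the split statuses; a sketch with illustrative names:

```
def is_dataset_valid(split_statuses: list) -> bool:
    # Before: all(status == "valid" for status in split_statuses)
    # After: a single valid split is enough for the dataset to be valid.
    return any(status == "valid" for status in split_statuses)
```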
1,132,808,807
Consider a dataset as valid if at least one split is valid
see https://github.com/huggingface/moon-landing/issues/2086
closed
2022-02-11T16:15:12Z
2022-02-11T16:23:57Z
2022-02-11T16:23:57Z
severo
1,127,782,154
feat: 🎸 upgrade dependencies
in particular, black to stable version, and tensorflow to 2.8.0
closed
2022-02-08T21:35:20Z
2022-02-09T07:43:47Z
2022-02-09T07:43:46Z
severo
1,124,342,985
Add field to splits
null
closed
2022-02-04T15:43:50Z
2022-02-04T15:44:19Z
2022-02-04T15:44:18Z
severo
1,123,987,455
feat: 🎸 add an endpoint to get the cache status of HF datasets
null
closed
2022-02-04T09:27:34Z
2022-02-04T09:27:50Z
2022-02-04T09:27:49Z
severo
1,123,333,043
feat: 🎸 remove the SplitsNotFoundError exception
it is not informative, and hides the real problem
closed
2022-02-03T16:59:25Z
2022-02-03T17:10:58Z
2022-02-03T17:10:57Z
severo
1,122,941,775
feat: 🎸 add /is-valid endpoint
fixes https://github.com/huggingface/datasets-preview-backend/issues/129
closed
2022-02-03T11:00:22Z
2022-02-03T11:00:30Z
2022-02-03T11:00:29Z
severo
1,122,167,843
feat: 🎸 upgrade datasets to 1.18.3
It will fix an error when retrieving splits. See https://github.com/huggingface/datasets/pull/3657
closed
2022-02-02T16:56:09Z
2022-02-02T16:56:36Z
2022-02-02T16:56:36Z
severo
1,122,160,298
upgrade dependencies
February release of `safety` includes new warnings: ``` +==============================================================================+ | | | /$$$$$$ /$$ | | /$$__ $$ | $$ | | /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ | | /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ | | | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ | | \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ | | /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ | | |_______/ \_______/|__/ \_______/ \___/ \____ $$ | | /$$ | $$ | | | $$$$$$/ | | by pyup.io \______/ | | | +==============================================================================+ | REPORT | | checked 198 packages, using free DB (updated once a month) | +============================+===========+==========================+==========+ | package | installed | affected | ID | +============================+===========+==========================+==========+ | py7zr | 0.16.4 | <0.17.3 | 44652 | | pillow | 8.4.0 | <9.0.0 | 44487 | | pillow | 8.4.0 | <9.0.0 | 44485 | | pillow | 8.4.0 | <9.0.0 | 44524 | | pillow | 8.4.0 | <9.0.0 | 44525 | | pillow | 8.4.0 | <9.0.0 | 44486 | | numpy | 1.19.5 | <1.21.0 | 43453 | | numpy | 1.19.5 | <1.22.0 | 44716 | | numpy | 1.19.5 | <1.22.0 | 44717 | | numpy | 1.19.5 | >0 | 44715 | +==============================================================================+ ``` We should upgrade the packages if possible
closed
2022-02-02T16:48:52Z
2022-06-17T12:51:27Z
2022-06-17T12:51:27Z
severo
1,122,012,368
Add two fields to /splits
(edit) add fields `num_bytes` and `num_examples` to the `/splits` response. The data comes from DatasetInfo, or `null` (see https://github.com/huggingface/datasets-preview-backend#splits) --- <strike>Being able to retrieve a dataset's info via an HTTP API will be valuable for AutoNLP</strike>
closed
2022-02-02T14:43:34Z
2022-02-04T15:44:19Z
2022-02-04T15:44:19Z
SBrandeis
1,122,000,761
Upgrade datasets to 1.18.3
See https://github.com/huggingface/datasets/releases/tag/1.18.3 > Extend dataset builder for streaming in get_dataset_split_names by @mariosasko in https://github.com/huggingface/datasets/pull/3657
closed
2022-02-02T14:34:16Z
2022-02-03T22:47:37Z
2022-02-02T16:56:46Z
severo
1,119,921,394
Create a new endpoint to check the validity of a specific dataset
Instead of requesting /valid which returns a list of all the valid datasets `{valid: ['glue', ...], created_at: "..."}`, we just want to request `GET https://.../valid?dataset=glue` and get `{dataset: 'glue', valid: true}` for example. task: find the best way to name this new endpoint, and possibly change the name of the existing endpoint to be more RESTful.
closed
2022-01-31T20:54:44Z
2022-02-03T11:02:02Z
2022-02-03T11:00:29Z
severo
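The backend is built on Starlette, so a per-dataset validity endpoint of the kind discussed here could look roughly like the sketch below; the cache lookup is hypothetical.

```
from starlette.requests import Request
from starlette.responses import JSONResponse

def is_dataset_valid(dataset: str) -> bool:
    # Hypothetical lookup; the real check would query the cache database.
    return dataset in {"glue"}

async def is_valid_endpoint(request: Request) -> JSONResponse:
    # GET /is-valid?dataset=glue -> {"dataset": "glue", "valid": true}
    dataset = request.query_params.get("dataset")
    if dataset is None:
        return JSONResponse({"error": "missing 'dataset' parameter"}, status_code=400)
    return JSONResponse({"dataset": dataset, "valid": is_dataset_valid(dataset)})
```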
1,119,832,511
Don't expand viewer unless the user clicks `Go to dataset viewer`
Currently, if the user picks a subset or a split in the viewer, the user is automatically redirected to the viewer page. This should only happen if the user clicks the `Go to dataset viewer` button. Otherwise, the viewer should be updated "in-place".
closed
2022-01-31T19:15:14Z
2022-09-16T20:05:10Z
2022-09-16T20:05:09Z
mariosasko
1,117,757,123
feat: 🎸 reduce cache duration + instruction for nginx cache
null
closed
2022-01-28T19:23:31Z
2022-01-28T19:23:44Z
2022-01-28T19:23:43Z
severo
1,117,695,360
feat: 🎸 upgrade datasets to 1.18.2
null
closed
2022-01-28T18:08:07Z
2022-01-28T18:08:17Z
2022-01-28T18:08:17Z
severo
1,117,668,413
Upgrade datasets to 1.18.2
null
closed
2022-01-28T17:39:15Z
2022-01-28T19:38:14Z
2022-01-28T19:38:14Z
severo
1,117,667,306
Cache /valid?
<strike>It is called multiple times per second by moon landing, and it has a big impact on the loading time of the /datasets page (https://github.com/huggingface/moon-landing/issues/1871#issuecomment-1024414854).</strike> Currently, several queries are done to check all the valid datasets on every request
closed
2022-01-28T17:37:47Z
2022-01-31T20:31:41Z
2022-01-28T19:32:06Z
severo
1,115,116,719
feat: 🎸 upgrade datasets to 1.18.1
null
closed
2022-01-26T14:58:50Z
2022-01-26T14:58:57Z
2022-01-26T14:58:57Z
severo
1,115,095,763
upgrade datasets to 1.18.1
https://github.com/huggingface/datasets/releases/tag/1.18.1
closed
2022-01-26T14:41:59Z
2022-01-26T15:26:45Z
2022-01-26T15:26:45Z
severo
1,115,093,034
Check if the cache should be refreshed when receiving a webhook
For example, we generally don't want to refresh all the canonical datasets when they get a new tag (see https://github.com/huggingface/moon-landing/issues/1925)
closed
2022-01-26T14:39:25Z
2022-09-19T08:50:39Z
2022-09-19T08:50:39Z
severo
1,115,067,223
fix: 🐛 the blocked datasets must be blocked at cache invalidation
They were blocked on user requests instead of at cache invalidation, which is the purpose of blocking them (to avoid overwhelming the server with memory overflows, see https://github.com/huggingface/datasets-preview-backend/issues/91#issuecomment-952749773)
closed
2022-01-26T14:15:42Z
2022-01-26T14:15:53Z
2022-01-26T14:15:52Z
severo
1,115,025,989
Use last revision for canonical datasets
null
closed
2022-01-26T13:34:16Z
2022-01-26T13:42:04Z
2022-01-26T13:42:04Z
severo
1,110,685,139
upgrade to datasets 1.18
https://github.com/huggingface/datasets/releases/tag/1.18.0
closed
2022-01-21T16:48:38Z
2022-01-26T13:42:05Z
2022-01-26T13:42:05Z
severo
1,110,680,963
Add new datatypes
See https://github.com/huggingface/datasets/pull/3591 time, date, duration, and decimal
closed
2022-01-21T16:43:47Z
2022-06-17T12:50:49Z
2022-06-17T12:50:49Z
severo
1,108,433,052
feat: 🎸 change the logic: decouple the /splits and /rows
Now, we have jobs to get all the split names of a dataset, and other jobs to get the rows and columns of a given split. This allows filling the cache progressively, and supports datasets where only some of the splits are erroneous. BREAKING CHANGE: 🧨 the endpoints have changed, and the mongo database too
closed
2022-01-19T18:38:22Z
2022-01-20T15:11:03Z
2022-01-20T15:11:03Z
severo
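A rough sketch, with hypothetical job types and helpers, of the two-level job structure this entry describes: one job computes the split names of a dataset, then one job per split fills the rows and columns, so the cache fills progressively and a failing split does not block the others.

```
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SplitsJob:
    dataset: str

@dataclass
class RowsJob:
    dataset: str
    config: str
    split: str

def process_splits_job(
    job: SplitsJob,
    get_split_names: Callable[[str], List[Tuple[str, str]]],
    enqueue: Callable[[RowsJob], None],
) -> None:
    # First level: list the (config, split) pairs, then enqueue one rows job
    # per split; each rows job succeeds or fails independently.
    for config, split in get_split_names(job.dataset):
        enqueue(RowsJob(dataset=job.dataset, config=config, split=split))
```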
1,101,477,224
Allow partial datasets
For the big datasets, it might be hard to get the rows for all the configs (common voice has 155 configs, one per language, and for each of their 5 splits, we have to download 100 audio files -> we get banned quickly). An improvement would be to refresh the cache on a config basis, instead of a dataset basis. Possibly the atomicity could even be the split, instead of the config. Doing this, we might lose the coherence of the dataset, and we increase the complexity of the state, but it would give a solution for the big datasets, that we expect to have more and more (audio, images). It has implications on moonlanding too, because the errors would be per config, or per split, instead of per dataset.
closed
2022-01-13T10:05:55Z
2022-01-21T15:12:19Z
2022-01-21T15:12:19Z
severo
1,100,750,506
feat: 🎸 upgrade dependencies, and set python to fixed 3.9.6
null
closed
2022-01-12T20:15:48Z
2022-01-12T20:25:58Z
2022-01-12T20:25:57Z
severo
1,100,738,865
Retry
null
closed
2022-01-12T20:02:16Z
2022-01-12T20:02:23Z
2022-01-12T20:02:22Z
severo
1,100,067,380
fix: 🐛 limit fallback to datasets under 100MB, not 100GB
null
closed
2022-01-12T09:03:44Z
2022-01-12T09:03:51Z
2022-01-12T09:03:50Z
severo
1,099,485,663
feat: 🎸 envvar to fallback to normal mode if streaming fails
It falls back if the dataset size is under a limit
closed
2022-01-11T18:19:26Z
2022-01-11T18:19:33Z
2022-01-11T18:19:33Z
severo
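A sketch, assuming a size threshold read from an environment variable, of the fallback behavior described here: try streaming first, and only fall back to a full download for small datasets. The names and the 100MB default are illustrative, not the project's actual configuration.

```
import os
from itertools import islice

from datasets import load_dataset

FALLBACK_MAX_DATASET_SIZE = int(os.environ.get("FALLBACK_MAX_DATASET_SIZE", 100_000_000))

def get_first_rows(dataset: str, config: str, split: str, dataset_size: int, count: int = 100) -> list:
    # Try streaming mode first; if it fails and the dataset is small enough,
    # fall back to a normal (downloaded) load.
    try:
        iterable = load_dataset(dataset, name=config, split=split, streaming=True)
        return list(islice(iter(iterable), count))
    except Exception:
        if dataset_size > FALLBACK_MAX_DATASET_SIZE:
            raise
        loaded = load_dataset(dataset, name=config, split=split)
        return [loaded[i] for i in range(min(count, len(loaded)))]
```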
1,099,366,668
Simplify errors
null
closed
2022-01-11T16:18:55Z
2022-01-11T16:20:12Z
2022-01-11T16:20:11Z
severo
1,097,773,579
feat: 🎸 upgrade datasets
Fixes issue with splits for gated datasets. See https://github.com/huggingface/datasets/pull/3545.
closed
2022-01-10T11:16:53Z
2022-01-10T11:17:02Z
2022-01-10T11:17:01Z
severo
1,094,774,548
feat: 🎸 get data from gated datasets
null
closed
2022-01-05T21:54:09Z
2022-01-06T14:52:52Z
2022-01-06T14:52:52Z
severo
1,093,735,085
feat: 🎸 add a script to refresh the whole cache
null
closed
2022-01-04T20:38:30Z
2022-01-04T20:39:05Z
2022-01-04T20:39:04Z
severo
1,093,710,500
Columns are wrongly detected as audio
https://huggingface.co/datasets/indonli <img width="792" alt="Capture d’écran 2022-01-04 aΜ€ 21 01 22" src="https://user-images.githubusercontent.com/1676121/148117033-ccca3db5-6200-4356-85d3-085ec5c6fd84.png"> https://datasets-preview.huggingface.tech/rows?dataset=indonli&config=indonli&split=test_expert <img width="396" alt="Capture d’écran 2022-01-04 aΜ€ 21 03 08" src="https://user-images.githubusercontent.com/1676121/148116982-a3b5029c-4f65-488a-84d8-6522cbc25b4b.png"> Thanks @albertvillanova for reporting
closed
2022-01-04T20:03:22Z
2022-01-05T08:41:11Z
2022-01-04T20:30:50Z
severo
1,092,359,023
feat: 🎸 upgrade datasets
It should make "pib" streamable
closed
2022-01-03T09:39:38Z
2022-01-03T09:53:30Z
2022-01-03T09:39:45Z
severo
1,088,411,077
feat: 🎸 upgrade datasets
See https://github.com/huggingface/datasets/pull/3478 and https://github.com/huggingface/datasets/pull/3476
closed
2021-12-24T14:40:55Z
2022-01-03T09:53:26Z
2021-12-24T14:41:01Z
severo
1,087,631,177
fix: 🐛 serve the assets from nginx instead of starlette
Starlette cannot serve ranges for static files, see https://github.com/encode/starlette/issues/950. Also: fix the permissions of the assets directory. Also: add the CORS header (doc)
closed
2021-12-23T11:29:20Z
2022-01-03T09:53:19Z
2021-12-23T11:29:27Z
severo
1,087,029,231
feat: 🎸 support beans and cats_vs_dogs
See https://github.com/huggingface/datasets/pull/3472
closed
2021-12-22T17:17:25Z
2022-01-03T09:53:46Z
2021-12-22T17:17:36Z
severo
1,086,042,496
the order of the columns is not preserved in the cache
null
closed
2021-12-21T17:08:38Z
2021-12-21T17:52:53Z
2021-12-21T17:52:53Z
severo
1,085,963,498
feat: 🎸 add Image column
null
closed
2021-12-21T15:40:55Z
2021-12-21T15:48:49Z
2021-12-21T15:48:48Z
severo
1,085,962,743
Provide multiple sizes for the images
Maybe use https://cloudinary.com/documentation/resizing_and_cropping. At least provide a thumbnail and the original size, which will allow using https://developer.mozilla.org/en-US/docs/Web/HTML/Element/picture to separate the small images in the table from the original image, which is shown when clicking on a thumbnail.
closed
2021-12-21T15:40:08Z
2024-06-19T14:00:21Z
2024-06-19T14:00:21Z
severo
1,085,710,483
feat: 🎸 upgrade datasets to master (1.16.2.dev0)
Note: later today, 1.16.2 will be released, so a new upgrade will be required
closed
2021-12-21T11:04:49Z
2021-12-21T11:05:27Z
2021-12-21T11:05:27Z
severo
1,083,535,580
feat: 🎸 add audio column
<img width="1077" alt="Capture d’écran 2021-12-17 à 17 56 27" src="https://user-images.githubusercontent.com/1676121/146594420-1efc2a82-9aa6-4b24-851e-f36a58a03135.png">
closed
2021-12-17T18:53:19Z
2021-12-20T17:22:16Z
2021-12-20T17:22:15Z
severo
1,079,861,589
cache refresh does not work
``` 3|datasets | DEBUG: 2021-12-14 14:52:48,994 - worker - job assigned: 61b8afbff18404b0ff26d942 for dataset: Babelscape/wikineural 3|datasets | INFO: 2021-12-14 14:52:48,994 - worker - compute dataset 'Babelscape/wikineural' 3|datasets | DEBUG: 2021-12-14 14:52:50,391 - worker - job finished: 61b8afbff18404b0ff26d942 for dataset: Babelscape/wikineural 3|datasets | Traceback (most recent call last): 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 407, in save 3|datasets | object_id = self._save_create(doc, force_insert, write_concern) 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 472, in _save_create 3|datasets | object_id = wc_collection.insert_one(doc).inserted_id 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 705, in insert_one 3|datasets | self._insert(document, 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 620, in _insert 3|datasets | return self._insert_one( 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 609, in _insert_one 3|datasets | self.__database.client._retryable_write( 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1552, in _retryable_write 3|datasets | return self._retry_with_session(retryable, func, s, None) 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1438, in _retry_with_session 3|datasets | return self._retry_internal(retryable, func, session, bulk) 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1470, in _retry_internal 3|datasets | return func(session, sock_info, retryable) 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 607, in _insert_command 3|datasets | _check_write_command_response(result) 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/helpers.py", line 241, in _check_write_command_response 3|datasets | _raise_last_write_error(write_errors) 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/helpers.py", line 210, in _raise_last_write_error 3|datasets | raise DuplicateKeyError(error.get("errmsg"), 11000, error) 3|datasets | pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: datasets_preview_cache.datasets index: dataset_name_1 dup key: { dataset_name: "Babelscape/wikineural" }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'dataset_name': 1}, 'keyValue': {'dataset_name': 'Babelscape/wikineural'}, 'errmsg': 'E11000 duplicate key error collection: datasets_preview_cache.datasets index: dataset_name_1 dup key: { dataset_name: "Babelscape/wikineural" }'} 3|datasets | 3|datasets | During handling of the above exception, another 
exception occurred: 3|datasets | 3|datasets | Traceback (most recent call last): 3|datasets | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 92, in <module> 3|datasets | loop() 3|datasets | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 81, in loop 3|datasets | while not has_resources() or not process_next_job(): 3|datasets | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 43, in process_next_job 3|datasets | refresh_dataset(dataset_name=dataset_name) 3|datasets | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/cache.py", line 239, in refresh_dataset 3|datasets | upsert_dataset(dataset) 3|datasets | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/cache.py", line 186, in upsert_dataset 3|datasets | DbDataset(dataset_name=dataset_name, status="valid").save() 3|datasets | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 430, in save 3|datasets | raise NotUniqueError(message % err) 3|datasets | mongoengine.errors.NotUniqueError: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_preview_cache.datasets index: dataset_name_1 dup key: { dataset_name: "Babelscape/wikineural" }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'dataset_name': 1}, 'keyValue': {'dataset_name': 'Babelscape/wikineural'}, 'errmsg': 'E11000 duplicate key error collection: datasets_preview_cache.datasets index: dataset_name_1 dup key: { dataset_name: "Babelscape/wikineural" }'}) 3|datasets | make: *** [Makefile:38: worker] Error 1 ```
closed
2021-12-14T14:58:35Z
2021-12-14T16:27:06Z
2021-12-14T16:27:06Z
severo
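The traceback shows `DbDataset(dataset_name=..., status="valid").save()` inserting a new document on every refresh, which violates the unique index on `dataset_name`. A hedged sketch of an upsert-style fix with mongoengine; the model is reduced to the two fields visible in the log, and the fix shown is illustrative rather than the project's actual patch.

```
from mongoengine import Document, StringField, connect

connect("datasets_preview_cache")  # database name taken from the error message

class DbDataset(Document):
    dataset_name = StringField(required=True, unique=True)
    status = StringField(required=True)

def upsert_dataset(dataset_name: str, status: str = "valid") -> None:
    # Update the existing document when there is one, insert it otherwise,
    # instead of always inserting (which raises NotUniqueError on refresh).
    DbDataset.objects(dataset_name=dataset_name).update_one(upsert=True, set__status=status)
```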
1,079,814,673
Add auth to the technical endpoints
endpoints like /cache-reports, or /queue-flush, are freely available <strike>and contain information about the private datasets</strike> (no, they don't). They should be behind some authentication
closed
2021-12-14T14:18:08Z
2022-09-07T07:37:39Z
2022-09-07T07:37:38Z
severo
1,078,689,575
Multiple cache collections
null
closed
2021-12-13T15:48:28Z
2021-12-13T16:10:38Z
2021-12-13T16:10:37Z
severo
1,039,322,608
feat: 🎸 add /queue-dump endpoint
null
closed
2021-10-29T08:41:53Z
2021-10-29T08:49:43Z
2021-10-29T08:49:42Z
severo
1,037,144,253
Server error raised in webhook in the app
``` 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,092 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,118 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,118 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,125 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,202 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,205 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File 
"/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in 
__call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,350 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error 
0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 
0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,355 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File 
"/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error ```
Server error raised in webhook in the app: ``` 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,092 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,118 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,118 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,125 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,202 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,205 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 200 OK 0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File 
"/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in 
__call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,350 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error 
0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 
0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | INFO: 2021-10-27 08:44:44,355 - datasets_preview_backend.routes.webhook - /webhook: {'add': 'datasets/yuvalkirstain/asset'} 0|datasets-preview-backend | ERROR: Exception in ASGI application 0|datasets-preview-backend | Traceback (most recent call last): 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 369, in run_asgi 0|datasets-preview-backend | result = await app(self.scope, self.receive, self.send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__ 0|datasets-preview-backend | return await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ 0|datasets-preview-backend | await self.middleware_stack(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, _send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 23, in __call__ 0|datasets-preview-backend | await responder(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 42, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, self.send_with_gzip) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ 0|datasets-preview-backend | raise exc 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ 0|datasets-preview-backend | await self.app(scope, receive, sender) 0|datasets-preview-backend | File 
"/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ 0|datasets-preview-backend | await route.handle(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle 0|datasets-preview-backend | await self.app(scope, receive, send) 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/starlette/routing.py", line 61, in app 0|datasets-preview-backend | response = await func(request) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 79, in webhook_endpoint 0|datasets-preview-backend | process_payload(payload) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 61, in process_payload 0|datasets-preview-backend | try_to_update(payload["add"]) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/routes/webhook.py", line 50, in try_to_update 0|datasets-preview-backend | add_job(dataset_name) 0|datasets-preview-backend | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 89, in add_job 0|datasets-preview-backend | Job.objects(dataset_name=dataset_name, finished_at=None).get() 0|datasets-preview-backend | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 278, in get 0|datasets-preview-backend | raise queryset._document.MultipleObjectsReturned( 0|datasets-preview-backend | datasets_preview_backend.io.queue.MultipleObjectsReturned: 2 or more items returned, instead of 1 0|datasets-preview-backend | INFO: 172.30.0.55:0 - "POST /webhook HTTP/1.1" 500 Internal Server Error ```
closed
2021-10-27T08:46:49Z
2022-06-17T12:48:38Z
2022-06-17T12:48:37Z
severo
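The traceback above shows `add_job` failing when `.get()` finds more than one unfinished job for the same dataset, which happens when several webhook calls race each other. Below is a minimal sketch, assuming a mongoengine connection is already open, of an `add_job` that tolerates that state instead of returning a 500; the `Job` fields are modeled on the traceback and are assumptions, not the project's actual schema or fix.

```python
from datetime import datetime

from mongoengine import DateTimeField, Document, StringField
from mongoengine.errors import DoesNotExist, MultipleObjectsReturned


class Job(Document):
    dataset_name = StringField(required=True)
    created_at = DateTimeField(required=True)
    started_at = DateTimeField()
    finished_at = DateTimeField()


def add_job(dataset_name: str) -> None:
    try:
        # .get() raises if there are zero, or more than one, unfinished jobs
        Job.objects(dataset_name=dataset_name, finished_at=None).get()
        # an unfinished job already exists for this dataset: nothing to do
    except DoesNotExist:
        Job(dataset_name=dataset_name, created_at=datetime.utcnow()).save()
    except MultipleObjectsReturned:
        # concurrent webhook calls already created duplicates: ignore instead of failing
        pass
```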
1,037,081,672
Stalled jobs in the queue
https://datasets-preview.huggingface.tech/queue is returning: ``` {"waiting":48,"started":6,"done":2211,"created_at":"2021-10-27T07:40:33Z"} ``` But there are only 3 workers in production, so the max acceptable number of `started` jobs should be 3.
Stalled jobs in the queue: https://datasets-preview.huggingface.tech/queue is returning: ``` {"waiting":48,"started":6,"done":2211,"created_at":"2021-10-27T07:40:33Z"} ``` But there are only 3 workers in production, so the max acceptable number of `started` jobs should be 3.
closed
2021-10-27T07:41:39Z
2022-06-17T11:55:39Z
2022-06-17T11:55:39Z
severo
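With only 3 workers, more than 3 jobs in the `started` state means some jobs were started and never finished (for instance because a worker died mid-job). A hedged sketch of a maintenance helper that puts such jobs back in the waiting state; it reuses the assumed `Job` document sketched above and is not the project's actual remedy.

```python
from datetime import datetime, timedelta


def requeue_stalled_jobs(max_duration: timedelta = timedelta(hours=2)) -> int:
    # jobs started before this threshold and still unfinished are considered stalled
    threshold = datetime.utcnow() - max_duration
    stalled = Job.objects(started_at__lt=threshold, finished_at=None)
    # unsetting started_at makes them look like waiting jobs again
    return stalled.update(unset__started_at=True)
```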
1,037,079,843
JobNotFound kills the worker
``` 4|datasets-preview-backend-worker | DEBUG: 2021-10-27 07:34:30,558 - worker - dataset 'billsum' had error, cache updated 4|datasets-preview-backend-worker | Traceback (most recent call last): 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 266, in get 4|datasets-preview-backend-worker | result = next(queryset) 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 1572, in __next__ 4|datasets-preview-backend-worker | raw_doc = next(self._cursor) 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/cursor.py", line 1246, in next 4|datasets-preview-backend-worker | raise StopIteration 4|datasets-preview-backend-worker | StopIteration 4|datasets-preview-backend-worker | During handling of the above exception, another exception occurred: 4|datasets-preview-backend-worker | Traceback (most recent call last): 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 107, in finish_job 4|datasets-preview-backend-worker | job = Job.objects(id=job_id, started_at__exists=True, finished_at=None).get() 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 269, in get 4|datasets-preview-backend-worker | raise queryset._document.DoesNotExist(msg) 4|datasets-preview-backend-worker | datasets_preview_backend.io.queue.DoesNotExist: Job matching query does not exist. 4|datasets-preview-backend-worker | During handling of the above exception, another exception occurred: 4|datasets-preview-backend-worker | Traceback (most recent call last): 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 97, in <module> 4|datasets-preview-backend-worker | loop() 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 89, in loop 4|datasets-preview-backend-worker | process_next_job() 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 52, in process_next_job 4|datasets-preview-backend-worker | finish_job(job_id) 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 109, in finish_job 4|datasets-preview-backend-worker | raise JobNotFound("the job does not exist") 4|datasets-preview-backend-worker | datasets_preview_backend.io.queue.JobNotFound: the job does not exist ```
JobNotFound kills the worker: ``` 4|datasets-preview-backend-worker | DEBUG: 2021-10-27 07:34:30,558 - worker - dataset 'billsum' had error, cache updated 4|datasets-preview-backend-worker | Traceback (most recent call last): 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 266, in get 4|datasets-preview-backend-worker | result = next(queryset) 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 1572, in __next__ 4|datasets-preview-backend-worker | raw_doc = next(self._cursor) 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/cursor.py", line 1246, in next 4|datasets-preview-backend-worker | raise StopIteration 4|datasets-preview-backend-worker | StopIteration 4|datasets-preview-backend-worker | During handling of the above exception, another exception occurred: 4|datasets-preview-backend-worker | Traceback (most recent call last): 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 107, in finish_job 4|datasets-preview-backend-worker | job = Job.objects(id=job_id, started_at__exists=True, finished_at=None).get() 4|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 269, in get 4|datasets-preview-backend-worker | raise queryset._document.DoesNotExist(msg) 4|datasets-preview-backend-worker | datasets_preview_backend.io.queue.DoesNotExist: Job matching query does not exist. 4|datasets-preview-backend-worker | During handling of the above exception, another exception occurred: 4|datasets-preview-backend-worker | Traceback (most recent call last): 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 97, in <module> 4|datasets-preview-backend-worker | loop() 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 89, in loop 4|datasets-preview-backend-worker | process_next_job() 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 52, in process_next_job 4|datasets-preview-backend-worker | finish_job(job_id) 4|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/queue.py", line 109, in finish_job 4|datasets-preview-backend-worker | raise JobNotFound("the job does not exist") 4|datasets-preview-backend-worker | datasets_preview_backend.io.queue.JobNotFound: the job does not exist ```
closed
2021-10-27T07:39:49Z
2022-09-16T20:05:29Z
2022-09-16T20:05:29Z
severo
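The traceback shows the `JobNotFound` exception escaping `loop()`, which terminates the whole worker process. A minimal sketch, with placeholder definitions for names that live in the real worker module, of a loop that logs the error and keeps consuming the queue:

```python
import logging
import time

logger = logging.getLogger(__name__)


class JobNotFound(Exception):
    """Placeholder mirroring datasets_preview_backend.io.queue.JobNotFound."""


def process_next_job() -> None:
    """Placeholder for the real worker step shown in the traceback."""


def loop() -> None:
    while True:
        try:
            process_next_job()
        except JobNotFound:
            # the job vanished or was finished elsewhere: not worth killing the worker
            logger.warning("job not found while finishing, ignoring")
        except Exception:
            logger.exception("unexpected error while processing a job")
        time.sleep(1)
```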
1,037,056,605
Big documents are not stored in the mongo db
``` 3|datasets-preview-backend-worker | DEBUG: 2021-10-27 07:08:33,384 - worker - job finished: 6178fae6d6a08142f3961667 for dataset: the_pile_books3 3|datasets-preview-backend-worker | Traceback (most recent call last): 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 97, in <module> 3|datasets-preview-backend-worker | loop() 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 89, in loop 3|datasets-preview-backend-worker | process_next_job() 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 46, in process_next_job 3|datasets-preview-backend-worker | upsert_dataset_cache(dataset_name, "valid", dataset) 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/cache.py", line 66, in upsert_dataset_cache 3|datasets-preview-backend-worker | DatasetCache(dataset_name=dataset_name, status=status, content=content).save() 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 407, in save 3|datasets-preview-backend-worker | object_id = self._save_create(doc, force_insert, write_concern) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 468, in _save_create 3|datasets-preview-backend-worker | raw_object = wc_collection.find_one_and_replace(select_dict, doc) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 3167, in find_one_and_replace 3|datasets-preview-backend-worker | return self.__find_and_modify(filter, projection, 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 3017, in __find_and_modify 3|datasets-preview-backend-worker | return self.__database.client._retryable_write( 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1552, in _retryable_write 3|datasets-preview-backend-worker | return self._retry_with_session(retryable, func, s, None) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1438, in _retry_with_session 3|datasets-preview-backend-worker | return self._retry_internal(retryable, func, session, bulk) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1470, in _retry_internal 3|datasets-preview-backend-worker | return func(session, sock_info, retryable) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 3007, in _find_and_modify 3|datasets-preview-backend-worker | out = self._command(sock_info, cmd, 3|datasets-preview-backend-worker | File 
"/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 238, in _command 3|datasets-preview-backend-worker | return sock_info.command( 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/pool.py", line 726, in command 3|datasets-preview-backend-worker | self._raise_connection_failure(error) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/pool.py", line 710, in command 3|datasets-preview-backend-worker | return command(self, dbname, spec, secondary_ok, 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/network.py", line 136, in command 3|datasets-preview-backend-worker | message._raise_document_too_large( 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/message.py", line 1140, in _raise_document_too_large 3|datasets-preview-backend-worker | raise DocumentTooLarge("%r command document too large" % (operation,)) 3|datasets-preview-backend-worker | pymongo.errors.DocumentTooLarge: 'findAndModify' command document too large ```
Big documents are not stored in the mongo db: ``` 3|datasets-preview-backend-worker | DEBUG: 2021-10-27 07:08:33,384 - worker - job finished: 6178fae6d6a08142f3961667 for dataset: the_pile_books3 3|datasets-preview-backend-worker | Traceback (most recent call last): 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 97, in <module> 3|datasets-preview-backend-worker | loop() 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 89, in loop 3|datasets-preview-backend-worker | process_next_job() 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/worker.py", line 46, in process_next_job 3|datasets-preview-backend-worker | upsert_dataset_cache(dataset_name, "valid", dataset) 3|datasets-preview-backend-worker | File "/home/hf/datasets-preview-backend/src/datasets_preview_backend/io/cache.py", line 66, in upsert_dataset_cache 3|datasets-preview-backend-worker | DatasetCache(dataset_name=dataset_name, status=status, content=content).save() 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 407, in save 3|datasets-preview-backend-worker | object_id = self._save_create(doc, force_insert, write_concern) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/mongoengine/document.py", line 468, in _save_create 3|datasets-preview-backend-worker | raw_object = wc_collection.find_one_and_replace(select_dict, doc) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 3167, in find_one_and_replace 3|datasets-preview-backend-worker | return self.__find_and_modify(filter, projection, 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 3017, in __find_and_modify 3|datasets-preview-backend-worker | return self.__database.client._retryable_write( 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1552, in _retryable_write 3|datasets-preview-backend-worker | return self._retry_with_session(retryable, func, s, None) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1438, in _retry_with_session 3|datasets-preview-backend-worker | return self._retry_internal(retryable, func, session, bulk) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1470, in _retry_internal 3|datasets-preview-backend-worker | return func(session, sock_info, retryable) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 3007, in _find_and_modify 3|datasets-preview-backend-worker | out = self._command(sock_info, cmd, 3|datasets-preview-backend-worker | File 
"/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/collection.py", line 238, in _command 3|datasets-preview-backend-worker | return sock_info.command( 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/pool.py", line 726, in command 3|datasets-preview-backend-worker | self._raise_connection_failure(error) 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/pool.py", line 710, in command 3|datasets-preview-backend-worker | return command(self, dbname, spec, secondary_ok, 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/network.py", line 136, in command 3|datasets-preview-backend-worker | message._raise_document_too_large( 3|datasets-preview-backend-worker | File "/home/hf/.cache/pypoetry/virtualenvs/datasets-preview-backend-CTHb2hp_-py3.9/lib/python3.9/site-packages/pymongo/message.py", line 1140, in _raise_document_too_large 3|datasets-preview-backend-worker | raise DocumentTooLarge("%r command document too large" % (operation,)) 3|datasets-preview-backend-worker | pymongo.errors.DocumentTooLarge: 'findAndModify' command document too large ```
closed
2021-10-27T07:11:34Z
2021-12-13T16:21:56Z
2021-12-13T16:10:38Z
severo
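`DocumentTooLarge` is raised because MongoDB caps a single document at 16 MB of BSON, and the cached rows of a dataset like `the_pile_books3` exceed it. A hedged sketch of a size guard applied before saving; `upsert_dataset_cache` is taken from the traceback, the rest is illustrative.

```python
import bson  # provided by pymongo

MAX_BSON_SIZE = 16 * 1024 * 1024  # MongoDB's hard per-document limit
SAFETY_MARGIN = 100 * 1024        # leave room for the other document fields


def is_small_enough(content: dict) -> bool:
    # bson.encode serializes the dict exactly as pymongo would before sending it
    return len(bson.encode(content)) < MAX_BSON_SIZE - SAFETY_MARGIN


# usage sketch, around the call that fails in the traceback:
# if is_small_enough(content):
#     upsert_dataset_cache(dataset_name, "valid", content)
# else:
#     upsert_dataset_cache(dataset_name, "error", {"reason": "content too large to cache"})
```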
1,036,741,836
Queue
null
Queue:
closed
2021-10-26T21:07:41Z
2021-10-26T21:08:15Z
2021-10-26T21:08:15Z
severo
1,035,132,462
Use mongodb for the cache
null
Use mongodb for the cache:
closed
2021-10-25T13:20:35Z
2021-10-26T13:17:07Z
2021-10-26T13:17:07Z
severo
1,035,027,245
Support timestamp and date feature types
- [ ] add a date type - [ ] detect dates, i.e. `allenai/c4` -> `timestamp`, `curiosity_dialogs`
Support timestamp and date feature types: - [ ] add a date type - [ ] detect dates, i.e. `allenai/c4` -> `timestamp`, `curiosity_dialogs`
closed
2021-10-25T11:38:01Z
2023-10-16T15:40:44Z
2023-10-16T15:40:44Z
severo
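For the date-detection part of this checklist, one possible heuristic is to try parsing a sample of string cell values against a few common formats; this is only an illustration, not the heuristic that ended up in the project.

```python
from datetime import datetime
from typing import Any, List

ISO_LIKE_FORMATS = ["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"]


def looks_like_timestamp_column(values: List[Any], min_ratio: float = 0.8) -> bool:
    strings = [v for v in values if isinstance(v, str) and v]
    if not strings:
        return False
    parsed = 0
    for value in strings:
        for fmt in ISO_LIKE_FORMATS:
            try:
                datetime.strptime(value, fmt)
                parsed += 1
                break
            except ValueError:
                continue
    return parsed / len(strings) >= min_ratio


print(looks_like_timestamp_column(["2021-10-25", "2021-10-26"]))  # True
print(looks_like_timestamp_column(["hello", "world"]))            # False
```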
1,034,760,196
Technical endpoints timeout
I get Gateway timeout (from the nginx proxy behind the uvicorn app) when trying to access: - https://datasets-preview.huggingface.tech/valid - https://datasets-preview.huggingface.tech/cache They loop over the cache of all the expected datasets... It might be an issue with the cache (one entry is too large, some issue with concurrent access?)
Technical endpoints timeout: I get Gateway timeout (from the nginx proxy behind the uvicorn app) when trying to access: - https://datasets-preview.huggingface.tech/valid - https://datasets-preview.huggingface.tech/cache They loop over the cache of all the expected datasets... It might be an issue with the cache (one entry is too large, some issue with concurrent access?)
closed
2021-10-25T07:09:19Z
2021-10-27T08:56:52Z
2021-10-27T08:56:52Z
severo
1,034,721,078
Application stuck
The app does not respond anymore. The server itself is not stuck, and there is a lot of swap left. https://betteruptime.com/team/14149/incidents/170400279?m=389098 <img width="1845" alt="Capture d’écran 2021-10-25 à 08 11 46" src="https://user-images.githubusercontent.com/1676121/138643405-f71d5c6a-a6dd-45e6-add3-c6c8cd805540.png">
Application stuck: The app does not respond anymore. The server itself is not stuck, and there is a lot of swap left. https://betteruptime.com/team/14149/incidents/170400279?m=389098 <img width="1845" alt="Capture d’écran 2021-10-25 à 08 11 46" src="https://user-images.githubusercontent.com/1676121/138643405-f71d5c6a-a6dd-45e6-add3-c6c8cd805540.png">
closed
2021-10-25T06:13:43Z
2021-10-27T08:56:25Z
2021-10-27T08:56:25Z
severo
1,033,787,801
Detect basic types from rows
null
Detect basic types from rows:
closed
2021-10-22T17:05:12Z
2021-10-22T17:10:31Z
2021-10-22T17:10:31Z
severo
1,033,352,060
Improve the heuristic to detect basic types
If "features" is missing in the dataset info, all the columns are set to `json`. We might try to guess the types from the first (or more) rows instead.
Improve the heuristic to detect basic types: If "features" is missing in the dataset info, all the columns are set to `json`. We might try to guess the types from the first (or more) rows instead.
closed
2021-10-22T09:06:09Z
2021-10-22T17:17:40Z
2021-10-22T17:15:59Z
severo
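A possible shape for that heuristic: look at the values of the first rows and fall back to `json` only when nothing more specific fits. The type names below are illustrative, not the backend's actual column types.

```python
from typing import Any, List


def guess_column_type(values: List[Any]) -> str:
    non_null = [v for v in values if v is not None]
    if not non_null:
        return "json"
    if all(isinstance(v, bool) for v in non_null):
        return "bool"
    if all(isinstance(v, int) and not isinstance(v, bool) for v in non_null):
        return "int"
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in non_null):
        return "float"
    if all(isinstance(v, str) for v in non_null):
        return "string"
    return "json"  # fall back to the generic type


print(guess_column_type([1, 2, None]))      # int
print(guess_column_type(["a", "b"]))        # string
print(guess_column_type([{"x": 1}, "a"]))   # json
```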
1,033,317,781
Change the architecture
Currently: the API server, the cache/database, the assets, and the workers (that generate the data) are running on the same machine and share the same resources, which is the source of various issues: - a worker that requires a lot of resources can block the server (https://grafana.huggingface.co/d/rYdddlPWk/node-exporter-full?orgId=2&refresh=1m&from=now-24h&to=now&var-DS_PROMETHEUS=HF%20Prometheus&var-job=node_exporter_metrics&var-node=datasets-preview-backend&var-diskdevices=%5Ba-z%5D%2B%7Cnvme%5B0-9%5D%2Bn%5B0-9%5D%2B) - we have to kill the warming process if memory usage is too high to preserve the API resources, which requires manual supervision - also related to resources limits: we currently run the warming and refreshing tasks on one dataset at a time, while they are logically independent and could be launched on different workers in parallel, reducing the duration of these processes - also: I'm not sure if the current implementation of the database/cache (diskcache) really supports concurrent access (it does, but I'm not sure I used it adequately in the code, see http://www.grantjenks.com/docs/diskcache/tutorial.html / `cache.close()`) - having everything in the same application also means that everything is developed in Python (since the workers have to be in Python), while managing a queue and async processes could be easier in node.js, for example The architecture I imagine would have these components: - API server - queue - database - file storage - workers The API server would: - deliver the data (`/rows`, `/splits`, `/valid`, `/cache-reports`, `/cache`, `/healthcheck`): directly querying the database. If not in the database, return an error. - serve the assets from the storage - command the queue (`/webhook`, `/warm`, `/refresh`) -> add authentication? Send new tasks to the queue The queue would: - manage the tasks sent by the API server - launch workers for these tasks - add/update/delete the data in the database and the assets in the storage The database would: - store the datasets' data The storage would: - store the assets (image files for example) The workers would: - compute the data for one dataset
Change the architecture: Currently: the API server, the cache/database, the assets, and the workers (that generate the data) are running on the same machine and share the same resources, which is the source of various issues: - a worker that requires a lot of resources can block the server (https://grafana.huggingface.co/d/rYdddlPWk/node-exporter-full?orgId=2&refresh=1m&from=now-24h&to=now&var-DS_PROMETHEUS=HF%20Prometheus&var-job=node_exporter_metrics&var-node=datasets-preview-backend&var-diskdevices=%5Ba-z%5D%2B%7Cnvme%5B0-9%5D%2Bn%5B0-9%5D%2B) - we have to kill the warming process if memory usage is too high to preserve the API resources, which requires manual supervision - also related to resources limits: we currently run the warming and refreshing tasks on one dataset at a time, while they are logically independent and could be launched on different workers in parallel, reducing the duration of these processes - also: I'm not sure if the current implementation of the database/cache (diskcache) really supports concurrent access (it does, but I'm not sure I used it adequately in the code, see http://www.grantjenks.com/docs/diskcache/tutorial.html / `cache.close()`) - having everything in the same application also means that everything is developed in Python (since the workers have to be in Python), while managing a queue and async processes could be easier in node.js, for example The architecture I imagine would have these components: - API server - queue - database - file storage - workers The API server would: - deliver the data (`/rows`, `/splits`, `/valid`, `/cache-reports`, `/cache`, `/healthcheck`): directly querying the database. If not in the database, return an error. - serve the assets from the storage - command the queue (`/webhook`, `/warm`, `/refresh`) -> add authentication? Send new tasks to the queue The queue would: - manage the tasks sent by the API server - launch workers for these tasks - add/update/delete the data in the database and the assets in the storage The database would: - store the datasets' data The storage would: - store the assets (image files for example) The workers would: - compute the data for one dataset
closed
2021-10-22T08:27:59Z
2022-05-11T15:10:47Z
2022-05-11T15:10:46Z
severo
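To make the "API server only reads the database" part of this proposal concrete, here is a minimal sketch using Starlette (already used by the project, per the tracebacks above); `get_cache_entry` is a hypothetical read-only lookup and the route shape is an assumption, not the actual API.

```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route


def get_cache_entry(dataset_name):
    """Hypothetical read-only database lookup; returns None when not cached."""
    return None


async def rows_endpoint(request):
    dataset_name = request.query_params.get("dataset")
    entry = get_cache_entry(dataset_name)
    if entry is None:
        # never compute on the fly: the queue and its workers own that work
        return JSONResponse({"error": "dataset is not in the cache"}, status_code=404)
    return JSONResponse(entry)


app = Starlette(routes=[Route("/rows", rows_endpoint)])
```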
1,032,662,443
Remove local information from the error messages
It might be a security issue
Remove local information from the error messages: It might be a security issue
closed
2021-10-21T15:55:09Z
2022-09-16T20:07:36Z
2022-09-16T20:07:36Z
severo
1,032,638,446
Improve the heuristic to detect image columns
See the closed lists of column names, in particular: https://github.com/huggingface/datasets-preview-backend/blob/master/src/datasets_preview_backend/models/column/image_array2d.py#L16 for example
Improve the heuristic to detect image columns: See the closed lists of column names, in particular: https://github.com/huggingface/datasets-preview-backend/blob/master/src/datasets_preview_backend/models/column/image_array2d.py#L16 for example
closed
2021-10-21T15:29:43Z
2022-01-10T16:50:46Z
2022-01-10T16:50:46Z
severo
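An alternative to a closed list of column names is to inspect the cell values themselves; a hedged sketch for the 2D-array case, purely illustrative.

```python
from typing import Any, List


def looks_like_image_array2d(value: Any) -> bool:
    # a non-empty 2D list of ints in 0..255 is a plausible grayscale image
    return (
        isinstance(value, list)
        and len(value) > 0
        and all(
            isinstance(row, list)
            and all(isinstance(px, int) and 0 <= px <= 255 for px in row)
            for row in value
        )
    )


def is_image_column(values: List[Any]) -> bool:
    non_null = [v for v in values if v is not None]
    return bool(non_null) and all(looks_like_image_array2d(v) for v in non_null)


print(is_image_column([[[0, 255], [12, 34]]]))  # True
```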
1,032,453,807
Cannot get the config names for some datasets
They all generate this warning: ``` 1|datasets | INFO: 2021-10-21 12:41:10,003 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_1 - default - train 1|datasets | INFO: 2021-10-21 12:41:10,361 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_2 - default - train 1|datasets | INFO: 2021-10-21 12:41:10,713 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_3 - default - train 1|datasets | INFO: 2021-10-21 12:41:11,062 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_4 - default - train 1|datasets | INFO: 2021-10-21 12:41:11,408 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_5 - default - train 1|datasets | INFO: 2021-10-21 12:41:11,759 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_6 - default - train 1|datasets | INFO: 2021-10-21 12:41:12,133 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_7 - default - train 1|datasets | INFO: 2021-10-21 12:41:12,533 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_8 - default - train 1|datasets | INFO: 2021-10-21 12:41:12,889 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_9 - default - train 1|datasets | INFO: 2021-10-21 12:41:13,534 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset Check/regions - default - train 1|datasets | INFO: 2021-10-21 12:41:14,555 - datasets_preview_backend.models.row - could not read all the required rows (20 / 100) from dataset Jikiwa/demo4 - default - train 1|datasets | INFO: 2021-10-21 12:41:22,022 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset aapot/mc4_fi_cleaned - default - train 1|datasets | INFO: 2021-10-21 12:41:22,451 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_multi_headset_segmented_and_chunked - default - train 1|datasets | INFO: 2021-10-21 12:41:22,777 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset ami-wav2vec2/ami_multi_headset_segmented_and_chunked_dummy - default - train 1|datasets | INFO: 2021-10-21 12:41:23,217 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented - default - train 1|datasets | INFO: 2021-10-21 12:41:23,671 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented_and_chunked - default - train 1|datasets | INFO: 2021-10-21 12:41:24,446 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented_and_chunked_dummy - default - train 1|datasets | INFO: 2021-10-21 12:41:24,905 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented_dummy - default - train 1|datasets | INFO: 2021-10-21 12:41:26,263 - datasets_preview_backend.models.row - could not read all the required rows 
(5 / 100) from dataset dweb/squad_with_cola_scores - default - train 1|datasets | INFO: 2021-10-21 12:41:27,800 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset flax-community/dummy-oscar-als-32 - default - train 1|datasets | INFO: 2021-10-21 12:41:28,169 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset flax-community/german-common-voice-processed - default - train 1|datasets | INFO: 2021-10-21 12:41:29,734 - datasets_preview_backend.models.row - could not read all the required rows (6 / 100) from dataset huggingface/label-files - default - train 1|datasets | INFO: 2021-10-21 12:41:47,920 - datasets_preview_backend.models.row - could not read all the required rows (12 / 100) from dataset kiyoung2/aistage-mrc - default - train 1|datasets | INFO: 2021-10-21 12:42:10,844 - datasets_preview_backend.models.row - could not read all the required rows (3 / 100) from dataset nlpufg/brwac-pt - default - train 1|datasets | INFO: 2021-10-21 12:42:11,407 - datasets_preview_backend.models.row - could not read all the required rows (3 / 100) from dataset nlpufg/oscar-pt - default - train 1|datasets | INFO: 2021-10-21 12:42:11,818 - datasets_preview_backend.models.row - could not read all the required rows (4 / 100) from dataset patrickvonplaten/common_voice_processed_turkish - default - train 1|datasets | INFO: 2021-10-21 12:42:12,232 - datasets_preview_backend.models.row - could not read all the required rows (3 / 100) from dataset s-myk/test - default - train 1|datasets | INFO: 2021-10-21 12:42:17,065 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset vasudevgupta/natural-questions-validation - default - train ``` All of them are listed as cache_miss in https://datasets-preview.huggingface.tech/valid
Cannot get the config names for some datasets: They all generate this warning: ``` 1|datasets | INFO: 2021-10-21 12:41:10,003 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_1 - default - train 1|datasets | INFO: 2021-10-21 12:41:10,361 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_2 - default - train 1|datasets | INFO: 2021-10-21 12:41:10,713 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_3 - default - train 1|datasets | INFO: 2021-10-21 12:41:11,062 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_4 - default - train 1|datasets | INFO: 2021-10-21 12:41:11,408 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_5 - default - train 1|datasets | INFO: 2021-10-21 12:41:11,759 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_6 - default - train 1|datasets | INFO: 2021-10-21 12:41:12,133 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_7 - default - train 1|datasets | INFO: 2021-10-21 12:41:12,533 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_8 - default - train 1|datasets | INFO: 2021-10-21 12:41:12,889 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset Check/region_9 - default - train 1|datasets | INFO: 2021-10-21 12:41:13,534 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset Check/regions - default - train 1|datasets | INFO: 2021-10-21 12:41:14,555 - datasets_preview_backend.models.row - could not read all the required rows (20 / 100) from dataset Jikiwa/demo4 - default - train 1|datasets | INFO: 2021-10-21 12:41:22,022 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset aapot/mc4_fi_cleaned - default - train 1|datasets | INFO: 2021-10-21 12:41:22,451 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_multi_headset_segmented_and_chunked - default - train 1|datasets | INFO: 2021-10-21 12:41:22,777 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset ami-wav2vec2/ami_multi_headset_segmented_and_chunked_dummy - default - train 1|datasets | INFO: 2021-10-21 12:41:23,217 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented - default - train 1|datasets | INFO: 2021-10-21 12:41:23,671 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented_and_chunked - default - train 1|datasets | INFO: 2021-10-21 12:41:24,446 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented_and_chunked_dummy - default - train 1|datasets | INFO: 2021-10-21 12:41:24,905 - datasets_preview_backend.models.row - could not read all the required rows (7 / 100) from dataset ami-wav2vec2/ami_single_headset_segmented_dummy - default - train 1|datasets | INFO: 2021-10-21 12:41:26,263 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset dweb/squad_with_cola_scores - default - train 1|datasets | INFO: 2021-10-21 12:41:27,800 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset flax-community/dummy-oscar-als-32 - default - train 1|datasets | INFO: 2021-10-21 12:41:28,169 - datasets_preview_backend.models.row - could not read all the required rows (5 / 100) from dataset flax-community/german-common-voice-processed - default - train 1|datasets | INFO: 2021-10-21 12:41:29,734 - datasets_preview_backend.models.row - could not read all the required rows (6 / 100) from dataset huggingface/label-files - default - train 1|datasets | INFO: 2021-10-21 12:41:47,920 - datasets_preview_backend.models.row - could not read all the required rows (12 / 100) from dataset kiyoung2/aistage-mrc - default - train 1|datasets | INFO: 2021-10-21 12:42:10,844 - datasets_preview_backend.models.row - could not read all the required rows (3 / 100) from dataset nlpufg/brwac-pt - default - train 1|datasets | INFO: 2021-10-21 12:42:11,407 - datasets_preview_backend.models.row - could not read all the required rows (3 / 100) from dataset nlpufg/oscar-pt - default - train 1|datasets | INFO: 2021-10-21 12:42:11,818 - datasets_preview_backend.models.row - could not read all the required rows (4 / 100) from dataset patrickvonplaten/common_voice_processed_turkish - default - train 1|datasets | INFO: 2021-10-21 12:42:12,232 - datasets_preview_backend.models.row - could not read all the required rows (3 / 100) from dataset s-myk/test - default - train 1|datasets | INFO: 2021-10-21 12:42:17,065 - datasets_preview_backend.models.row - could not read all the required rows (2 / 100) from dataset vasudevgupta/natural-questions-validation - default - train ``` All of them are listed as cache_miss in https://datasets-preview.huggingface.tech/valid
closed
2021-10-21T12:47:41Z
2021-10-29T13:36:35Z
2021-10-29T13:36:34Z
severo
1,031,847,967
feat: 🎸 refactor the code + change the /rows response
The code is now split into more, smaller files. Also: the rows and features are now checked so that the response is coherent. The /rows response format has changed: it returns columns and rows (instead of features and rows), and the columns and rows are now ensured to be coherent. BREAKING CHANGE: 🧨 change format of /rows response
feat: 🎸 refactor the code + change the /rows response: The code is now split into more, smaller files. Also: the rows and features are now checked so that the response is coherent. The /rows response format has changed: it returns columns and rows (instead of features and rows), and the columns and rows are now ensured to be coherent. BREAKING CHANGE: 🧨 change format of /rows response
closed
2021-10-20T21:32:29Z
2021-10-20T21:32:41Z
2021-10-20T21:32:40Z
severo
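The record above describes the breaking /rows change: the response carries "columns" and "rows" instead of "features" and "rows", with the two kept coherent. As a purely illustrative sketch (the field names "name" and "type" and the type labels are assumptions, not the documented API), the coherence check could look like this in Python:

```python
# Hypothetical sketch of the reshaped /rows payload: every row is checked
# against the declared columns before being returned.
from typing import Any, Dict, List


def build_rows_response(
    columns: List[Dict[str, Any]], rows: List[Dict[str, Any]]
) -> Dict[str, Any]:
    column_names = {column["name"] for column in columns}
    for row in rows:
        # Reject rows whose cells do not match the declared columns.
        if set(row) != column_names:
            raise ValueError(f"row keys {sorted(row)} do not match the declared columns")
    return {"columns": columns, "rows": rows}


response = build_rows_response(
    columns=[{"name": "text", "type": "STRING"}, {"name": "label", "type": "CLASS_LABEL"}],
    rows=[{"text": "hello", "label": 0}],
)
print(response)
```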
1,030,378,192
Cifar10
null
Cifar10:
closed
2021-10-19T14:25:14Z
2021-10-19T14:25:27Z
2021-10-19T14:25:26Z
severo
1,030,295,465
Dependencies
null
Dependencies:
closed
2021-10-19T13:14:47Z
2021-10-19T14:07:30Z
2021-10-19T14:07:27Z
severo
1,030,293,223
Support gated datasets
See https://huggingface.co/datasets/oscar-corpus/OSCAR-2109, https://huggingface.co/datasets/mozilla-foundation/common_voice_1_0 Note: we restrict this issue to the *public* gated datasets.
Support gated datasets: See https://huggingface.co/datasets/oscar-corpus/OSCAR-2109, https://huggingface.co/datasets/mozilla-foundation/common_voice_1_0 Note: we restrict this issue to the *public* gated datasets.
closed
2021-10-19T13:12:55Z
2022-01-26T11:15:09Z
2022-01-26T11:13:45Z
severo
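Supporting public gated datasets means the backend has to authenticate when it fetches them. A minimal sketch, assuming an access token is available in the environment and using the `use_auth_token` parameter that the `datasets` library exposed at the time (newer releases call it `token`); the dataset name below is a placeholder, not one the backend necessarily processes this way:

```python
import os

from datasets import load_dataset

# Placeholder token and dataset name; the token must belong to an account
# that accepted the gated dataset's terms on the Hub.
token = os.environ["HF_TOKEN"]
rows = load_dataset("some-org/some-gated-dataset", split="train", use_auth_token=token)
print(rows[0])
```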
1,030,250,830
isolate the generation of every cache entry
The community datasets contain arbitrary code in their .py scripts: we should run them in isolation, in order to avoid security issues.
isolate the generation of every cache entry: The community datasets contain arbitrary code in their .py scripts: we should run them in isolation, in order to avoid security issues.
closed
2021-10-19T12:33:02Z
2022-06-08T08:42:14Z
2022-06-08T08:42:14Z
severo
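One low-tech way to get part of this isolation is to compute each cache entry in a separate subprocess with a hard timeout, so an untrusted or runaway script cannot block or crash the main worker. This is only a sketch: the "compute_cache_entry" module name is hypothetical, and real sandboxing (containers, dropped network access, resource limits) would be needed on top of it.

```python
import subprocess
import sys


def compute_cache_entry_isolated(dataset: str, timeout_seconds: int = 600) -> bool:
    # "compute_cache_entry" is a hypothetical module that computes and stores
    # one cache entry, then exits; it runs in its own Python process.
    try:
        result = subprocess.run(
            [sys.executable, "-m", "compute_cache_entry", dataset],
            timeout=timeout_seconds,  # abort runaway scripts
            capture_output=True,
            text=True,
        )
    except subprocess.TimeoutExpired:
        return False
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0
```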
1,030,098,956
feat: 🎸 add fields "tags" and "downloads" to /cache-reports
null
feat: 🎸 add fields "tags" and "downloads" to /cache-reports:
closed
2021-10-19T09:52:48Z
2021-10-19T09:55:15Z
2021-10-19T09:55:15Z
severo
1,029,287,110
Download and cache the images and other files?
Fields with an image URL are detected, and the "ImageUrl" type is passed in the features, to let the client (moonlanding) put the URL in `<img src="..." />`. This means that pages such as https://hf.co/datasets/severo/wit will download images directly from Wikipedia, for example. Hotlinking presents various [issues](https://en.wikipedia.org/wiki/Inline_linking#Controversial_uses_of_inline_linking). In particular, it's harder for us to know for sure if the image really exists or if it has an error. It might also generate a lot of traffic to other websites. Thus: we might want to download the images as assets in the backend, then serve them directly. Coding a good downloading bot is not easy ([User-Agent](https://meta.wikimedia.org/wiki/User-Agent_policy), avoid reaching rate-limits, detect the filename, detect the mime-type/extension, etc.). Related: https://github.com/huggingface/datasets/issues/3105
Download and cache the images and other files?: Fields with an image URL are detected, and the "ImageUrl" type is passed in the features, to let the client (moonlanding) put the URL in `<img src="..." />`. This means that pages such as https://hf.co/datasets/severo/wit will download images directly from Wikipedia, for example. Hotlinking presents various [issues](https://en.wikipedia.org/wiki/Inline_linking#Controversial_uses_of_inline_linking). In particular, it's harder for us to know for sure if the image really exists or if it has an error. It might also generate a lot of traffic to other websites. Thus: we might want to download the images as assets in the backend, then serve them directly. Coding a good downloading bot is not easy ([User-Agent](https://meta.wikimedia.org/wiki/User-Agent_policy), avoid reaching rate-limits, detect the filename, detect the mime-type/extension, etc.). Related: https://github.com/huggingface/datasets/issues/3105
closed
2021-10-18T15:37:59Z
2022-09-16T20:09:24Z
2022-09-16T20:09:24Z
severo
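A rough sketch of the downloading step described in this issue: fetch the image with a descriptive User-Agent, trust the reported content type rather than the URL for the extension, and write the file under a local assets directory so the backend can serve it itself. The directory, the User-Agent string, and the helper name are all illustrative assumptions, not the backend's actual implementation.

```python
import mimetypes
from pathlib import Path

import requests

ASSETS_DIR = Path("assets")  # assumed local directory served by the backend
USER_AGENT = "datasets-preview-backend (contact: <maintainer email>)"  # placeholder contact


def cache_image(url: str, name: str) -> Path:
    response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    response.raise_for_status()
    content_type = response.headers.get("content-type", "").split(";")[0].strip()
    if not content_type.startswith("image/"):
        raise ValueError(f"not an image: {content_type!r}")
    # Derive the extension from the MIME type, not from the URL.
    extension = mimetypes.guess_extension(content_type) or ".bin"
    ASSETS_DIR.mkdir(parents=True, exist_ok=True)
    path = ASSETS_DIR / f"{name}{extension}"
    path.write_bytes(response.content)
    return path
```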
1,029,063,986
Support audio datasets
null
Support audio datasets:
closed
2021-10-18T12:41:58Z
2021-12-21T10:16:52Z
2021-12-21T10:16:52Z
severo
1,026,363,289
Image endpoint
null
Image endpoint:
closed
2021-10-14T12:54:37Z
2021-10-14T12:58:17Z
2021-10-14T12:58:16Z
severo
1,025,148,095
feat: 🎸 enable food101 dataset
See https://github.com/huggingface/datasets/pull/3066 Also: - encode bytes data as UTF-8 encoded base64 in JSON responses - use Z standard for dates - ensure we use the same version of datasets for the datasets and the code
feat: 🎸 enable food101 dataset: See https://github.com/huggingface/datasets/pull/3066 Also: - encode bytes data as UTF-8 encoded base64 in JSON responses - use Z standard for dates - ensure we use the same version of datasets for the datasets and the code
closed
2021-10-13T11:50:06Z
2021-10-13T11:54:37Z
2021-10-13T11:54:37Z
severo
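The two serialization rules mentioned in the food101 change (bytes values returned as UTF-8 base64 strings, dates rendered in ISO 8601 with a trailing "Z") could be implemented roughly as below. This is a sketch that assumes a plain `json.dumps` with a custom `default` hook, which the record does not confirm.

```python
import json
from base64 import b64encode
from datetime import datetime, timezone


def default(value):
    if isinstance(value, bytes):
        # bytes -> base64 -> UTF-8 string, so the cell fits in a JSON response
        return b64encode(value).decode("utf-8")
    if isinstance(value, datetime):
        # ISO 8601 in UTC with a trailing "Z" instead of "+00:00"
        return value.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")
    raise TypeError(f"unserializable type: {type(value)}")


row = {
    "image": b"\x89PNG...",
    "created_at": datetime(2021, 10, 13, 11, 50, 6, tzinfo=timezone.utc),
}
print(json.dumps(row, default=default))
```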
1,023,986,750
fix: πŸ› fix serialization errors with datetime in rows
null
fix: πŸ› fix serialization errors with datetime in rows:
closed
2021-10-12T15:58:36Z
2021-10-12T16:06:22Z
2021-10-12T16:06:21Z
severo