Dataset: HebArabNlpProject/HebNLI
Modalities: Text · Formats: json · Languages: Hebrew · Libraries: Datasets, pandas

Dataset Viewer issue: ConfigNamesError

#2 by NMinsker - opened

The dataset viewer is not working.

Error details:

Error code:   ConfigNamesError
Exception:    DataFilesNotFoundError
Message:      No (supported) data files found in HebArabNlpProject/HebNLI
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 72, in compute_config_names_response
                  config_names = get_dataset_config_names(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names
                  dataset_module = dataset_module_factory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1904, in dataset_module_factory
                  raise e1 from None
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1885, in dataset_module_factory
                  return HubDatasetModuleFactoryWithoutScript(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1270, in get_module
                  module_name, default_builder_kwargs = infer_module_for_data_files(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 597, in infer_module_for_data_files
                  raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
              datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in HebArabNlpProject/HebNLI

cc @albertvillanova @lhoestq @severo.

It seems that since the last update it is no longer possible to load the dataset in a Python session with:

load_dataset("HebArabNlpProject/HebNLI")

Israel National NLP Program org

@NMinsker Should be fixed now, please confirm.

Thank you Shaltiel, the dataset viewer issue seems to be resolved now; however, loading still fails in Python (I think only the test set is broken):
ds = load_dataset("HebArabNlpProject/HebNLI")

Traceback:
{
"name": "DatasetGenerationCastError",
"message": "An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'hebrew_label'})

This happened while the json dataset builder was generating data using

hf://datasets/HebArabNlpProject/HebNLI/HebNLI_test.jsonl (at revision 0588d6ff0b36ff0bf979c853410194798dfddb60)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)",
"stack": "---------------------------------------------------------------------------
CastError Traceback (most recent call last)
File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\builder.py:2013, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
2012 try:
-> 2013 writer.write_table(table)
2014 except CastError as cast_error:

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\arrow_writer.py:585, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
584 pa_table = pa_table.combine_chunks()
--> 585 pa_table = table_cast(pa_table, self._schema)
586 if self.embed_local_files:

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\table.py:2302, in table_cast(table, schema)
2301 if table.schema != schema:
-> 2302 return cast_table_to_schema(table, schema)
2303 elif table.schema.metadata != schema.metadata:

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\table.py:2256, in cast_table_to_schema(table, schema)
2255 if sorted(table.column_names) != sorted(features):
-> 2256 raise CastError(
2257 f"Couldn't cast
{_short_str(table.schema)}
to
{_short_str(features)}
because column names don't match",
2258 table_column_names=table.column_names,
2259 requested_column_names=list(features),
2260 )
2261 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]

CastError: Couldn't cast
original_annotator_labels: string
genre: string
original_label: string
pairID: string
promptID: int64
sentence1: string
translation1: string
sentence2: string
translation2: string
hebrew_label: string
to
{'original_annotator_labels': Value(dtype='string', id=None), 'genre': Value(dtype='string', id=None), 'original_label': Value(dtype='string', id=None), 'pairID': Value(dtype='string', id=None), 'promptID': Value(dtype='int64', id=None), 'sentence1': Value(dtype='string', id=None), 'translation1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'translation2': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

DatasetGenerationCastError Traceback (most recent call last)
Cell In[3], line 1
----> 1 ds = load_dataset("HebArabNlpProject/HebNLI")

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\load.py:2616, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2613 return builder_instance.as_streaming_dataset(split=split)
2615 # Download and prepare data
-> 2616 builder_instance.download_and_prepare(
2617 download_config=download_config,
2618 download_mode=download_mode,
2619 verification_mode=verification_mode,
2620 num_proc=num_proc,
2621 storage_options=storage_options,
2622 )
2624 # Build dataset for splits
2625 keep_in_memory = (
2626 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2627 )

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\builder.py:1029, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
1027 if num_proc is not None:
1028 prepare_split_kwargs["num_proc"] = num_proc
-> 1029 self._download_and_prepare(
1030 dl_manager=dl_manager,
1031 verification_mode=verification_mode,
1032 **prepare_split_kwargs,
1033 **download_and_prepare_kwargs,
1034 )
1035 # Sync info
1036 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\builder.py:1124, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1120 split_dict.add(split_generator.split_info)
1122 try:
1123 # Prepare split will record examples associated to the split
-> 1124 self._prepare_split(split_generator, **prepare_split_kwargs)
1125 except OSError as e:
1126 raise OSError(
1127 "Cannot find data file. "
1128 + (self.manual_download_instructions or "")
1129 + "
Original error:
"
1130 + str(e)
1131 ) from None

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\builder.py:1884, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1882 job_id = 0
1883 with pbar:
-> 1884 for job_id, done, content in self._prepare_split_single(
1885 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1886 ):
1887 if done:
1888 result = content

File c:\Users\Mintz\miniconda3\envs\NLP\lib\site-packages\datasets\builder.py:2015, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
2013 writer.write_table(table)
2014 except CastError as cast_error:
-> 2015 raise DatasetGenerationCastError.from_cast_error(
2016 cast_error=cast_error,
2017 builder_name=self.info.builder_name,
2018 gen_kwargs=gen_kwargs,
2019 token=self.token,
2020 )
2021 num_examples_progress_update += len(table)
2022 if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:

DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'hebrew_label'})

This happened while the json dataset builder was generating data using

hf://datasets/HebArabNlpProject/HebNLI/HebNLI_test.jsonl (at revision 0588d6ff0b36ff0bf979c853410194798dfddb60)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)"
}

All the data files must have the same columns, but at some point there are 1 new columns ({'hebrew_label'})

The built-in dataset viewer also fails due to this error

[Attached screenshot: Screenshot 2024-10-30 at 15.31.22.png]
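
Until the columns match, one possible workaround is to load the test file on its own, so its extra hebrew_label column does not have to be cast to the schema inferred from the other splits. This is only a sketch: the filename is taken from the error message above, and when a single file is passed, datasets names the resulting split "train".

from datasets import load_dataset

# Load only the test file from the repo; with a single file there is no
# cross-file schema mismatch, so the extra 'hebrew_label' column is kept.
test_ds = load_dataset(
    "HebArabNlpProject/HebNLI",
    data_files="HebNLI_test.jsonl",
    split="train",
)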

Hi! I opened https://huggingface.co/datasets/HebArabNlpProject/HebNLI/discussions/3 to fix the issue.

I simply set the missing column type explicitly in the README.md.
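
The fix above declares the feature types in the dataset card's YAML metadata. For anyone whose local datasets version still trips over the mismatch, a rough client-side analogue is to pass the schema explicitly when loading. This is only a sketch: the column names and types are copied from the CastError above, and whether a column missing from a file gets null-filled depends on the installed datasets version.

from datasets import Features, Value, load_dataset

# Explicit schema, including the 'hebrew_label' column that only the
# test file contains (names and types copied from the CastError message).
features = Features({
    "original_annotator_labels": Value("string"),
    "genre": Value("string"),
    "original_label": Value("string"),
    "pairID": Value("string"),
    "promptID": Value("int64"),
    "sentence1": Value("string"),
    "translation1": Value("string"),
    "sentence2": Value("string"),
    "translation2": Value("string"),
    "hebrew_label": Value("string"),
})

ds = load_dataset("HebArabNlpProject/HebNLI", features=features)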

Israel National NLP Program org

Thank you @lhoestq!
@Norod78 Can you confirm that it works now?

Awesome @lhoestq ^_^
@Shaltiel Yes, it works well now
