# Dataset Card for Custom Text Dataset

## Dataset Name

Custom Text Dataset for Text Classification (Palestinian Authority and International Criminal Court)
## Overview

This custom dataset contains text passages paired with labels that summarize the key information in each passage. It was created to classify and extract significant details from text about geopolitical events, specifically the Palestinian Authority's accession to the International Criminal Court (ICC). The dataset is intended for training models on summarization, text classification, and related natural language processing tasks.

- Text Domain: News, Geopolitics, International Relations
- Task Type: Text Classification, Summarization
- Language: English
## Composition

- Training Data:
  - Sentence: Text passages describing events.
  - Labels: Summaries or key information extracted from the text.
- Test Data:
  - A sample of articles and highlights taken from a larger dataset (e.g., the raw dataset's test data).
  - 100 sentences paired with corresponding highlights (summaries).
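For illustration, a single training record might look like the sketch below; the field names and contents are assumptions inferred from the description above, not actual rows from the dataset:

```python
# Hypothetical record; field names ("sentence", "labels") are assumed, not confirmed.
example = {
    "sentence": (
        "The Palestinian Authority officially became the 123rd member "
        "of the International Criminal Court on Wednesday."
    ),
    "labels": "The Palestinian Authority joins the ICC as its 123rd member.",
}
```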
## Collection Process

The text passages were manually selected from news articles, focusing on international legal and political events. Sentences related to the Palestinian Authority's accession to the ICC were curated, and each label is a short summary highlighting the key aspects of its passage.

- Source: News article text (e.g., CNN)
- Labeling: Summaries written by domain experts or curated manually to match the intent of the dataset.
## Preprocessing

Before using this dataset for training, the following preprocessing steps are suggested, as sketched in the example after this list:

- Tokenization: Tokenize the sentences into words or subword units (depending on the model).
- Cleaning: Remove unnecessary characters or artifacts, such as quotation marks, extra spaces, or newline characters.
- Normalization: Convert text to lowercase and standardize punctuation.
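A minimal sketch of these steps, assuming a text column named "sentence" and an interchangeable tokenizer checkpoint (both are illustrative choices, not fixed by the dataset):

```python
import re
from transformers import AutoTokenizer

def clean_and_normalize(text: str) -> str:
    """Drop quotation marks, collapse whitespace/newlines, and lowercase."""
    text = re.sub(r'["\u201c\u201d]', "", text)  # remove straight and curly quotes
    text = re.sub(r"\s+", " ", text).strip()     # collapse spaces and newlines
    return text.lower()

# Subword tokenization; the checkpoint is a placeholder, use the one that matches your model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(example):
    example["sentence"] = clean_and_normalize(example["sentence"])
    return tokenizer(example["sentence"], truncation=True)
```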
## How to Use
```python
# Example usage for training a text classification model
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

# Assuming the dataset is already loaded as Hugging Face Dataset objects
train_data = custom_train_data
test_data = custom_test_data

# Load a pretrained checkpoint; "bert-base-uncased" is a placeholder, and
# num_labels depends on how the labels are encoded for your task.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize the text column (here assumed to be called "sentence")
def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length")

train_data = train_data.map(tokenize, batched=True)
test_data = test_data.map(tokenize, batched=True)

# Fine-tuning a text classification model (example with HuggingFace's Trainer API)
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=test_data,
)

# Start training
trainer.train()
```
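If the data is hosted on the Hub, it could be loaded along these lines (the repository id below is a placeholder, not the actual dataset path):

```python
from datasets import load_dataset

# Placeholder repository id; substitute the real dataset path.
dataset = load_dataset("username/custom-text-dataset")
custom_train_data = dataset["train"]
custom_test_data = dataset["test"]
```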
## Evaluation

The model can be evaluated using standard text classification metrics, as in the sketch after this list:

- Accuracy: Compare predicted labels or classifications to the reference labels.
- F1-Score: The harmonic mean of precision and recall, especially useful for imbalanced datasets.
- BLEU/ROUGE: For summarization, compare the generated summaries to the reference labels.
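A minimal sketch using the Hugging Face `evaluate` library (one option among many; the toy predictions and references below are purely illustrative):

```python
import evaluate

# Classification metrics: predictions and references are label ids.
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
preds, refs = [1, 0, 1], [1, 0, 0]
print(accuracy.compute(predictions=preds, references=refs))
print(f1.compute(predictions=preds, references=refs, average="weighted"))

# Summarization metric: predictions and references are strings.
rouge = evaluate.load("rouge")
generated = ["the palestinian authority joins the icc"]
targets = ["the palestinian authority becomes a member of the icc"]
print(rouge.compute(predictions=generated, references=targets))
```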
## Limitations

- Small Sample Size: The dataset is relatively small and may not generalize well to other news topics or geopolitical events.
- Narrow Focus: The dataset is focused on a specific geopolitical event and may not cover other topics extensively.
- Subjectivity in Labels: Labels are summaries and may be subjective, depending on the labeler's interpretation of the event.
## Ethical Considerations

- Bias: The dataset may reflect inherent biases from the original news sources, especially on sensitive political topics.
- Data Sensitivity: Since this dataset deals with real-world geopolitical events, care should be taken when using it for tasks that may influence public opinion or decision-making.
- Privacy: The dataset does not contain personal data, so privacy concerns are minimal.

This dataset is suitable for text classification and summarization tasks related to news articles on international relations and law.