Nepali Text Dataset (Sampled Chunks)
This repository contains a sample of the Nepali text dataset: three cleaned 500MB chunks drawn from the larger nepberta dataset. It is well suited to experimenting with language modeling, NLP, and machine learning projects involving Nepali text.
Dataset Overview
The dataset consists of Nepali text sampled to provide a representative subset without requiring the full dataset size. Each chunk contains approximately 500MB of text, ample for model training or evaluation.
Structure
The dataset is structured as follows:
- Train split: Three chunks, each approximately 500MB in size.
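To confirm this layout, the split can be streamed and the size of each chunk reported. A minimal sketch, assuming the quoted chunk sizes refer to the UTF-8 encoded text:

```python
from datasets import load_dataset

# Stream the train split and report the size of each chunk in MB
stream = load_dataset("Aananda-giri/nepberta-sample", split="train", streaming=True)
for i, chunk in enumerate(stream, start=1):
    size_mb = len(chunk["text"].encode("utf-8")) / 1e6
    print(f"chunk {i}: {size_mb:.0f} MB")
```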
Dataset Details
- Language: Nepali
- Source: the original nepberta dataset
- Dataset Size: 1.5GB (3 chunks, each 500MB)
Usage
To load the dataset in your projects, you can use the Hugging Face datasets library as follows:
```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
sampled_dataset = load_dataset("Aananda-giri/nepberta-sample", split="train")

# Inspect the character length of the first chunk
print(len(sampled_dataset[0]["text"]))
```
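Note that each row of the train split is a single ~500MB chunk, so indexing sampled_dataset[0] materializes the entire first chunk in memory; the printed value is its length in characters.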
Save only one of the three chunks
```python
import os
from datasets import load_dataset

num_chunks_to_save = 1

# Stream the dataset from the Hugging Face Hub so chunks download lazily
sampled_dataset_stream = load_dataset("Aananda-giri/nepberta-sample", split="train", streaming=True)

target_dir = "nepberta_sample"
os.makedirs(target_dir, exist_ok=True)

# Create the iterator once; calling iter() inside the loop would restart
# the stream and return the first chunk every time
stream_iter = iter(sampled_dataset_stream)

# Save each chunk to a separate text file
for i in range(num_chunks_to_save):
    chunk = next(stream_iter)  # get the next chunk
    with open(os.path.join(target_dir, f"combined_{i+1}.txt"), "w", encoding="utf-8") as file:
        file.write(chunk["text"])
    print(f"Saved chunk {i+1} to combined_{i+1}.txt")
```
Example Code for Loading Chunks
This example demonstrates how to iterate through the chunks and process them as needed:
```python
for chunk in sampled_dataset:
    text_data = chunk["text"]
    # Process or analyze text_data as needed
```
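Because each example is an entire ~500MB chunk rather than a single document, processing usually starts by splitting the text into smaller pieces. A hypothetical helper, assuming the chunks are newline-delimited (iter_segments and max_chars are illustrative names, not part of the dataset):

```python
def iter_segments(dataset, max_chars=10_000):
    """Yield newline-delimited segments of at most roughly max_chars characters."""
    for chunk in dataset:
        buffer, size = [], 0
        for line in chunk["text"].splitlines():
            # Flush the buffer before it would exceed the size budget
            if size + len(line) > max_chars and buffer:
                yield "\n".join(buffer)
                buffer, size = [], 0
            buffer.append(line)
            size += len(line) + 1  # +1 for the newline separator

        if buffer:
            yield "\n".join(buffer)

# Example: inspect the start of the first segment
first_segment = next(iter_segments(sampled_dataset))
print(first_segment[:200])
```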
Intended Use
This dataset is suitable for:
- Experimenting with smaller models before training larger ones (see the tokenizer sketch after this list)
- Research and development in language processing tasks
- Experimenting with large language models on Nepali text data
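As a concrete starting point, a small subword tokenizer can be trained directly on the sampled chunks. A minimal sketch using the Hugging Face tokenizers library; the vocabulary size, special tokens, and output filename are illustrative assumptions, not part of this dataset:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Configure a small BPE tokenizer (vocab size chosen for illustration)
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=16_000, special_tokens=["[UNK]", "[PAD]"])

# train_from_iterator accepts any iterator of strings, so the raw chunk
# texts can be fed directly without writing intermediate files
tokenizer.train_from_iterator((chunk["text"] for chunk in sampled_dataset), trainer=trainer)
tokenizer.save("nepali-bpe-16k.json")
```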