
# Sundanese GPT-2 Base

Sundanese GPT-2 Base is a causal language model based on the OpenAI GPT-2 model. It was trained on four datasets: OSCAR's unshuffled_deduplicated_su subset, the Sundanese mC4 subset, the Sundanese CC100 subset, and Sundanese Wikipedia.

10% of the combined dataset was held out for evaluation. The model was trained from scratch and achieved an evaluation loss of 3.61 and an evaluation perplexity of 36.97.

This model was trained using Hugging Face's Flax framework. All necessary training scripts can be found in the Files and versions tab, along with the training metrics logged via TensorBoard.
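
The exact data preparation lives in those scripts. As a rough, hypothetical sketch, the four corpora and the 90/10 split described above could be assembled with the Hugging Face `datasets` library; the dataset IDs, configs, dump date, and seed below are assumptions, not the author's exact setup:

```python
from datasets import load_dataset, concatenate_datasets

# Assumed dataset IDs/configs matching the description above.
oscar = load_dataset("oscar", "unshuffled_deduplicated_su", split="train")
mc4 = load_dataset("mc4", "su", split="train")
cc100 = load_dataset("cc100", lang="su", split="train")
# Building the Sundanese Wikipedia dump requires Apache Beam; the dump date is arbitrary.
wiki = load_dataset(
    "wikipedia", language="su", date="20220301",
    split="train", beam_runner="DirectRunner"
)

# Keep only the raw text column so the four corpora can be concatenated.
corpus = concatenate_datasets(
    [ds.remove_columns([c for c in ds.column_names if c != "text"])
     for ds in (oscar, mc4, cc100, wiki)]
)

# Hold out 10% of the combined data for evaluation, mirroring the split above.
splits = corpus.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```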

## Model

| Model                 | #params | Arch. | Training/Validation data (text)       |
|-----------------------|---------|-------|---------------------------------------|
| `sundanese-gpt2-base` | 124M    | GPT-2 | OSCAR, mC4, CC100, Wikipedia (758 MB) |

## Evaluation Results

The model was trained for 50 epochs; the table below shows the final results once training ended.

| train loss | valid loss | valid PPL | total time |
|------------|------------|-----------|------------|
| 2.436      | 3.61       | 36.97     | 7:1:54     |
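
The reported validation perplexity is simply the exponential of the validation (cross-entropy) loss, which makes the two numbers easy to cross-check:

```python
import math

# Perplexity = exp(cross-entropy loss): exp(3.61) ≈ 36.97, matching the table above.
print(round(math.exp(3.61), 2))  # 36.97
```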

## How to Use

### As Causal Language Model

```python
from transformers import pipeline

pretrained_name = "w11wo/sundanese-gpt2-base"

# Build a text-generation pipeline backed by the pre-trained model and its tokenizer.
nlp = pipeline(
    "text-generation",
    model=pretrained_name,
    tokenizer=pretrained_name
)

# Generate a continuation of a Sundanese prompt ("My name is Budi, from Indonesia").
nlp("Nami abdi Budi, ti Indonésia")
```

### Feature Extraction in PyTorch

```python
from transformers import GPT2Model, GPT2TokenizerFast

pretrained_name = "w11wo/sundanese-gpt2-base"
model = GPT2Model.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)

# Tokenize a Sundanese prompt and run it through the bare GPT-2 model
# to obtain per-token hidden states.
prompt = "Nami abdi Budi, ti Indonésia"
encoded_input = tokenizer(prompt, return_tensors="pt")
output = model(**encoded_input)
```
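
The model returns per-token hidden states; if a single fixed-size sentence vector is needed, one simple option (purely illustrative, not something prescribed by this model card) is to mean-pool the last hidden state:

```python
import torch

# Run inference without tracking gradients.
with torch.no_grad():
    output = model(**encoded_input)

# output.last_hidden_state has shape (batch_size, sequence_length, hidden_size).
# Mean-pooling over the token dimension yields one vector per input sequence.
sentence_embedding = output.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768]) for the base model
```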

## Disclaimer

Do consider the biases that come from all four datasets, which may be carried over into this model's outputs.

## Author

Sundanese GPT-2 Base was trained and evaluated by Wilson Wongso.

## Citation Information

```bibtex
@article{rs-907893,
    author   = {Wongso, Wilson
                and Lucky, Henry
                and Suhartono, Derwin},
    journal  = {Journal of Big Data},
    year     = {2022},
    month    = {Feb},
    day      = {26},
    abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
    issn     = {2693-5015},
    doi      = {10.21203/rs.3.rs-907893/v1},
    url      = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
```