---
language: jv
tags:
- javanese-bert-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: Fast and Furious iku film sing [MASK].
---
## Javanese BERT Small IMDB
Javanese BERT Small IMDB is a masked language model based on the BERT model. It was trained on Javanese IMDB movie reviews.
The model started from the pretrained Javanese BERT Small model and was then fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 19.87 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial notebook written by Sylvain Gugger.
Hugging Face's `Trainer` class from the Transformers library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow.
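For readers who want to reproduce a similar setup, below is a minimal, illustrative sketch of masked language modeling fine-tuning with `Trainer`. The base model ID, the dataset's `text` column, the split names, and the hyperparameters (other than the 5 epochs reported below) are assumptions, not the exact configuration used for this model.

```python
from datasets import load_dataset
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed Hub ID of the base (pre-fine-tuning) Javanese BERT Small model.
base_name = "w11wo/javanese-bert-small"
tokenizer = BertTokenizerFast.from_pretrained(base_name)
model = BertForMaskedLM.from_pretrained(base_name)

dataset = load_dataset("w11wo/imdb-javanese")

def tokenize(batch):
    # Assumes the reviews live in a "text" column.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Randomly mask 15% of tokens on the fly -- the standard BERT MLM objective.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm_probability=0.15
)

training_args = TrainingArguments(
    output_dir="javanese-bert-small-imdb",
    num_train_epochs=5,  # matches the 5 epochs reported below
    per_device_train_batch_size=16,  # illustrative batch size
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],  # assumed name of the validation split
    data_collator=data_collator,
)
trainer.train()
```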
## Model
| Model                      | #params | Arch.      | Training/Validation data (text) |
|----------------------------|---------|------------|---------------------------------|
| `javanese-bert-small-imdb` | 110M    | BERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs; the following are the final results once training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.070      | 2.989      | 19.87      | 3:12:33    |
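The reported perplexity is simply the exponential of the validation cross-entropy loss, which is easy to verify:

```python
import math

valid_loss = 2.989
print(math.exp(valid_loss))  # ~19.87, the perplexity reported above
```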
## How to Use
### As Masked Language Model
```python
from transformers import pipeline

pretrained_name = "w11wo/javanese-bert-small-imdb"

# Load the fill-mask pipeline with the fine-tuned model and its tokenizer.
fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name
)

# "I eat satay at [MASK] with friends."
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
### Feature Extraction in PyTorch
```python
from transformers import BertModel, BertTokenizerFast

pretrained_name = "w11wo/javanese-bert-small-imdb"
model = BertModel.from_pretrained(pretrained_name)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_name)

# "Indonesia is a big country."
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
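`output.last_hidden_state` holds one vector per token, with shape `(batch_size, sequence_length, hidden_size)`. To reduce this to a single sentence embedding, one common approach (shown here as a sketch continuing from the snippet above, not a prescribed recipe for this model) is attention-mask-weighted mean pooling:

```python
# (batch_size, sequence_length, hidden_size) tensor of token embeddings.
token_embeddings = output.last_hidden_state

# Mean-pool over real (non-padding) tokens only.
mask = encoded_input["attention_mask"].unsqueeze(-1)  # (batch, seq_len, 1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, hidden_size])
```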
## Disclaimer
Do consider the biases that come from the IMDB review dataset, which may be carried over into the results of this model.
## Author
Javanese BERT Small IMDB was trained and evaluated by Wilson Wongso. All computation and development were done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
    title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
    author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
    booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
    pages={1--7},
    year={2021},
    organization={IEEE}
}
```