
BR_BERTo

A Portuguese (Brazil) RoBERTa model for masked-language inference.

Params

Trained on a corpus of 6,993,330 sentences.

  • Vocab size: 150,000
  • RobertaForMaskedLM size: 512
  • Num train epochs: 3
  • Time to train: ~10 days (on GCP with an Nvidia T4)
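With a 150,000-token vocabulary, the embedding matrix alone accounts for a large share of the model's parameters. A back-of-the-envelope sketch, assuming RoBERTa-base's 768-dimensional hidden size (not stated on this card):

```python
# Rough parameter count for the token-embedding matrix.
# Assumption: hidden size of 768 (RoBERTa-base default; not stated on the card).
vocab_size = 150_000
hidden_size = 768

embedding_params = vocab_size * hidden_size
print(f"{embedding_params / 1e6:.1f}M embedding parameters")  # 115.2M
```

Against the 174M total parameters reported below, this suggests most of the model's capacity sits in the embedding layer, a common consequence of a very large vocabulary.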

I followed the great tutorial from the HuggingFace team:

How to train a new language model from scratch using Transformers and Tokenizers

More info here:

BR_BERTo

Model size: 174M params (Safetensors; tensor types I64, F32)