Given an input text, the model's output format is: `{ENT_TYPE}:{span}; {ENT_TYPE}:{span}; ...`
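
The snippet below is a minimal sketch of how such an output string could be parsed back into (entity type, span) pairs. The entity labels (`PER`, `LOC`) and the exact `"; "` separator are assumptions based on the format above, not part of the released model.

```python
def parse_entities(output: str) -> list[tuple[str, str]]:
    """Parse '{ENT_TYPE}:{span}; {ENT_TYPE}:{span}' into (type, span) pairs."""
    entities = []
    for chunk in output.split(";"):
        chunk = chunk.strip()
        if not chunk:
            continue
        ent_type, _, span = chunk.partition(":")
        entities.append((ent_type.strip(), span.strip()))
    return entities

print(parse_entities("PER:Barack Obama; LOC:Hawaii"))
# [('PER', 'Barack Obama'), ('LOC', 'Hawaii')]
```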
For training speed, we use only the first 10,000 sentences (not documents) of the training set and 1,000 sentences from the validation set;
we save a checkpoint whenever the validation loss (NLL) reaches a new minimum.
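
A minimal sketch of this "save at minimum val_loss" rule is shown below. It assumes a standard PyTorch setup; the function names (`nll_fn`), loaders, and checkpoint path are illustrative and not taken from the released training script.

```python
import math
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_with_best_checkpoint(model: nn.Module,
                               train_loader: DataLoader,
                               val_loader: DataLoader,
                               nll_fn,                      # (model, batch) -> scalar NLL tensor
                               optimizer: torch.optim.Optimizer,
                               num_epochs: int = 10,
                               ckpt_path: str = "best_model.pt") -> float:
    best_val = math.inf
    for _ in range(num_epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            nll_fn(model, batch).backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(nll_fn(model, b).item() for b in val_loader) / len(val_loader)

        if val_loss < best_val:  # new minimum -> keep this checkpoint
            best_val = val_loss
            torch.save(model.state_dict(), ckpt_path)
    return best_val
```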
The model can be used as a pretrained backbone for fine-tuning on downstream NER tasks, as sketched below.
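
A hypothetical loading example: the repo id is a placeholder, and `AutoModelForSeq2SeqLM` assumes a seq2seq checkpoint; adjust both to match this model's actual Hub id and architecture.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "your-username/your-ner-model"  # placeholder, replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

text = "Barack Obama was born in Hawaii."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# e.g. something like: "PER:Barack Obama; LOC:Hawaii"
```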