distilbert-base-uncased finetuned on MNLI

Model Details and Training Data

We used the pretrained distilbert-base-uncased model and fine-tuned it on the MultiNLI (MNLI) dataset.

The training hyperparameters were kept the same as in Devlin et al. (2019): learning rate = 2e-5, training epochs = 3, max_sequence_len = 128, and batch_size = 32.
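
A comparable fine-tuning run could look like the minimal sketch below, using the Hugging Face transformers and datasets libraries. This is not the original training script; the "multi_nli" dataset id, the output directory name, and the use of the Trainer API are assumptions made for illustration.

```python
# Sketch of a comparable fine-tuning run (not the authors' original script).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,  # entailment / neutral / contradiction
)

# MultiNLI from the Hugging Face hub ("multi_nli" is an assumed dataset id).
mnli = load_dataset("multi_nli")

def tokenize(batch):
    # max_length=128 matches the max_sequence_len reported above.
    return tokenizer(
        batch["premise"],
        batch["hypothesis"],
        truncation=True,
        padding="max_length",
        max_length=128,
    )

mnli = mnli.map(tokenize, batched=True)

# Hyperparameters from the card: learning rate 2e-5, 3 epochs, batch size 32.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-mnli",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=mnli["train"],
    eval_dataset=mnli["validation_matched"],
)
trainer.train()
```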

Evaluation Results

Evaluation results on the matched and mismatched test corpora are shown in the table below.

| Test Corpus | Accuracy |
|-------------|----------|
| Matched     | 0.8223   |
| Mismatched  | 0.8216   |
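
The fine-tuned model can be used for NLI inference with the transformers library. The sketch below is a minimal example; "path-or-hub-id-of-this-model" is a placeholder for this checkpoint's actual location, and the label mapping is read from the model's config.

```python
# Minimal NLI inference sketch for this fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder: replace with this model's hub id or local path.
checkpoint = "path-or-hub-id-of-this-model"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(
    premise, hypothesis, return_tensors="pt", truncation=True, max_length=128
)
with torch.no_grad():
    logits = model(**inputs).logits

# The label order (entailment / neutral / contradiction) follows the model's config.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```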