
INT8 MiniLM-L12-H384-uncased-mrpc

Post-training dynamic quantization

ONNX

This is an INT8 ONNX model quantized with Intel® Neural Compressor.

The original FP32 model comes from the fine-tuned model Intel/MiniLM-L12-H384-uncased-mrpc.
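
For reference, a similar INT8 model could be produced by running Intel® Neural Compressor's post-training dynamic quantization on the exported FP32 ONNX model. The snippet below is only a minimal sketch, not the exact recipe used for this model; the file paths and the use of the INC 2.x API (PostTrainingQuantConfig, quantization.fit) are assumptions.

from neural_compressor import PostTrainingQuantConfig, quantization

# Dynamic quantization needs no calibration dataset: weights are quantized
# offline and activations are quantized on the fly at inference time.
config = PostTrainingQuantConfig(approach="dynamic")

# "minilm-mrpc-fp32.onnx" is a placeholder for the exported FP32 model
q_model = quantization.fit(model="minilm-mrpc-fp32.onnx", conf=config)
q_model.save("./minilm-mrpc-int8-dynamic")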

Test results

|                    | INT8   | FP32   |
|--------------------|--------|--------|
| Accuracy (eval-f1) | 0.9107 | 0.9097 |
| Model size (MB)    | 33     | 128    |

Load the ONNX model:

# Load the INT8 model through Optimum's ONNX Runtime integration
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained('Intel/MiniLM-L12-H384-uncased-mrpc-int8-dynamic')
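
For a quick sanity check, the quantized model can be run on a sentence pair (MRPC is a paraphrase-detection task). The example below is a minimal sketch: it assumes the tokenizer is hosted in the same repository and the two sentences are arbitrary placeholders.

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = 'Intel/MiniLM-L12-H384-uncased-mrpc-int8-dynamic'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

# Score a sentence pair; the model predicts whether the two sentences are paraphrases
inputs = tokenizer(
    "The company said quarterly profit rose 10 percent.",
    "Quarterly profit at the company increased by 10 percent, it said.",
    return_tensors="pt",
)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
# Under the usual MRPC convention, index 1 corresponds to "paraphrase";
# the exact label names come from model.config.id2label.
print(predicted_class, model.config.id2label[predicted_class])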