# DeBERTa-v3-base-ONNX-quantized
This is a quantized version of [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli), a zero-shot text classification model that runs on ONNX Runtime. It is intended for CPU inference rather than GPU. To try the model interactively, check out my [Hugging Face Space](https://huggingface.co/spaces/arnabdhar/Zero-Shot-Classification-DeBERTa-Quantized).
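The example below assumes `optimum` is installed with its ONNX Runtime backend, for example via `pip install "optimum[onnxruntime]"`, which also pulls in `transformers`.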
```python
# import libraries
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load the tokenizer and the quantized ONNX model
MODEL_ID = "pitangent-ds/deberta-v3-nli-onnx-quantized"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = ORTModelForSequenceClassification.from_pretrained(MODEL_ID)

# build the zero-shot classification pipeline
classifier = pipeline(task="zero-shot-classification", tokenizer=tokenizer, model=model)

# run inference
text = "These shoes that I bought are really good."
candidate_labels = ["positive", "negative"]
output = classifier(text, candidate_labels)
```
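For a single input, the zero-shot classification pipeline returns a dictionary with the original `sequence`, the candidate `labels` sorted from most to least likely, and the corresponding `scores`. A minimal sketch of reading the result:

```python
# labels are sorted by score in descending order,
# so index 0 holds the top prediction
top_label = output["labels"][0]
top_score = output["scores"][0]
print(f"{top_label} ({top_score:.3f})")
```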