---
language:
- en
tags:
- fluency
license: apache-2.0
---
|
|
|
This model is an ONNX-optimized variant of the original [parrot_fluency_model](https://huggingface.co/prithivida/parrot_fluency_model) model.
|
The model is optimized specifically for GPU execution; inference performance may differ when it is run on CPUs.
|
|
|
## How to use
|
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load tokenizer and ONNX model weights
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
model = ORTModelForSequenceClassification.from_pretrained('Deepchecks/parrot_fluency_model_onnx')

# prepare the pipeline and generate inferences
device = 0  # GPU index; use -1 to run on CPU
user_inputs = ["He is a good boy.", "They is a good boys."]  # example sentences to score for fluency

pip = pipeline(task='text-classification', model=model, tokenizer=tokenizer, device=device, accelerator="ort")
res = pip(user_inputs, batch_size=64, truncation="only_first")
print(res)
```
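`res` follows the standard `text-classification` pipeline output: a list with one `{'label': ..., 'score': ...}` dictionary per input sentence.

Because the model targets GPU execution, you may want to pin the ONNX Runtime session to CUDA explicitly. The snippet below is a minimal sketch, assuming the `provider` argument of `optimum.onnxruntime` models and an environment with `onnxruntime-gpu` installed; adjust it to your setup.

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

# load the ONNX weights with ONNX Runtime's CUDA execution provider
# (requires onnxruntime-gpu and a visible CUDA device)
tokenizer = AutoTokenizer.from_pretrained('Deepchecks/parrot_fluency_model_onnx')
model = ORTModelForSequenceClassification.from_pretrained(
    'Deepchecks/parrot_fluency_model_onnx',
    provider="CUDAExecutionProvider",
)
```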