## Quantization Parameters

Weight compression was performed using `nncf.compress_weights` with the following parameters:

- mode: INT8

For more information on quantization, check the OpenVINO model optimization guide.
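For illustration only (a minimal sketch, not the exact script used to produce this checkpoint; file paths are placeholders), INT8 weight compression of an OpenVINO IR with NNCF generally looks like this:

```python
import nncf
import openvino as ov

core = ov.Core()
# Placeholder path to the original (uncompressed) OpenVINO IR
model = core.read_model("openvino_model.xml")

# Weight-only INT8 compression; NNCF's 8-bit mode is asymmetric INT8
compressed_model = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT8_ASYM)

# Placeholder output path for the compressed IR
ov.save_model(compressed_model, "openvino_model_int8.xml")
```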
## Compatibility
The provided OpenVINO™ IR model is compatible with:
- OpenVINO version 2024.1.0 and higher
- Optimum Intel 1.16.0 and higher
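To check that an installed environment meets these minimums, the package versions can be read with the standard library (a minimal sketch; the PyPI package names `openvino` and `optimum-intel` are assumed):

```python
from importlib.metadata import version

# Both packages must satisfy the minimum versions listed above
print("openvino:", version("openvino"))            # expect 2024.1.0 or newer
print("optimum-intel:", version("optimum-intel"))  # expect 1.16.0 or newer
```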
## Running Model Inference

- Install the packages required for using the Optimum Intel integration with the OpenVINO backend:

  ```bash
  pip install optimum[openvino]
  ```
- Run model inference:

  ```python
  from transformers import AutoTokenizer
  from optimum.intel.openvino import OVModelForCausalLM

  model_id = "El-chapoo/qwen2_1.5B_8_int8.ov"

  # Load the tokenizer and the INT8 OpenVINO IR model from the Hub
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = OVModelForCausalLM.from_pretrained(model_id)

  # Tokenize a prompt, generate up to 200 tokens total, and decode the result
  inputs = tokenizer("def print_hello_world():", return_tensors="pt")
  outputs = model.generate(**inputs, max_length=200)
  text = tokenizer.batch_decode(outputs)[0]
  print(text)
  ```
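As an optional variation (a sketch, not from the original card), the same `OVModelForCausalLM` instance also plugs into the standard transformers text-generation pipeline:

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForCausalLM

model_id = "El-chapoo/qwen2_1.5B_8_int8.ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# OVModelForCausalLM works as a drop-in model for transformers pipelines
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("def print_hello_world():", max_new_tokens=64)[0]["generated_text"])
```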
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.