---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
license: apache-2.0
tags:
- openvino
widget:
- example_title: Fibonacci (Python)
messages:
- role: system
content: You are a chatbot who can help code!
- role: user
content: Write me a function to calculate the first 10 digits of the fibonacci
sequence in Python and print it out to the CLI.
---
This model is a quantized version of [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) and was exported to the OpenVINO format using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space.
First, make sure you have optimum-intel with OpenVINO support installed:
```bash
pip install optimum[openvino]
```
You can then load the model as follows:
```python
from optimum.intel import OVModelForCausalLM

model_id = "NikolayL/TinyLlama-1.1B-Chat-v1.0-openvino-int4"
# Loads the quantized OpenVINO IR weights and compiles the model for inference
model = OVModelForCausalLM.from_pretrained(model_id)
```