NikolayL committed on
Commit
282cd01
1 Parent(s): 12a0cde

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +38 -0
README.md ADDED
@@ -0,0 +1,38 @@
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
license: apache-2.0
tags:
- openvino
widget:
- example_title: Fibonacci (Python)
  messages:
  - role: system
    content: You are a chatbot who can help code!
  - role: user
    content: Write me a function to calculate the first 10 digits of the fibonacci
      sequence in Python and print it out to the CLI.
---

This model is a quantized version of [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) and was exported to the OpenVINO format using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space.

First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```
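
With optimum-intel installed, a comparable INT4 export can also be reproduced from the base model through the Python API. This is only a sketch and assumes a recent optimum-intel release; the exact quantization settings used for this repository are not recorded here, and the output directory name is arbitrary:

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Illustrative INT4 weight-only quantization of the base chat model.
base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = OVModelForCausalLM.from_pretrained(
    base_id,
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=4),
)

# Save the converted model (and its tokenizer) to a local directory.
output_dir = "TinyLlama-1.1B-Chat-v1.0-openvino-int4"
model.save_pretrained(output_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(output_dir)
```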

You can then load the model as follows:

```python
from optimum.intel import OVModelForCausalLM

model_id = "NikolayL/TinyLlama-1.1B-Chat-v1.0-openvino-int4"
model = OVModelForCausalLM.from_pretrained(model_id)
```
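
The loaded model exposes the usual `transformers` generation API, so chat-style inference with the tokenizer's chat template works as with the original checkpoint. A minimal sketch (the prompt mirrors the widget example above, the generation settings are illustrative, and the tokenizer is assumed to be bundled with this repository):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "NikolayL/TinyLlama-1.1B-Chat-v1.0-openvino-int4"
model = OVModelForCausalLM.from_pretrained(model_id)
# If the tokenizer files are not included in this repo, load them from the
# base TinyLlama/TinyLlama-1.1B-Chat-v1.0 checkpoint instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI."},
]

# Build the prompt with the chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```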