luow-amd committed on
Commit ed0af9b
1 Parent(s): 959b11c

Update README.md

Files changed (1): README.md +75 -3
README.md CHANGED
@@ -1,3 +1,75 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
+ ---
+
+ # Mixtral-8x7B-Instruct-v0.1-FP8-KV
+ ## Introduction
+ This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
+ ## Quantization Strategy
+ - ***Quantized Layers***: All linear layers excluding "lm_head" and "*gate"
+ - ***Weight***: FP8 symmetric per-tensor
+ - ***Activation***: FP8 symmetric per-tensor
+ - ***KV Cache***: FP8 symmetric per-tensor
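+
+ As a rough illustration of what "FP8 symmetric per-tensor" means, each tensor gets a single scale derived from its absolute maximum and no zero-point. This is a hypothetical sketch, not Quark's actual implementation:
+
+ ```python
+ # Hypothetical sketch of FP8 (e4m3) symmetric per-tensor quantization;
+ # Quark's internal implementation may differ.
+ import torch
+
+ FP8_E4M3_MAX = 448.0  # largest magnitude representable in torch.float8_e4m3fn
+
+ def quantize_fp8_per_tensor(x: torch.Tensor):
+     # One scale for the whole tensor, symmetric around zero (no zero-point).
+     scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
+     x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
+     return x_fp8, scale
+
+ def dequantize_fp8_per_tensor(x_fp8: torch.Tensor, scale: torch.Tensor):
+     # Recover an approximation of the original tensor.
+     return x_fp8.to(torch.float32) * scale
+ ```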
+ ## Quick Start
+ 1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html)
+ 2. Run the quantization script in the example folder using the following command line:
+ ```sh
+ export MODEL_DIR=[local model checkpoint folder]  # or mistralai/Mixtral-8x7B-Instruct-v0.1
+ # Single GPU
+ python3 quantize_quark.py \
+ --model_dir $MODEL_DIR \
+ --output_dir Mixtral-8x7B-Instruct-v0.1-FP8-KV \
+ --quant_scheme w_fp8_a_fp8 \
+ --kv_cache_dtype fp8 \
+ --num_calib_data 128 \
+ --model_export quark_safetensors
+ # If the model is too large for a single GPU, use multiple GPUs instead.
+ python3 quantize_quark.py \
+ --model_dir $MODEL_DIR \
+ --output_dir Mixtral-8x7B-Instruct-v0.1-FP8-KV \
+ --quant_scheme w_fp8_a_fp8 \
+ --kv_cache_dtype fp8 \
+ --num_calib_data 128 \
+ --model_export quark_safetensors \
+ --multi_gpu
+ ```
+ ## Deployment
+ Quark has its own export format and allows FP8-quantized models to be deployed efficiently through the vLLM backend (vLLM-compatible).
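+
+ For example, a minimal serving sketch with vLLM's offline API. The checkpoint path is a placeholder and flag support varies by vLLM version, so verify against the vLLM documentation:
+
+ ```python
+ # Hypothetical usage sketch; exact arguments depend on your vLLM version.
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="Mixtral-8x7B-Instruct-v0.1-FP8-KV",  # path to the Quark-exported folder
+     kv_cache_dtype="fp8",                       # match the FP8 KV-cache quantization
+ )
+ outputs = llm.generate(["What is FP8 quantization?"], SamplingParams(max_tokens=64))
+ print(outputs[0].outputs[0].text)
+ ```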
+ ## Evaluation
+ Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py.
+ The quantization evaluation results are obtained in pseudo-quantization mode, which may differ slightly from the accuracy of actual quantized inference. These results are provided for reference only.
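+
+ For reference, a common sliding-window formulation of wikitext2 perplexity looks like the sketch below; this is an illustration only, and the exact procedure in quantize_quark.py may differ:
+
+ ```python
+ # Generic wikitext2 perplexity sketch; not necessarily Quark's exact algorithm.
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # or a local quantized checkpoint
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ # Concatenate the wikitext2 test split into one token stream.
+ text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
+ ids = tokenizer(text, return_tensors="pt").input_ids
+
+ seqlen, nlls = 2048, []
+ for i in range(0, ids.size(1) // seqlen * seqlen, seqlen):
+     batch = ids[:, i : i + seqlen].to(model.device)
+     with torch.no_grad():
+         # labels=batch makes the model return mean cross-entropy over the window.
+         nlls.append(model(batch, labels=batch).loss.float() * seqlen)
+ print("PPL:", torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen)).item())
+ ```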
+ #### Evaluation scores
+ <table>
+   <tr>
+     <td><strong>Benchmark</strong></td>
+     <td><strong>Mixtral-8x7B-Instruct-v0.1</strong></td>
+     <td><strong>Mixtral-8x7B-Instruct-v0.1-FP8-KV (this model)</strong></td>
+   </tr>
+   <tr>
+     <td>Perplexity-wikitext2</td>
+     <td>4.1391</td>
+     <td>4.2187</td>
+   </tr>
+ </table>
+
+ #### License
+ Copyright (c) 2018-2024 Advanced Micro Devices, Inc. All Rights Reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.