Update README.md
README.md
CHANGED
@@ -10,6 +10,54 @@ It was created by merging the deltas provided in the above repo with the original
It was then quantized to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Provided files

Two model files are provided. Ideally, use the `safetensors` file. Details of the files provided:

* `vicuna-13B-1.1-GPTQ-4bit-128g.safetensors`
  * `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
  * Command to create (see the sketch after this list for how it fits together):
    * `python3 llama.py vicuna-13B-1.1-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors vicuna-13B-1.1-GPTQ-4bit-128g.safetensors`
* `vicuna-13B-1.1-GPTQ-4bit-128g.no-act-order.pt`
  * `pt` format file, created without the `--act-order` flag.
  * This file may have slightly lower quality, but is included as it can be used without needing to compile the latest GPTQ-for-LLaMa code.
  * It should therefore work with the one-click installers on Windows, which include the older GPTQ-for-LLaMa code.
  * Command to create:
    * `python3 llama.py vicuna-13B-1.1-HF c4 --wbits 4 --true-sequential --groupsize 128 --save vicuna-13B-1.1-GPTQ-4bit-128g.no-act-order.pt`
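
For context, here is a rough sketch of how the first quantization command above fits together. It is not needed to use the provided files; it assumes the merged fp16 weights are saved locally as `vicuna-13B-1.1-HF` (the path is an assumption) and that GPTQ-for-LLaMa's dependencies are already installed.

```
# Sketch only: reproducing the quantization is not required to use the provided files.
# Assumes ../vicuna-13B-1.1-HF holds the merged fp16 model (path is an assumption).
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python3 llama.py ../vicuna-13B-1.1-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 \
  --save_safetensors ../vicuna-13B-1.1-GPTQ-4bit-128g.safetensors
```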

## How to run in `text-generation-webui`

File `vicuna-13B-1.1-GPTQ-4bit-128g.no-act-order.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).

The other model file, `vicuna-13B-1.1-GPTQ-4bit-128g.safetensors`, was created with the latest GPTQ code and requires that the latest GPTQ-for-LLaMa code is used inside the UI.

Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and link GPTQ-for-LLaMa into the UI:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
# use an absolute link target so the symlink resolves from inside repositories/
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model vicuna-13B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
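
For the "install this model" step, one possible approach is to clone this repo straight into `models/`, run from the directory that contains `text-generation-webui`. This is only a sketch: the Hugging Face repo id below is an assumption inferred from the file names, and `git-lfs` must be installed.

```
# The repo id below is an assumption; adjust it to this model card's actual id.
git lfs install
git clone https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g \
  text-generation-webui/models/vicuna-13B-1.1-GPTQ-4bit-128g
```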

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
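
As a rough sketch of what that dependency setup can look like, assuming a working CUDA-enabled PyTorch environment and that each repo still ships a `requirements.txt`:

```
# Sketch only: each repository's README is the authoritative reference for its dependencies.
pip install -r text-generation-webui/requirements.txt
pip install -r GPTQ-for-LLaMa/requirements.txt
```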

If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```

Then link that into `text-generation-webui/repositories` as described above.
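
Spelled out, that linking step is the same as before; this sketch assumes both repositories sit side by side in the current directory:

```
# Run from the directory that contains both checkouts (layout is an assumption).
mkdir -p text-generation-webui/repositories
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```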

Or just use `vicuna-13B-1.1-GPTQ-4bit-128g.no-act-order.pt` as mentioned above.

# Vicuna Model Card

## Model details