Commit de3eb73 by TheBloke
Parent: 85a646d

Update README.md

Files changed (1): README.md (+35 -2)

README.md (updated):

This repo contains the weights of the Koala 7B model produced at Berkeley.

This version has then been quantized to 4bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa

* For the unquantized model in HF format, see this repo: https://huggingface.co/TheBloke/koala-7B-HF
* For the unquantized model in GGML format for llama.cpp (example invocation below), see this repo: https://huggingface.co/TheBloke/koala-7b-ggml-unquantized
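
As a rough sketch (not part of the original README): a GGML file from that repo would typically be run with llama.cpp's `main` binary, using Koala's conversation prompt format. The model filename below is hypothetical.

```
# Hypothetical filename: substitute the actual file shipped in the GGML repo.
./main -m koala-7B-4bit.ggml.bin -n 128 \
    -p "BEGINNING OF CONVERSATION: USER: How are you? GPT:"
```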

### WARNING: At present, the GPTQ files uploaded here seem to produce garbage output. It is not recommended to use them.

I'm working on diagnosing this issue. If you manage to get the files working, please let me know!

Quantization command was:
```
python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
```
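
For reference, this is roughly how inference with the resulting file would be attempted, assuming GPTQ-for-LLaMa's `llama_inference.py` script and its flags as of the time of writing (a sketch, not a tested command; note the warning above):

```
# --wbits and --groupsize must match the values used during quantization.
python3 llama_inference.py /content/koala-7B-HF --wbits 4 --groupsize 128 \
    --load /content/koala-7B-4bit-128g.pt \
    --text "BEGINNING OF CONVERSATION: USER: How are you? GPT:"
```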

The Koala delta weights were originally merged using the following commands, producing [koala-7B-HF](https://huggingface.co/TheBloke/koala-7B-HF):
```
git clone https://github.com/young-geng/EasyLM

git clone https://huggingface.co/nyanko7/LLaMA-7B

git clone https://huggingface.co/young-geng/koala koala_diffs

cd EasyLM

# Convert the original LLaMA PyTorch checkpoint to EasyLM's streaming format
PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.models.llama.convert_torch_to_easylm \
  --checkpoint_dir=/content/LLaMA-7B \
  --output_file=/content/llama-7B-LM \
  --streaming=True

# Apply the Koala v2 diff to the base checkpoint to recover the full weights
PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.scripts.diff_checkpoint --recover_diff=True \
  --load_base_checkpoint='params::/content/llama-7B-LM' \
  --load_target_checkpoint='params::/content/koala_diffs/koala_7b_diff_v2' \
  --output_file=/content/koala_7b.diff.weights \
  --streaming=True

# Convert the recovered EasyLM checkpoint to Hugging Face format
PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.models.llama.convert_easylm_to_hf --model_size=7b \
  --output_dir=/content/koala-7B-HF \
  --load_checkpoint='params::/content/koala_7b.diff.weights' \
  --tokenizer_path=/content/LLaMA-7B/tokenizer.model
```
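
As a quick sanity check (hypothetical, not part of the original README), the merged checkpoint can be test-loaded with transformers, assuming a version with LLaMA support (roughly 4.28+):

```
# Load the merged model and tokenizer in transformers and print the config.
python3 -c "
from transformers import LlamaForCausalLM, LlamaTokenizer
tok = LlamaTokenizer.from_pretrained('/content/koala-7B-HF')
model = LlamaForCausalLM.from_pretrained('/content/koala-7B-HF')
print(model.config)
"
```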

Check out the following links to learn more about the Berkeley Koala model.
* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)