iandennismiller committed
Commit 2013ae8
1 Parent(s): 5678267
initial commit of 8-bit quantized GGUF
Browse files
- .gitattributes +1 -0
- LLama-2-MedText-13b-q8_0.gguf +3 -0
- Readme.md +52 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
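The added rule is the line that `git lfs track` writes. A minimal sketch of reproducing this entry locally, assuming `git-lfs` is already installed and initialized for the repository:

```bash
# Track GGUF files with Git LFS; this appends the rule above to .gitattributes
git lfs track "*.gguf"

# Stage the updated attributes file so the rule is committed alongside the model
git add .gitattributes
```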
LLama-2-MedText-13b-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f12f95b748feba8d616514c759e969098339b41da11ad9c173a655a3b48ec19
+size 13831319392
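The three lines above are only a Git LFS pointer; the roughly 13.8 GB of weights live in LFS storage. A sketch of pulling just the GGUF and checking it against the recorded oid, where `<user>/<repo>` is a placeholder for this repository's actual path:

```bash
# Clone without smudging LFS payloads, then fetch only the GGUF
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/<user>/<repo>
cd <repo>
git lfs pull --include "LLama-2-MedText-13b-q8_0.gguf"

# The digest should match the sha256 oid recorded in the pointer file
sha256sum LLama-2-MedText-13b-q8_0.gguf
```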
Readme.md
ADDED
@@ -0,0 +1,52 @@
## LLama-2-MedText-13b-GGUF

Quantized GGUF of https://huggingface.co/truehealth/LLama-2-MedText-13b
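A rough sketch of how a q8_0 GGUF like this is typically produced with llama.cpp's conversion tools, assuming the merged HF-format weights are available locally (script and binary names reflect llama.cpp around the GGUF transition and may differ in newer checkouts):

```bash
# Convert the merged HF checkpoint to a 16-bit GGUF
python convert.py /path/to/LLama-2-MedText-13b \
    --outtype f16 \
    --outfile LLama-2-MedText-13b-f16.gguf

# Re-quantize the 16-bit GGUF down to 8-bit (q8_0)
./quantize LLama-2-MedText-13b-f16.gguf LLama-2-MedText-13b-q8_0.gguf q8_0
```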
## Usage

Interactive [llama.cpp](https://github.com/ggerganov/llama.cpp/) session:

```bash
llama-cpp \
    --instruct \
    --color \
    --in-prefix "[INST] " \
    --in-suffix "[\INST] " \
    --model LLama-2-MedText-13b-q8_0.gguf

== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.


> [INST] How confident are you in your knowledge and abilities?
[\INST] [RSP] As an AI language model, I can provide information to the best of my ability based on the resources I was trained on, which were primarily before <DATE>. While I strive to provide useful and accurate responses, my knowledge is not infinite, and I might not be able to provide professional medical advice or predictions in all cases. Additionally, healthcare decisions should always be evaluated in the context of an individual's unique circumstances and should be evaluated by a healthcare professional.
```
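For scripted, one-shot use, the same binary can also be run non-interactively with a fully formed prompt instead of an `--instruct` session. A sketch reusing the prompt format and question from the transcript above (the `llama-cpp` binary name and flags follow llama.cpp's `main` example; adjust for your build):

```bash
# One-shot generation with the quantized model
llama-cpp \
    --model LLama-2-MedText-13b-q8_0.gguf \
    --prompt "[INST] How confident are you in your knowledge and abilities? [\INST] [RSP]" \
    --n-predict 256 \
    --temp 0.7
```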
## Model card from truehealth/Llama-2-MedText-Delta-Preview

Trained on https://huggingface.co/datasets/BI55/MedText.

These are PEFT delta weights and need to be merged into LLama-2-13b to be used for inference.

library_name: peft

### Training procedure

The following bitsandbytes quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.5.0.dev0