Remex23 committed on
Commit
aa660c2
1 Parent(s): 7ec205e

Upload README.md with huggingface_hub

Files changed (1): README.md (+9 -16)
README.md CHANGED
@@ -1,20 +1,13 @@
+
 ---
-library_name: peft
+language: en
+tags:
+- llama-2
+- fine-tuning
+- causal-lm
+license: apache-2.0
 ---
-## Training procedure
-
-
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: False
-- bnb_4bit_compute_dtype: float16
-### Framework versions
 
+# Llama-2-finetune-Elsa
 
-- PEFT 0.4.0
+This is a fine-tuned version of the Llama-2-7b-chat model using the `Remex23/counselchat-llama2-full` dataset.
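
The removed section of the old card listed the `bitsandbytes` quantization settings used during training. As a minimal sketch, those same values could be expressed as a `transformers` `BitsAndBytesConfig`; the snippet below only mirrors the list from the old card and is not taken from this repo's training script:

```python
# Sketch: 4-bit quantization settings copied from the removed README section,
# expressed as a BitsAndBytesConfig (not pulled from the repo's actual code).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```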
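The new card names the Llama-2-7b-chat base model and the `Remex23/counselchat-llama2-full` dataset. A minimal usage sketch follows; the repo id `Remex23/Llama-2-finetune-Elsa` is an assumption inferred from the card title and is not stated in this commit:

```python
# Sketch: loading the fine-tuned chat model from the Hub and generating a reply.
# The repo id below is hypothetical, inferred from the card title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Remex23/Llama-2-finetune-Elsa"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "I have been feeling anxious lately. What can I do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```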