Update README.md

Commit 070f885 by chenhugging (parent: e471c8e)

README.md CHANGED
````diff
@@ -18,20 +18,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [chargoddard/Yi-6B-Llama](https://huggingface.co/chargoddard/Yi-6B-Llama) on the oncc_medqa_instruct dataset.
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
 ## Training procedure
 
+```
+accelerate launch --config_file accelerate_config.yaml src/train_bash.py --stage sft --do_train True --model_name_or_path /workspace/model --finetuning_type lora --quantization_bit 4 --flash_attn True --dataset_dir data --cutoff_len 1024 --learning_rate 0.0005 --num_train_epochs 1.0 --max_samples 10000 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 100 --warmup_steps 20 --neftune_noise_alpha 0.5 --lora_rank 8 --lora_dropout 0.2 --output_dir /workspace/model-update --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lora_target q_proj,v_proj --template llama2 --dataset oncc_medqa_instruct
+```
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
````
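The added command appears to invoke LLaMA-Factory's `src/train_bash.py` entry point for a 4-bit LoRA fine-tune. As a rough, unofficial sketch of what the key flags correspond to in plain `peft`/`transformers` terms (anything not in the command itself, such as the `lora_alpha` value, is an assumption):

```python
# Rough peft/transformers equivalents of the LLaMA-Factory flags above.
# This is a sketch for readability, not the script's actual internals.
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(load_in_4bit=True)  # --quantization_bit 4

lora_config = LoraConfig(
    r=8,                                   # --lora_rank 8
    lora_alpha=16,                         # assumption: not specified in the command
    lora_dropout=0.2,                      # --lora_dropout 0.2
    target_modules=["q_proj", "v_proj"],   # --lora_target q_proj,v_proj
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="/workspace/model-update",  # --output_dir
    learning_rate=5e-4,                    # --learning_rate 0.0005
    num_train_epochs=1.0,                  # --num_train_epochs 1.0
    per_device_train_batch_size=4,         # --per_device_train_batch_size 4
    gradient_accumulation_steps=4,         # --gradient_accumulation_steps 4
    lr_scheduler_type="cosine",            # --lr_scheduler_type cosine
    max_grad_norm=1.0,                     # --max_grad_norm 1.0
    warmup_steps=20,                       # --warmup_steps 20
    logging_steps=10,                      # --logging_steps 10
    save_steps=100,                        # --save_steps 100
    neftune_noise_alpha=0.5,               # --neftune_noise_alpha 0.5
)
```

LLaMA-Factory wires these pieces together itself; the sketch only makes the flag values legible.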
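For completeness, a minimal inference sketch against the resulting model. The repository id below is hypothetical (substitute the actual repo), and the llama2-style prompt format is inferred from `--template llama2`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual fine-tuned model repository.
model_id = "chenhugging/Yi-6B-oncc-medqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# --template llama2 suggests prompts were formatted in the llama2 chat style.
prompt = "[INST] A patient presents with chest pain radiating to the left arm. What is the most likely diagnosis? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```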