---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
  results: []
---

# llama2_instruct_generation

This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6734

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged usage sketch is provided under "How to use" at the end of this card.

## Training and evaluation data

More information needed. (The `generator` dataset name above is most likely the placeholder the trainer records when the training data is built from a Python generator, for example a packed dataset produced during SFT; the underlying instruction data is not documented here.)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch reconstructing this configuration appears at the end of this card):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03 (a fractional value, so this was most likely logged from a warmup ratio of 0.03 rather than a step count)
- training_steps: 500

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9364        | 0.0   | 20   | 1.8092          |
| 1.9198        | 0.01  | 40   | 1.7826          |
| 1.8451        | 0.01  | 60   | 1.7675          |
| 1.8487        | 0.01  | 80   | 1.7573          |
| 1.8667        | 0.01  | 100  | 1.7435          |
| 1.7463        | 0.02  | 120  | 1.7132          |
| 1.7789        | 0.02  | 140  | 1.7038          |
| 1.8167        | 0.02  | 160  | 1.7008          |
| 1.8654        | 0.02  | 180  | 1.6944          |
| 1.9158        | 0.03  | 200  | 1.6939          |
| 1.6581        | 0.03  | 220  | 1.6909          |
| 1.793         | 0.03  | 240  | 1.6896          |
| 1.7878        | 0.04  | 260  | 1.6872          |
| 1.7542        | 0.04  | 280  | 1.6862          |
| 1.7723        | 0.04  | 300  | 1.6863          |
| 1.7606        | 0.04  | 320  | 1.6832          |
| 1.8054        | 0.05  | 340  | 1.6802          |
| 1.7307        | 0.05  | 360  | 1.6803          |
| 1.8278        | 0.05  | 380  | 1.6790          |
| 1.7912        | 0.05  | 400  | 1.6768          |
| 1.7826        | 0.06  | 420  | 1.6749          |
| 1.8975        | 0.06  | 440  | 1.6756          |
| 1.8395        | 0.06  | 460  | 1.6763          |
| 1.8319        | 0.07  | 480  | 1.6749          |
| 1.7879        | 0.07  | 500  | 1.6734          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.37.2
- PyTorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
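
## How to use

This repository contains a PEFT (LoRA) adapter rather than full model weights, so it is loaded on top of the base model. The sketch below is a minimal example, not a documented API for this card: the adapter repo id `<user>/llama2_instruct_generation` is a placeholder, and the instruction-style prompt format is an assumption, since the card does not record a prompt template.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Llama-2-7b-hf"

# Load the frozen base model and its tokenizer.
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the fine-tuned adapter. "<user>/llama2_instruct_generation" is a
# placeholder for wherever this adapter is actually hosted.
model = PeftModel.from_pretrained(base_model, "<user>/llama2_instruct_generation")
model.eval()

# Assumed instruction-style prompt; the card does not document a template.
prompt = "### Instruction:\nSummarize what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```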
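
## Reconstructing the training configuration (sketch)

Given the `trl` and `sft` tags, the hyperparameters above plausibly map onto a TRL `SFTTrainer` run. The sketch below is a reconstruction under assumptions, not the original training script: only the values listed under "Training hyperparameters" come from the card, while the LoRA settings, dataset (a public instruction dataset stands in for the undocumented training data), sequence length, and output directory are placeholders.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "NousResearch/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 has no pad token by default

# Placeholder instruction dataset with a plain "text" column; the actual
# training data is not documented in the card.
data = load_dataset("timdettmers/openassistant-guanaco")
train_dataset, eval_dataset = data["train"], data["test"]

# Hypothetical LoRA settings; the card does not record the adapter config.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Values below mirror the "Training hyperparameters" section; the default
# AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-8.
args = TrainingArguments(
    output_dir="llama2_instruct_generation",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,  # logged as lr_scheduler_warmup_steps: 0.03, which reads as a ratio
    max_steps=500,
    evaluation_strategy="steps",
    eval_steps=20,  # matches the 20-step evaluation cadence in the results table
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumes a pre-formatted "text" column
    max_seq_length=1024,        # assumed; not recorded in the card
    packing=True,               # on-the-fly packing would explain the "generator" dataset name
)
trainer.train()
```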