---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
  results: []
---

# llama2_instruct_generation

This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6750

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500

A hedged reconstruction of this configuration appears under "Example training setup" below.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8413        | 0.0   | 20   | 1.8102          |
| 1.9042        | 0.01  | 40   | 1.7811          |
| 1.8402        | 0.01  | 60   | 1.7657          |
| 1.8856        | 0.01  | 80   | 1.7511          |
| 1.9212        | 0.01  | 100  | 1.7390          |
| 1.807         | 0.02  | 120  | 1.7090          |
| 1.8321        | 0.02  | 140  | 1.7029          |
| 1.871         | 0.02  | 160  | 1.6979          |
| 1.848         | 0.02  | 180  | 1.6947          |
| 1.8378        | 0.03  | 200  | 1.6908          |
| 1.746         | 0.03  | 220  | 1.6893          |
| 1.7568        | 0.03  | 240  | 1.6874          |
| 1.8227        | 0.04  | 260  | 1.6860          |
| 1.8134        | 0.04  | 280  | 1.6835          |
| 1.8026        | 0.04  | 300  | 1.6819          |
| 1.8267        | 0.04  | 320  | 1.6831          |
| 1.7998        | 0.05  | 340  | 1.6816          |
| 1.8747        | 0.05  | 360  | 1.6793          |
| 1.8478        | 0.05  | 380  | 1.6785          |
| 1.8627        | 0.05  | 400  | 1.6776          |
| 1.7956        | 0.06  | 420  | 1.6783          |
| 1.7184        | 0.06  | 440  | 1.6764          |
| 1.7038        | 0.06  | 460  | 1.6753          |
| 1.9049        | 0.07  | 480  | 1.6764          |
| 1.8113        | 0.07  | 500  | 1.6750          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
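
### Example training setup

The card records only the trainer hyperparameters, so the script that produced this adapter cannot be fully reconstructed. The sketch below wires the listed values into `trl`'s `SFTTrainer` with a PEFT LoRA config; the dataset choice, LoRA settings, and sequence length are assumptions and are flagged as such in the comments. The "generator" dataset name in the card is most likely a placeholder that the trainer reports when it is fed a packed, generator-style dataset rather than a named one.

```python
# A minimal sketch, not the original script. Values marked "from the card"
# come from the hyperparameter list above; everything else is an assumption.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "NousResearch/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default

# Hypothetical instruction dataset; the actual one is not recorded in the card.
train_ds = load_dataset("timdettmers/openassistant-guanaco", split="train")
eval_ds = load_dataset("timdettmers/openassistant-guanaco", split="test")

# Assumed LoRA settings; the card does not record the adapter config.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="llama2_instruct_generation",
    learning_rate=2e-4,             # from the card
    per_device_train_batch_size=4,  # from the card
    per_device_eval_batch_size=8,   # from the card
    seed=42,                        # from the card
    lr_scheduler_type="constant",   # from the card
    warmup_ratio=0.03,              # from the card
    max_steps=500,                  # from the card
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    logging_steps=20,               # matches the 20-step cadence in the results table
    evaluation_strategy="steps",
    eval_steps=20,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # text column in the assumed dataset
    max_seq_length=1024,        # assumption; not recorded in the card
    packing=True,               # likely source of the "generator" dataset name
)
trainer.train()
```

Passing `peft_config` makes `SFTTrainer` wrap the base model with a LoRA adapter, so only the adapter weights are trained and saved, consistent with `library_name: peft` above.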
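
### Example inference

A minimal usage sketch, assuming the adapter from this run is available locally or on the Hub under `llama2_instruct_generation` (the exact repo id is an assumption); the prompt format and generation settings are illustrative, not recorded in the card.

```python
# A minimal inference sketch; the repo id, dtype, prompt format, and
# generation settings are assumptions.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "llama2_instruct_generation",  # local path or Hub id of this adapter (assumed)
    torch_dtype=torch.float16,
    device_map="auto",             # requires `accelerate`
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

prompt = "### Human: Summarize LoRA fine-tuning in one paragraph. ### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

`AutoPeftModelForCausalLM` resolves the base checkpoint from the adapter's saved config, so only the adapter id needs to be passed.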