---
license: llama3
library_name: peft
tags:
  - generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
  - name: shawgpt-ft
    results: []
---

# shawgpt-ft

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), trained as a PEFT adapter on an unspecified dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

- Loss: 1.1847
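Since this repo ships a PEFT adapter rather than full model weights, it is loaded on top of the gated base model. A minimal loading sketch, assuming a LoRA-style adapter; the adapter repo id is a placeholder taken from this card's title:

```python
# Minimal loading sketch. The adapter repo id below is an assumption taken
# from this card's title; access to the gated Llama 3 base model is required.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "shawgpt-ft"  # assumed: replace with the full <user>/<repo> id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```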

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
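Taken together, these settings map onto a `transformers.TrainingArguments` sketch like the one below. The `output_dir`, `optim` choice, and evaluation/logging strategies are assumptions (the per-epoch evaluation is inferred from the results table); the dataset, data collator, and PEFT/LoRA configuration are not documented on this card and are omitted.

```python
# Hedged reconstruction of the documented hyperparameters; not the author's
# verbatim training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="shawgpt-ft",         # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    optim="adamw_torch",             # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=10,
    fp16=True,                       # mixed precision via native AMP
    eval_strategy="epoch",           # assumed from the per-epoch results table
    logging_strategy="epoch",
)
```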

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2637        | 0.9992 | 652  | 1.1808          |
| 1.142         | 2.0    | 1305 | 1.1452          |
| 1.0811        | 2.9992 | 1957 | 1.1300          |
| 1.0268        | 4.0    | 2610 | 1.1262          |
| 0.9815        | 4.9992 | 3262 | 1.1269          |
| 0.9389        | 6.0    | 3915 | 1.1323          |
| 0.9061        | 6.9992 | 4567 | 1.1498          |
| 0.8749        | 8.0    | 5220 | 1.1575          |
| 0.8523        | 8.9992 | 5872 | 1.1676          |
| 0.8351        | 9.9923 | 6520 | 1.1847          |
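Note that validation loss bottoms out around epoch 4 (1.1262 at step 2610) and rises steadily afterwards while training loss keeps falling, so the final checkpoint is likely overfit relative to the best one. This run trained the full 10 epochs; as a sketch only, checkpoint selection with early stopping could be added via the built-in `transformers` callback, which is not part of this card's documented procedure:

```python
# Sketch only: settings for retaining the lowest-validation-loss checkpoint
# instead of the final one. Not used in the run documented above.
from transformers import EarlyStoppingCallback, TrainingArguments

best_ckpt_args = TrainingArguments(
    output_dir="shawgpt-ft",           # assumed
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,       # restore the best checkpoint after training
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

# Passed to Trainer via callbacks=[...]; stops after 2 epochs without improvement.
stopper = EarlyStoppingCallback(early_stopping_patience=2)
```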

### Framework versions

- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1