Model card metadata:
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
license: llama3
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: llama-3-8b-ocr-correction
    results: []

Built with Axolotl

The axolotl config used for this run (axolotl version 0.4.1):

base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

lora_fan_in_fan_out: false
data_seed: 49
seed: 49

datasets:
  - path: ft_data/alpaca_data.jsonl
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-alpaca-out
hub_model_id: pbevan11/llama-3-8b-ocr-correction

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: ocr-ft
wandb_entity: sncds
wandb_name: test

gradient_accumulation_steps: 4
micro_batch_size: 2 # was 16
eval_batch_size: 2 # was 16
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|end_of_text|>"
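
For orientation, the LoRA settings above (lora_r: 32 across all seven linear projections) train roughly 84M parameters while the 4-bit base model stays frozen. A back-of-the-envelope count, assuming Meta-Llama-3-8B's published dimensions (these numbers are not in the config itself):

```python
# Hypothetical parameter count for the QLoRA config above, assuming
# Meta-Llama-3-8B's published dimensions (not stated in the config).
hidden, intermediate, n_layers = 4096, 14336, 32
kv_dim = 1024  # 8 key/value heads x 128 head_dim (grouped-query attention)
r = 32         # lora_r from the config

# (in_features, out_features) of each targeted projection matrix
shapes = {
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv_dim),
    "v_proj": (hidden, kv_dim),
    "o_proj": (hidden, hidden),
    "gate_proj": (hidden, intermediate),
    "up_proj": (hidden, intermediate),
    "down_proj": (intermediate, hidden),
}

# Each LoRA pair (A: d_in x r, B: r x d_out) adds r * (d_in + d_out) parameters.
per_layer = sum(r * (d_in + d_out) for d_in, d_out in shapes.values())
print(f"~{per_layer * n_layers / 1e6:.1f}M trainable parameters")  # ~83.9M
```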


llama-3-8b-ocr-correction

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the Alpaca-format OCR-correction dataset listed in the config above (ft_data/alpaca_data.jsonl). It achieves the following results on the evaluation set:

  • Loss: 0.1742

Model description

This is a QLoRA adapter for meta-llama/Meta-Llama-3-8B, trained to correct OCR errors in text (per the model name and the ocr-ft Weights & Biases project). The base model is loaded in 4-bit precision and frozen; only the rank-32 LoRA weights on the attention and MLP projections (q/k/v/o, gate/up/down) are trained, and this repository contains those adapter weights.

Intended uses & limitations

The adapter is intended for post-correcting noisy OCR output. It inherits the Llama 3 license and the base model's limitations, and has only been evaluated via validation loss on a held-out split; correction quality on real OCR text has not been formally measured here.
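
A minimal inference sketch, assuming the adapter is applied to the base model quantized to 4-bit as in training; the exact prompt template depends on how the Alpaca data was formatted, so the prompt below is hypothetical:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "pbevan11/llama-3-8b-ocr-correction"

# Load the frozen base model in 4-bit, mirroring the QLoRA training setup.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# Hypothetical Alpaca-style prompt; adjust to match the training template.
prompt = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    "### Instruction:\nCorrect the OCR errors in the following text.\n\n"
    "### Input:\nTlie quick brovvn fox jurnps over the 1azy dog.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```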

Training and evaluation data

The model was trained on ft_data/alpaca_data.jsonl in Alpaca instruction format, with 10% of the data held out for evaluation (val_set_size: 0.1). The dataset itself is not published with this card.
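
Axolotl's alpaca dataset type expects JSONL records with instruction, input, and output fields. A hypothetical record illustrating the expected shape (the real contents of ft_data/alpaca_data.jsonl are not published with this card):

```python
import json

# Illustrative only: the field names follow the Alpaca format; the OCR
# text shown here is invented, not taken from the training data.
record = {
    "instruction": "Correct the OCR errors in the following text.",
    "input": "Tlie quick brovvn fox jurnps over the 1azy dog.",
    "output": "The quick brown fox jumps over the lazy dog.",
}
with open("ft_data/alpaca_data.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```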

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 49
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8 (micro_batch_size 2 × gradient_accumulation_steps 4)
  • optimizer: paged AdamW (32-bit) with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 2
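
As a sanity check on the schedule, a minimal sketch of the cosine decay with 10 warmup steps, assuming the 120 total optimizer steps shown in the results table below (transformers' get_cosine_schedule_with_warmup implements the same cosine-with-warmup shape the trainer uses):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Dummy one-parameter optimizer, used only to drive the scheduler.
opt = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=2e-4)
sched = get_cosine_schedule_with_warmup(
    opt, num_warmup_steps=10, num_training_steps=120
)

lrs = []
for _ in range(120):
    opt.step()
    sched.step()
    lrs.append(sched.get_last_lr()[0])

# Learning rate peaks at 2e-4 after step 10, then decays toward 0 by step 120.
print(f"peak lr: {max(lrs):.2e}, final lr: {lrs[-1]:.2e}")
```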

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|--------------:|-------:|-----:|----------------:|
| 0.6611        | 0.0165 | 1    | 0.6229          |
| 0.3149        | 0.2469 | 15   | 0.2870          |
| 0.2074        | 0.4938 | 30   | 0.2166          |
| 0.2211        | 0.7407 | 45   | 0.1937          |
| 0.195         | 0.9877 | 60   | 0.1825          |
| 0.1411        | 1.2140 | 75   | 0.1787          |
| 0.1348        | 1.4609 | 90   | 0.1760          |
| 0.1479        | 1.7078 | 105  | 0.1743          |
| 0.1413        | 1.9547 | 120  | 0.1742          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.42.3
  • PyTorch 2.1.2+cu118
  • Datasets 2.19.1
  • Tokenizers 0.19.1