
Llama-2 Chat LoRA SFT training, Stage A, on German datasets:

German_Songs, German_Poems, bjoernp_ultrachat_de, OpenSchnabeltier, ultrachat_de, oasst_de, dolly_15k_de, alpaca-gpt4_de, openschnabeltier_de, evol_instruct_de, dolphin_de, booksum_de, airoboros_de; evaluated with VAGOsolutions/MT-Bench-TrueGerman.

Stage B: resume LoRA training with ORPO on the dataset mayflowergmbh/intel_orca_dpo_pairs_de.

Oh, and I am not a German speaker. ^^
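
A minimal inference sketch, assuming the base model is meta-llama/Llama-2-13b-chat-hf and that this repository hosts the LoRA adapter (if merged weights are published instead, load the repo directly with AutoModelForCausalLM); prompts should follow the Alpaca template used in training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"      # assumption: Llama-2 13B chat base
adapter_id = "Nekochu/Llama-2-13B-German-ORPO"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style prompt (the adapter was trained with --template alpaca).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSchreibe ein kurzes Gedicht über den Herbst.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```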

Training hyperparameters

python src/train_bash.py \
    --stage sft ... \
    --finetuning_type lora --quantization_bit 4 --template alpaca \
    --rope_scaling linear --flash_attn True \
    --dataset_dir data \
    --dataset German_Songs,German_Poems,bjoernp_ultrachat_de,OpenSchnabeltier,ultrachat_de,oasst_de,dolly_15k_de,alpaca-gpt4_de,openschnabeltier_de,evol_instruct_de,dolphin_de,booksum_de,airoboros_de \
    --cutoff_len 4096 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 \
    --per_device_train_batch_size 1 --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine --max_grad_norm 1.0 \
    --logging_steps 5 --save_steps 1000 --warmup_steps 0 \
    --neftune_noise_alpha 0.5 --optim adamw_torch --upcast_layernorm True --use_llama_pro True \
    --bf16 True \
    --lora_rank 512 --lora_alpha 1024 --lora_dropout 0.15 --lora_target all --use_rslora True \
    --additional_target all --create_new_adapter True --plot_loss True
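
A note on the adapter settings above: with --use_rslora True the LoRA update is scaled by alpha/sqrt(r) (rank-stabilized LoRA) rather than the classic alpha/r, so at rank 512 and alpha 1024 the effective scaling is far larger than the nominal 2.0, which is why the learning rates are tuned rather than left at defaults (see the hyperparameter note below). A quick sketch of the arithmetic:

```python
import math

rank, alpha = 512, 1024
classic_scaling = alpha / rank            # standard LoRA scaling: 2.0
rslora_scaling = alpha / math.sqrt(rank)  # rank-stabilized LoRA: ~45.25
print(classic_scaling, rslora_scaling)
```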

python src/train_bash.py \
    --stage orpo ... \
    --finetuning_type lora --quantization_bit 4 --template alpaca \
    --rope_scaling linear --flash_attn True \
    --dataset_dir data --dataset orca_dpo_de \
    --cutoff_len 4096 --learning_rate 1e-05 --num_train_epochs 1.0 --max_samples 100000 \
    --per_device_train_batch_size 1 --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine --max_grad_norm 0.9 \
    --logging_steps 5 --save_steps 250 --warmup_steps 100 \
    --neftune_noise_alpha 0.5 --optim adamw_torch --upcast_layernorm True --use_llama_pro True \
    --report_to none --bf16 True \
    --lora_rank 512 --lora_alpha 1024 --lora_dropout 0.15 --use_rslora True --lora_target all \
    --additional_target all --orpo_beta 0.1 --plot_loss True
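
ORPO trains on preference pairs rather than plain instruction/response examples; the orca_dpo_de entry corresponds to mayflowergmbh/intel_orca_dpo_pairs_de. Below is a sketch of what a single record looks like; the field names follow the original Intel orca_dpo_pairs layout and are an assumption here, so check the dataset card and data/dataset_info.json for the actual column mapping:

```python
# Illustrative preference pair for the ORPO stage (field names assumed).
example = {
    "system": "Du bist ein hilfreicher Assistent.",                    # system prompt
    "question": "Erkläre den Unterschied zwischen Wetter und Klima.",  # user prompt
    "chosen": "Wetter beschreibt den kurzfristigen Zustand der Atmosphäre, "
              "Klima den langjährigen Durchschnitt über Jahrzehnte.",  # preferred answer
    "rejected": "Wetter und Klima sind dasselbe.",                     # dispreferred answer
}
```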

The following hyperparameters were used during training:

  • learning_rate: 1e-05 # not the default LR; adjusted for the high LoRA rank (512) and alpha (1024)
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 1.0
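
For orientation, a standalone sketch of the equivalent optimizer/scheduler setup in PyTorch/transformers (illustrative only; the placeholder module and step count are assumptions, and LLaMA-Factory builds these internally):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)   # placeholder module standing in for the LoRA model
num_training_steps = 10_000     # placeholder; depends on dataset size and batch size

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1000,                # lr_scheduler_warmup_steps
    num_training_steps=num_training_steps,
)
```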

Framework versions

  • PEFT 0.10.0
  • Transformers 4.39.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
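
With the PEFT release listed above, the adapter can also be merged into the base weights for standalone deployment; a sketch, again assuming meta-llama/Llama-2-13b-chat-hf as the base and that this repository hosts the adapter:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"  # assumption
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, "Nekochu/Llama-2-13B-German-ORPO").merge_and_unload()

out_dir = "Llama-2-13B-German-ORPO-merged"
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
```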