
Warning: bad-response data intended for KTO preference training was accidentally included in the training mix.

Built with Axolotl

See axolotl config

axolotl version: 0.4.1

base_model: Qwen/Qwen2-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
    type: sharegpt
    conversation: chatml
  - path: NewEden/kaloisazasedhandsomefurry
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
    conversation: chatml
  - path: AquaV/Chemical-Biological-Safety-Applications-Sharegpt
    type: sharegpt
    conversation: chatml
  - path: AquaV/Energetic-Materials-Sharegpt
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/NopmWritingStruct
    type: sharegpt
    conversation: chatml
  - path: NewEden/Claude-Instruct-5k
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: NewEden/Stheno-Data-filtered-8k-subset
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
    type: sharegpt
    conversation: chatml
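  # Each dataset above is expected in ShareGPT format -- a list of records like
  # {"conversations": [{"from": "human", "value": "..."},
  #                    {"from": "gpt",   "value": "..."}]}
  # (field values illustrative), rendered with the ChatML template at train time.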
    
chat_template: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/out
#sequence_len: 16384
sequence_len: 8192
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: henbane 7b
wandb_entity:
wandb_watch:
wandb_name: henbane 7b
wandb_log_model:


plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
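# Liger swaps in fused Triton kernels (RoPE, RMSNorm, SwiGLU, and the fused
# linear + cross-entropy step) to reduce memory use and raise throughput.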


gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
weight_decay: 0.5
special_tokens:

deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
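Since the config sets chat_template: chatml, prompts at inference time should use the ChatML format. A minimal sketch with transformers (assuming the uploaded tokenizer carries the ChatML chat template; the repo id is taken from this card and the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this card; the prompt below is illustrative.
model_id = "Delta-Vector/Henbane-7b-attempt1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a short scene set in a rainy city."},
]
# apply_chat_template renders the ChatML markup
# (<|im_start|>role ... <|im_end|>) stored with the tokenizer.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```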

outputs/out

This model is a fine-tuned version of Qwen/Qwen2-7B on the datasets listed in the axolotl config above. It achieves the following results on the evaluation set:

  • Loss: 1.0715

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

Training used the datasets listed in the axolotl config above, with 5% of the data held out for evaluation (val_set_size: 0.05).

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 64 (see the sanity check after this list)
  • total_eval_batch_size: 2
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 2
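
The reported total_train_batch_size is just the product of the per-device batch size, the gradient accumulation steps, and the device count; a quick sanity check:

```python
# Effective batch size = per-device micro batch x grad-accum steps x GPUs.
micro_batch_size = 1             # train_batch_size above
gradient_accumulation_steps = 32
num_devices = 2

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 64
```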

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4364        | 0.0078 | 1    | 1.4088          |
| 1.1643        | 0.2499 | 32   | 1.1562          |
| 1.1112        | 0.4999 | 64   | 1.1158          |
| 1.0908        | 0.7498 | 96   | 1.0920          |
| 1.0575        | 0.9998 | 128  | 1.0752          |
| 0.8988        | 1.2331 | 160  | 1.0832          |
| 0.8887        | 1.4830 | 192  | 1.0752          |
| 0.8821        | 1.7330 | 224  | 1.0722          |
| 0.8939        | 1.9829 | 256  | 1.0715          |

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1