---
base_model: alpindale/Mistral-7B-v0.2-hf
tags:
  - generated_from_trainer
model-index:
  - name: workspace/dolphin-2.8-mistral-7b
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.4.0


base_model: alpindale/Mistral-7B-v0.2-hf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /workspace/datasets/dolphin201-sharegpt2.jsonl
    type: sharegpt
  - path: /workspace/datasets/dolphin-coder-translate-sharegpt2.jsonl
    type: sharegpt
  - path: /workspace/datasets/dolphin-coder-codegen-sharegpt2.jsonl
    type: sharegpt
  - path: /workspace/datasets/m-a-p_Code-Feedback-sharegpt.jsonl
    type: sharegpt
  - path: /workspace/datasets/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt.jsonl
    type: sharegpt
  - path: /workspace/datasets/not_samantha_norefusals.jsonl
    type: sharegpt
  - path: /workspace/datasets/openhermes2_5-sharegpt.jsonl
    type: sharegpt

chat_template: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: /workspace/dolphin-2.8-mistral-7b

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true

wandb_project: dolphin
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 3
num_epochs: 4
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000005
optimizer: adamw_bnb_8bit

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10

eval_steps: 73
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
saves_per_epoch: 
save_steps: 73
save_total_limit: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
tokens:
  - "<|im_start|>"

workspace/dolphin-2.8-mistral-7b

This model is a fine-tuned version of alpindale/Mistral-7B-v0.2-hf, trained on the datasets listed in the axolotl config above. It achieves the following results on the evaluation set:

  • Loss: 0.4828

Model description

More information needed

Intended uses & limitations

More information needed
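
Although the card itself gives no usage details, the axolotl config above sets chat_template: chatml and registers <|im_start|> / <|im_end|> as special tokens, so ChatML prompting is the expected format. Below is a minimal inference sketch with transformers; the model path, system prompt, and generation settings are illustrative assumptions, not values documented in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path; point this at the published checkpoint or a local copy.
model_id = "path/to/dolphin-2.8-mistral-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# ChatML turns; the rendered prompt has the shape
#   <|im_start|>system\n...<|im_end|>\n<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Explain gradient checkpointing in one paragraph."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```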

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 3
  • eval_batch_size: 3
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 10
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 240
  • total_eval_batch_size: 30
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 4
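
For reference, the total train batch size above is just the per-device batch size multiplied by the gradient accumulation steps and the number of GPUs (3 × 8 × 10 = 240), and the total eval batch size is the per-device eval batch size times the number of GPUs (3 × 10 = 30, with no accumulation at evaluation time).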

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1736        | 0.0   | 1    | 1.0338          |
| 0.6106        | 0.36  | 73   | 0.5439          |
| 0.5766        | 0.72  | 146  | 0.5171          |
| 0.5395        | 1.06  | 219  | 0.5045          |
| 0.5218        | 1.42  | 292  | 0.4976          |
| 0.5336        | 1.78  | 365  | 0.4915          |
| 0.5018        | 2.13  | 438  | 0.4885          |
| 0.5113        | 2.48  | 511  | 0.4856          |
| 0.5066        | 2.84  | 584  | 0.4838          |
| 0.4967        | 3.19  | 657  | 0.4834          |
| 0.4956        | 3.55  | 730  | 0.4830          |
| 0.5026        | 3.9   | 803  | 0.4828          |

Framework versions

  • Transformers 4.40.0.dev0
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.0

Quants