See axolotl config

axolotl version: 0.4.0

```yaml
adapter: qlora
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
bf16: true
chat_template: inst
dataset_prepared_path: last_run_prepared
datasets:
- conversation: mistral
  path: 8fd5e1342aa5463fae5081517560b789/./data/with_function_response/more_functions/function_not_used_one_more_function_training.jsonl
  type: sharegpt
- conversation: mistral
  path: 8fd5e1342aa5463fae5081517560b789/./data/with_function_response/more_functions/function_used_one_more_function_training.jsonl
  type: sharegpt
debug: null
eval_max_new_tokens: 256
eval_steps: 0.05
eval_table_size: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: liuylhf/empower-functions-more-tools-diverse-data-adds-one-more-function
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_model_dir: null
lora_r: 32
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
loss_watchdog_patience: 3
loss_watchdog_threshold: 5.0
lr_scheduler: cosine
micro_batch_size: 2
model_config:
  output_router_logits: true
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: paged_adamw_8bit
output_dir: 8fd5e1342aa5463fae5081517560b789/model
pad_to_sequence_len: true
sample_packing: true
save_steps: 0.1
sequence_len: 4096
strict: false
tf32: false
tokenizer_type: LlamaTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.01
wandb_log_model: end
wandb_name: more-tools
wandb_project: function-call
warmup_steps: 10
weight_decay: 0.0
```
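With axolotl 0.4.0 installed, a run from this config is typically launched with `accelerate launch -m axolotl.cli.train config.yaml` (assuming the YAML above is saved as `config.yaml`); the exact invocation depends on your environment and GPU setup.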
empower-functions-more-tools-diverse-data-adds-one-more-function
This model is a QLoRA fine-tuned version of mistralai/Mixtral-8x7B-Instruct-v0.1, trained on the function-calling datasets listed in the axolotl config above. It achieves the following results on the evaluation set:
- Loss: 0.0873
Model description
This is a QLoRA adapter (LoRA rank 32, alpha 64, dropout 0.05) for mistralai/Mixtral-8x7B-Instruct-v0.1, applied to the attention projection modules (q_proj, k_proj, v_proj, o_proj). The base model was loaded in 4-bit during training, and the adapter targets function/tool-calling conversations.
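As a rough illustration, the sketch below shows one way to load the adapter on top of the 4-bit-quantized base model with transformers and peft. The model IDs come from the config above; the prompt and generation settings are placeholders only.

```python
# Minimal sketch: load the LoRA adapter on top of the 4-bit base model.
# Assumes transformers, peft, and bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
adapter_id = "liuylhf/empower-functions-more-tools-diverse-data-adds-one-more-function"

# 4-bit quantization, mirroring the QLoRA training setup (load_in_4bit: true, bf16: true).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Placeholder prompt in the Mixtral instruct format used during training.
prompt = "[INST] What's the weather like in Boston today? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this repository only contains the LoRA weights; the full Mixtral-8x7B base weights are downloaded separately from the base model repository.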
Intended uses & limitations
More information needed
Training and evaluation data
The adapter was trained on two ShareGPT-format JSONL files using the mistral conversation template: function_not_used_one_more_function_training.jsonl and function_used_one_more_function_training.jsonl. 1% of the data (val_set_size: 0.01) was held out for evaluation, and samples were packed into sequences of up to 4096 tokens.
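For reference, here is a hypothetical record in the layout that axolotl's `type: sharegpt` loader expects (a `conversations` list of `from`/`value` turns); the content below is invented for illustration and is not taken from the actual training files.

```python
import json

# Hypothetical ShareGPT-style record; each line of the .jsonl training files
# would be one such JSON object.
record = {
    "conversations": [
        {"from": "human", "value": "What's the weather like in Boston right now?"},
        {"from": "gpt", "value": "It is currently 72°F and sunny in Boston."},
    ]
}
print(json.dumps(record))
```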
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: paged 8-bit AdamW (paged_adamw_8bit) with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
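The total train batch size of 16 is the per-device micro-batch size (2) × gradient accumulation steps (4) × number of devices (2); likewise, the total eval batch size of 4 is 2 per device × 2 devices.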
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
2.2516 | 0.0 | 1 | 2.1498 |
0.1342 | 0.05 | 25 | 0.1461 |
0.1297 | 0.1 | 50 | 0.1167 |
0.1098 | 0.15 | 75 | 0.1080 |
0.0895 | 0.2 | 100 | 0.1025 |
0.0985 | 0.25 | 125 | 0.1007 |
0.0987 | 0.3 | 150 | 0.0984 |
0.0988 | 0.35 | 175 | 0.0971 |
0.0989 | 0.4 | 200 | 0.0947 |
0.1109 | 0.45 | 225 | 0.0937 |
0.0957 | 0.5 | 250 | 0.0934 |
0.1038 | 0.55 | 275 | 0.0924 |
0.0969 | 0.6 | 300 | 0.0917 |
0.096 | 0.65 | 325 | 0.0901 |
0.0893 | 0.7 | 350 | 0.0897 |
0.0768 | 0.75 | 375 | 0.0887 |
0.0848 | 0.8 | 400 | 0.0882 |
0.0854 | 0.85 | 425 | 0.0878 |
0.083 | 0.9 | 450 | 0.0874 |
0.0868 | 0.95 | 475 | 0.0873 |
Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
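A matching environment can be approximated with, for example, `pip install peft==0.9.0 transformers==4.38.2 datasets==2.18.0 tokenizers==0.15.0` together with a PyTorch 2.2.1 build for CUDA 12.1.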