05/11/2024 18:42:43 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/tokenizer.json
05/11/2024 18:42:43 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
05/11/2024 18:42:43 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/special_tokens_map.json
05/11/2024 18:42:43 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/tokenizer_config.json
05/11/2024 18:42:43 - WARNING - transformers.tokenization_utils_base - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
05/11/2024 18:42:43 - INFO - llmtuner.data.template - Add pad token: <|eot_id|>
05/11/2024 18:42:43 - INFO - llmtuner.data.loader - Loading dataset alpaca_data_en_52k.json...
05/11/2024 18:42:59 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/11/2024 18:42:59 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 128256
}
05/11/2024 18:43:00 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/model.safetensors.index.json
05/11/2024 19:09:02 - INFO - transformers.modeling_utils - Instantiating LlamaForCausalLM model under default dtype torch.float16.
05/11/2024 19:09:02 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128001
}
05/11/2024 19:09:07 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing LlamaForCausalLM.
05/11/2024 19:09:07 - INFO - transformers.modeling_utils - All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Meta-Llama-3-8B-Instruct. If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
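The entries above are LLaMA-Factory's llmtuner loader pulling the tokenizer and the Meta-Llama-3-8B-Instruct config and weights from the local Hugging Face cache. A minimal plain-transformers sketch of the same load, for reference only; the model id, pad token, and dtype come from the log, and this is an assumption about an equivalent manual load, not llmtuner's actual loader code:

```python
# Sketch only: reproduce the tokenizer/model load logged above outside of
# LLaMA-Factory. Model id and pad token are taken from the log; the dtype
# follows the "torch_dtype": "bfloat16" field of the printed config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)   # tokenizer.json, special_tokens_map.json, tokenizer_config.json
tokenizer.pad_token = "<|eot_id|>"                     # "Add pad token: <|eot_id|>"

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```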
05/11/2024 19:09:08 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/generation_config.json
05/11/2024 19:09:08 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128009
  ],
  "max_length": 4096,
  "temperature": 0.6,
  "top_p": 0.9
}
05/11/2024 19:09:08 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/11/2024 19:09:08 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
05/11/2024 19:09:08 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/11/2024 19:09:08 - INFO - llmtuner.model.loader - trainable params: 3407872 || all params: 8033669120 || trainable%: 0.0424
05/11/2024 19:09:08 - INFO - transformers.trainer - Using auto half precision backend
05/11/2024 19:09:08 - INFO - transformers.trainer - ***** Running training *****
05/11/2024 19:09:08 - INFO - transformers.trainer - Num examples = 52,002
05/11/2024 19:09:08 - INFO - transformers.trainer - Num Epochs = 3
05/11/2024 19:09:08 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/11/2024 19:09:08 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/11/2024 19:09:08 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/11/2024 19:09:08 - INFO - transformers.trainer - Total optimization steps = 9,750
05/11/2024 19:09:08 - INFO - transformers.trainer - Number of trainable parameters = 3,407,872
05/11/2024 19:09:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.9010, 'learning_rate': 5.0000e-05, 'epoch': 0.00}
05/11/2024 19:09:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.7519, 'learning_rate': 5.0000e-05, 'epoch': 0.00}
05/11/2024 19:09:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.8846, 'learning_rate': 5.0000e-05, 'epoch': 0.00}
05/11/2024 19:09:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.7849, 'learning_rate': 4.9999e-05, 'epoch': 0.01}
05/11/2024 19:10:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.5929, 'learning_rate': 4.9999e-05, 'epoch': 0.01}
05/11/2024 19:10:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.4525, 'learning_rate': 4.9999e-05, 'epoch': 0.01}
05/11/2024 19:10:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.6760, 'learning_rate': 4.9998e-05, 'epoch': 0.01}
05/11/2024 19:10:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.4053, 'learning_rate': 4.9998e-05, 'epoch': 0.01}
05/11/2024 19:10:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.3950, 'learning_rate': 4.9997e-05, 'epoch': 0.01}
05/11/2024 19:10:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.5014, 'learning_rate': 4.9997e-05, 'epoch': 0.02}
05/11/2024 19:11:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2834, 'learning_rate': 4.9996e-05, 'epoch': 0.02}
05/11/2024 19:11:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.4268, 'learning_rate': 4.9995e-05, 'epoch': 0.02}
05/11/2024 19:11:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.3300, 'learning_rate': 4.9995e-05, 'epoch': 0.02}
05/11/2024 19:11:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.3419, 'learning_rate': 4.9994e-05, 'epoch': 0.02}
05/11/2024 19:11:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.3378, 'learning_rate': 4.9993e-05, 'epoch': 0.02}
05/11/2024 19:11:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.2630, 'learning_rate': 4.9992e-05, 'epoch': 0.02}
05/11/2024 19:12:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2777, 'learning_rate': 4.9991e-05, 'epoch': 0.03}
05/11/2024 19:12:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.3095, 'learning_rate': 4.9989e-05, 'epoch': 0.03}
05/11/2024 19:12:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.3788, 'learning_rate': 4.9988e-05, 'epoch': 0.03}
05/11/2024 19:12:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.3818, 'learning_rate': 4.9987e-05, 'epoch': 0.03}
05/11/2024 19:12:41 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-100
05/11/2024 19:12:42 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/11/2024 19:12:42 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "vocab_size": 128256
}
05/11/2024 19:12:42 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-100/tokenizer_config.json
05/11/2024 19:12:42 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-100/special_tokens_map.json
05/11/2024 19:12:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.2638, 'learning_rate': 4.9986e-05, 'epoch': 0.03}
05/11/2024 19:13:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1706, 'learning_rate': 4.9984e-05, 'epoch': 0.03}
05/11/2024 19:13:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2870, 'learning_rate': 4.9983e-05, 'epoch': 0.04}
05/11/2024 19:13:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1886, 'learning_rate': 4.9981e-05, 'epoch': 0.04}
05/11/2024 19:13:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2522, 'learning_rate': 4.9980e-05, 'epoch': 0.04}
05/11/2024 19:13:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2879, 'learning_rate': 4.9978e-05, 'epoch': 0.04}
05/11/2024 19:13:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.3098, 'learning_rate': 4.9976e-05, 'epoch': 0.04}
05/11/2024 19:14:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2395, 'learning_rate': 4.9975e-05, 'epoch': 0.04}
05/11/2024 19:14:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2825, 'learning_rate': 4.9973e-05, 'epoch': 0.04}
05/11/2024 19:14:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1966, 'learning_rate': 4.9971e-05, 'epoch': 0.05}
05/11/2024 19:14:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.2136, 'learning_rate': 4.9969e-05, 'epoch': 0.05}
05/11/2024 19:14:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2735, 'learning_rate': 4.9967e-05, 'epoch': 0.05}
05/11/2024 19:14:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.3386, 'learning_rate': 4.9965e-05, 'epoch': 0.05}
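The counts reported by llmtuner.model.loader and in the trainer banner can be reproduced from the LlamaConfig printed above. The sketch below does that arithmetic; the LoRA rank of 8 and the q_proj/v_proj target modules are assumed defaults and are not printed anywhere in this log:

```python
# Cross-check of "trainable params", "all params", "trainable%", and
# "Total optimization steps" from the log. Assumptions (not shown in the log):
# LoRA rank r = 8, adapters applied to q_proj and v_proj in every layer.
hidden, inter, vocab, layers = 4096, 14336, 128256, 32   # from the LlamaConfig above
head_dim = hidden // 32                                   # 32 attention heads
kv_hidden = 8 * head_dim                                  # 8 key/value heads -> 1024

attn  = 2 * hidden * hidden + 2 * hidden * kv_hidden      # q, o (4096x4096) and k, v (4096x1024)
mlp   = 3 * hidden * inter                                # gate, up, down projections
layer = attn + mlp + 2 * hidden                           # plus the two RMSNorm weight vectors
base  = layers * layer + 2 * vocab * hidden + hidden      # plus embed_tokens, lm_head (untied), final norm

r = 8
lora = layers * (r * (hidden + hidden) + r * (hidden + kv_hidden))   # A/B pairs on q_proj and v_proj

print(lora)                                  # 3407872     -> "trainable params: 3407872"
print(base + lora)                           # 8033669120  -> "all params: 8033669120"
print(round(100 * lora / (base + lora), 4))  # 0.0424      -> "trainable%: 0.0424"

# Optimizer steps: 52,002 examples, per-device batch 2, grad accumulation 8, 3 epochs.
steps_per_epoch = (52002 // 2) // 8          # 26001 micro-batches -> 3250 optimizer steps
print(3 * steps_per_epoch)                   # 9750        -> "Total optimization steps = 9,750"
```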
05/11/2024 19:15:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2693, 'learning_rate': 4.9963e-05, 'epoch': 0.05} 05/11/2024 19:15:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1652, 'learning_rate': 4.9960e-05, 'epoch': 0.05} 05/11/2024 19:15:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.3028, 'learning_rate': 4.9958e-05, 'epoch': 0.06} 05/11/2024 19:15:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1635, 'learning_rate': 4.9956e-05, 'epoch': 0.06} 05/11/2024 19:15:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.4032, 'learning_rate': 4.9953e-05, 'epoch': 0.06} 05/11/2024 19:15:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2417, 'learning_rate': 4.9951e-05, 'epoch': 0.06} 05/11/2024 19:16:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2683, 'learning_rate': 4.9948e-05, 'epoch': 0.06} 05/11/2024 19:16:08 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-200 05/11/2024 19:16:09 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:16:09 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:16:09 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-200/tokenizer_config.json 05/11/2024 19:16:09 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-200/special_tokens_map.json 05/11/2024 19:16:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2520, 'learning_rate': 4.9945e-05, 'epoch': 0.06} 05/11/2024 19:16:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2303, 'learning_rate': 4.9943e-05, 'epoch': 0.06} 05/11/2024 19:16:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.3052, 'learning_rate': 4.9940e-05, 'epoch': 0.07} 05/11/2024 19:16:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.3012, 'learning_rate': 4.9937e-05, 'epoch': 0.07} 05/11/2024 19:17:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1937, 'learning_rate': 4.9934e-05, 'epoch': 0.07} 05/11/2024 19:17:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2892, 'learning_rate': 4.9931e-05, 'epoch': 0.07} 05/11/2024 19:17:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1496, 'learning_rate': 4.9928e-05, 'epoch': 0.07} 05/11/2024 19:17:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2769, 'learning_rate': 4.9925e-05, 'epoch': 0.07} 05/11/2024 19:17:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2355, 'learning_rate': 4.9922e-05, 'epoch': 0.08} 05/11/2024 19:17:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.3105, 'learning_rate': 4.9919e-05, 'epoch': 0.08} 05/11/2024 19:18:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.3476, 'learning_rate': 
4.9916e-05, 'epoch': 0.08} 05/11/2024 19:18:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1474, 'learning_rate': 4.9912e-05, 'epoch': 0.08} 05/11/2024 19:18:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1932, 'learning_rate': 4.9909e-05, 'epoch': 0.08} 05/11/2024 19:18:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2814, 'learning_rate': 4.9905e-05, 'epoch': 0.08} 05/11/2024 19:18:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.3370, 'learning_rate': 4.9902e-05, 'epoch': 0.08} 05/11/2024 19:18:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1444, 'learning_rate': 4.9898e-05, 'epoch': 0.09} 05/11/2024 19:19:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.3671, 'learning_rate': 4.9895e-05, 'epoch': 0.09} 05/11/2024 19:19:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1663, 'learning_rate': 4.9891e-05, 'epoch': 0.09} 05/11/2024 19:19:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2205, 'learning_rate': 4.9887e-05, 'epoch': 0.09} 05/11/2024 19:19:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.3335, 'learning_rate': 4.9883e-05, 'epoch': 0.09} 05/11/2024 19:19:39 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-300 05/11/2024 19:19:39 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:19:39 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:19:39 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-300/tokenizer_config.json 05/11/2024 19:19:39 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-300/special_tokens_map.json 05/11/2024 19:19:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.3269, 'learning_rate': 4.9879e-05, 'epoch': 0.09} 05/11/2024 19:20:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2983, 'learning_rate': 4.9875e-05, 'epoch': 0.10} 05/11/2024 19:20:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2401, 'learning_rate': 4.9871e-05, 'epoch': 0.10} 05/11/2024 19:20:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1723, 'learning_rate': 4.9867e-05, 'epoch': 0.10} 05/11/2024 19:20:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2318, 'learning_rate': 4.9863e-05, 'epoch': 0.10} 05/11/2024 19:20:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2859, 'learning_rate': 4.9859e-05, 'epoch': 0.10} 05/11/2024 19:20:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2600, 'learning_rate': 4.9854e-05, 'epoch': 0.10} 05/11/2024 19:21:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1915, 'learning_rate': 4.9850e-05, 'epoch': 0.10} 05/11/2024 19:21:14 - INFO - llmtuner.extras.callbacks - {'loss': 
1.2653, 'learning_rate': 4.9846e-05, 'epoch': 0.11} 05/11/2024 19:21:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1685, 'learning_rate': 4.9841e-05, 'epoch': 0.11} 05/11/2024 19:21:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1732, 'learning_rate': 4.9837e-05, 'epoch': 0.11} 05/11/2024 19:21:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1440, 'learning_rate': 4.9832e-05, 'epoch': 0.11} 05/11/2024 19:21:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1434, 'learning_rate': 4.9827e-05, 'epoch': 0.11} 05/11/2024 19:22:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.2149, 'learning_rate': 4.9823e-05, 'epoch': 0.11} 05/11/2024 19:22:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1748, 'learning_rate': 4.9818e-05, 'epoch': 0.12} 05/11/2024 19:22:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2767, 'learning_rate': 4.9813e-05, 'epoch': 0.12} 05/11/2024 19:22:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2229, 'learning_rate': 4.9808e-05, 'epoch': 0.12} 05/11/2024 19:22:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1892, 'learning_rate': 4.9803e-05, 'epoch': 0.12} 05/11/2024 19:22:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2199, 'learning_rate': 4.9798e-05, 'epoch': 0.12} 05/11/2024 19:23:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1670, 'learning_rate': 4.9793e-05, 'epoch': 0.12} 05/11/2024 19:23:06 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-400 05/11/2024 19:23:07 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:23:07 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:23:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-400/tokenizer_config.json 05/11/2024 19:23:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-400/special_tokens_map.json 05/11/2024 19:23:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1379, 'learning_rate': 4.9787e-05, 'epoch': 0.12} 05/11/2024 19:23:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2004, 'learning_rate': 4.9782e-05, 'epoch': 0.13} 05/11/2024 19:23:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1749, 'learning_rate': 4.9777e-05, 'epoch': 0.13} 05/11/2024 19:23:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1946, 'learning_rate': 4.9771e-05, 'epoch': 0.13} 05/11/2024 19:23:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1884, 'learning_rate': 4.9766e-05, 'epoch': 0.13} 05/11/2024 19:24:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1723, 'learning_rate': 4.9760e-05, 'epoch': 0.13} 05/11/2024 19:24:17 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.2658, 'learning_rate': 4.9755e-05, 'epoch': 0.13} 05/11/2024 19:24:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2043, 'learning_rate': 4.9749e-05, 'epoch': 0.14} 05/11/2024 19:24:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2228, 'learning_rate': 4.9743e-05, 'epoch': 0.14} 05/11/2024 19:24:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0853, 'learning_rate': 4.9738e-05, 'epoch': 0.14} 05/11/2024 19:24:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1649, 'learning_rate': 4.9732e-05, 'epoch': 0.14} 05/11/2024 19:25:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1394, 'learning_rate': 4.9726e-05, 'epoch': 0.14} 05/11/2024 19:25:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1830, 'learning_rate': 4.9720e-05, 'epoch': 0.14} 05/11/2024 19:25:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0956, 'learning_rate': 4.9714e-05, 'epoch': 0.14} 05/11/2024 19:25:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2898, 'learning_rate': 4.9708e-05, 'epoch': 0.15} 05/11/2024 19:25:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2317, 'learning_rate': 4.9702e-05, 'epoch': 0.15} 05/11/2024 19:25:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.2337, 'learning_rate': 4.9695e-05, 'epoch': 0.15} 05/11/2024 19:26:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1960, 'learning_rate': 4.9689e-05, 'epoch': 0.15} 05/11/2024 19:26:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1881, 'learning_rate': 4.9683e-05, 'epoch': 0.15} 05/11/2024 19:26:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1329, 'learning_rate': 4.9676e-05, 'epoch': 0.15} 05/11/2024 19:26:31 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-500 05/11/2024 19:26:31 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:26:31 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:26:31 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-500/tokenizer_config.json 05/11/2024 19:26:31 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-500/special_tokens_map.json 05/11/2024 19:26:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1875, 'learning_rate': 4.9670e-05, 'epoch': 0.16} 05/11/2024 19:26:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.2321, 'learning_rate': 4.9663e-05, 'epoch': 0.16} 05/11/2024 19:27:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.3752, 'learning_rate': 4.9657e-05, 'epoch': 0.16} 05/11/2024 19:27:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.3080, 'learning_rate': 4.9650e-05, 'epoch': 0.16} 
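Each "Saving model checkpoint to ..." block above writes the LoRA adapter plus tokenizer files into a checkpoint-N directory rather than a full copy of the 8B weights, which is consistent with the roughly one-second save times in the timestamps. A minimal sketch of trying one of these intermediate adapters with peft; the prompt and generation settings below are illustrative, not taken from the log:

```python
# Sketch: attach an intermediate LoRA checkpoint from this run to the base
# model for a quick qualitative check (the paths come from the log above).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_dir = "saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-500"

tokenizer = AutoTokenizer.from_pretrained(adapter_dir)   # the checkpoint also holds the tokenizer files
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_dir)      # loads the saved LoRA adapter on top

prompt = "Give three tips for staying healthy."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```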
05/11/2024 19:27:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1672, 'learning_rate': 4.9643e-05, 'epoch': 0.16} 05/11/2024 19:27:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1813, 'learning_rate': 4.9636e-05, 'epoch': 0.16} 05/11/2024 19:27:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2240, 'learning_rate': 4.9629e-05, 'epoch': 0.16} 05/11/2024 19:27:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2160, 'learning_rate': 4.9623e-05, 'epoch': 0.17} 05/11/2024 19:28:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2665, 'learning_rate': 4.9616e-05, 'epoch': 0.17} 05/11/2024 19:28:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2242, 'learning_rate': 4.9608e-05, 'epoch': 0.17} 05/11/2024 19:28:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0902, 'learning_rate': 4.9601e-05, 'epoch': 0.17} 05/11/2024 19:28:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2584, 'learning_rate': 4.9594e-05, 'epoch': 0.17} 05/11/2024 19:28:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2759, 'learning_rate': 4.9587e-05, 'epoch': 0.17} 05/11/2024 19:29:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1871, 'learning_rate': 4.9580e-05, 'epoch': 0.18} 05/11/2024 19:29:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2548, 'learning_rate': 4.9572e-05, 'epoch': 0.18} 05/11/2024 19:29:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1955, 'learning_rate': 4.9565e-05, 'epoch': 0.18} 05/11/2024 19:29:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1725, 'learning_rate': 4.9557e-05, 'epoch': 0.18} 05/11/2024 19:29:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1608, 'learning_rate': 4.9550e-05, 'epoch': 0.18} 05/11/2024 19:29:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.2638, 'learning_rate': 4.9542e-05, 'epoch': 0.18} 05/11/2024 19:30:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1962, 'learning_rate': 4.9534e-05, 'epoch': 0.18} 05/11/2024 19:30:05 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-600 05/11/2024 19:30:06 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:30:06 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:30:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-600/tokenizer_config.json 05/11/2024 19:30:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-600/special_tokens_map.json 05/11/2024 19:30:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2266, 'learning_rate': 4.9526e-05, 'epoch': 0.19} 05/11/2024 19:30:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2421, 'learning_rate': 
4.9519e-05, 'epoch': 0.19} 05/11/2024 19:30:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0911, 'learning_rate': 4.9511e-05, 'epoch': 0.19} 05/11/2024 19:30:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2937, 'learning_rate': 4.9503e-05, 'epoch': 0.19} 05/11/2024 19:30:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2341, 'learning_rate': 4.9495e-05, 'epoch': 0.19} 05/11/2024 19:31:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1867, 'learning_rate': 4.9487e-05, 'epoch': 0.19} 05/11/2024 19:31:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2039, 'learning_rate': 4.9479e-05, 'epoch': 0.20} 05/11/2024 19:31:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1598, 'learning_rate': 4.9470e-05, 'epoch': 0.20} 05/11/2024 19:31:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2351, 'learning_rate': 4.9462e-05, 'epoch': 0.20} 05/11/2024 19:31:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.3258, 'learning_rate': 4.9454e-05, 'epoch': 0.20} 05/11/2024 19:32:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2972, 'learning_rate': 4.9445e-05, 'epoch': 0.20} 05/11/2024 19:32:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1798, 'learning_rate': 4.9437e-05, 'epoch': 0.20} 05/11/2024 19:32:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.2289, 'learning_rate': 4.9428e-05, 'epoch': 0.20} 05/11/2024 19:32:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.2983, 'learning_rate': 4.9420e-05, 'epoch': 0.21} 05/11/2024 19:32:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1431, 'learning_rate': 4.9411e-05, 'epoch': 0.21} 05/11/2024 19:32:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1139, 'learning_rate': 4.9402e-05, 'epoch': 0.21} 05/11/2024 19:33:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1983, 'learning_rate': 4.9394e-05, 'epoch': 0.21} 05/11/2024 19:33:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1707, 'learning_rate': 4.9385e-05, 'epoch': 0.21} 05/11/2024 19:33:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.2633, 'learning_rate': 4.9376e-05, 'epoch': 0.21} 05/11/2024 19:33:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1424, 'learning_rate': 4.9367e-05, 'epoch': 0.22} 05/11/2024 19:33:33 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-700 05/11/2024 19:33:34 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:33:34 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:33:34 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-700/tokenizer_config.json 05/11/2024 19:33:34 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-700/special_tokens_map.json 05/11/2024 19:33:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2006, 'learning_rate': 4.9358e-05, 'epoch': 0.22} 05/11/2024 19:33:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.3100, 'learning_rate': 4.9349e-05, 'epoch': 0.22} 05/11/2024 19:34:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2209, 'learning_rate': 4.9339e-05, 'epoch': 0.22} 05/11/2024 19:34:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2628, 'learning_rate': 4.9330e-05, 'epoch': 0.22} 05/11/2024 19:34:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2771, 'learning_rate': 4.9321e-05, 'epoch': 0.22} 05/11/2024 19:34:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2315, 'learning_rate': 4.9312e-05, 'epoch': 0.22} 05/11/2024 19:34:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1513, 'learning_rate': 4.9302e-05, 'epoch': 0.23} 05/11/2024 19:34:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.2085, 'learning_rate': 4.9293e-05, 'epoch': 0.23} 05/11/2024 19:35:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.2173, 'learning_rate': 4.9283e-05, 'epoch': 0.23} 05/11/2024 19:35:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2868, 'learning_rate': 4.9274e-05, 'epoch': 0.23} 05/11/2024 19:35:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2541, 'learning_rate': 4.9264e-05, 'epoch': 0.23} 05/11/2024 19:35:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2301, 'learning_rate': 4.9254e-05, 'epoch': 0.23} 05/11/2024 19:35:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2235, 'learning_rate': 4.9244e-05, 'epoch': 0.24} 05/11/2024 19:36:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2173, 'learning_rate': 4.9234e-05, 'epoch': 0.24} 05/11/2024 19:36:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2161, 'learning_rate': 4.9225e-05, 'epoch': 0.24} 05/11/2024 19:36:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1367, 'learning_rate': 4.9215e-05, 'epoch': 0.24} 05/11/2024 19:36:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2063, 'learning_rate': 4.9205e-05, 'epoch': 0.24} 05/11/2024 19:36:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.2129, 'learning_rate': 4.9194e-05, 'epoch': 0.24} 05/11/2024 19:36:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1697, 'learning_rate': 4.9184e-05, 'epoch': 0.24} 05/11/2024 19:37:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1384, 'learning_rate': 4.9174e-05, 'epoch': 0.25} 05/11/2024 19:37:02 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-800 05/11/2024 19:37:03 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:37:03 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:37:03 - 
INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-800/tokenizer_config.json 05/11/2024 19:37:03 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-800/special_tokens_map.json 05/11/2024 19:37:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1691, 'learning_rate': 4.9164e-05, 'epoch': 0.25} 05/11/2024 19:37:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1847, 'learning_rate': 4.9153e-05, 'epoch': 0.25} 05/11/2024 19:37:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1932, 'learning_rate': 4.9143e-05, 'epoch': 0.25} 05/11/2024 19:37:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2509, 'learning_rate': 4.9132e-05, 'epoch': 0.25} 05/11/2024 19:37:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2265, 'learning_rate': 4.9122e-05, 'epoch': 0.25} 05/11/2024 19:38:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2083, 'learning_rate': 4.9111e-05, 'epoch': 0.26} 05/11/2024 19:38:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0772, 'learning_rate': 4.9101e-05, 'epoch': 0.26} 05/11/2024 19:38:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1247, 'learning_rate': 4.9090e-05, 'epoch': 0.26} 05/11/2024 19:38:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2317, 'learning_rate': 4.9079e-05, 'epoch': 0.26} 05/11/2024 19:38:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2299, 'learning_rate': 4.9068e-05, 'epoch': 0.26} 05/11/2024 19:38:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1717, 'learning_rate': 4.9057e-05, 'epoch': 0.26} 05/11/2024 19:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1482, 'learning_rate': 4.9046e-05, 'epoch': 0.26} 05/11/2024 19:39:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1459, 'learning_rate': 4.9035e-05, 'epoch': 0.27} 05/11/2024 19:39:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1814, 'learning_rate': 4.9024e-05, 'epoch': 0.27} 05/11/2024 19:39:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1091, 'learning_rate': 4.9013e-05, 'epoch': 0.27} 05/11/2024 19:39:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1259, 'learning_rate': 4.9002e-05, 'epoch': 0.27} 05/11/2024 19:40:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2270, 'learning_rate': 4.8990e-05, 'epoch': 0.27} 05/11/2024 19:40:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2486, 'learning_rate': 4.8979e-05, 'epoch': 0.27} 05/11/2024 19:40:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.2767, 'learning_rate': 4.8968e-05, 'epoch': 0.28} 05/11/2024 19:40:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1728, 'learning_rate': 4.8956e-05, 'epoch': 0.28} 05/11/2024 19:40:31 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-900 05/11/2024 19:40:32 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:40:32 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:40:32 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-900/tokenizer_config.json 05/11/2024 19:40:32 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-900/special_tokens_map.json 05/11/2024 19:40:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0947, 'learning_rate': 4.8945e-05, 'epoch': 0.28} 05/11/2024 19:40:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1474, 'learning_rate': 4.8933e-05, 'epoch': 0.28} 05/11/2024 19:41:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1966, 'learning_rate': 4.8921e-05, 'epoch': 0.28} 05/11/2024 19:41:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1940, 'learning_rate': 4.8910e-05, 'epoch': 0.28} 05/11/2024 19:41:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1109, 'learning_rate': 4.8898e-05, 'epoch': 0.28} 05/11/2024 19:41:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1817, 'learning_rate': 4.8886e-05, 'epoch': 0.29} 05/11/2024 19:41:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1885, 'learning_rate': 4.8874e-05, 'epoch': 0.29} 05/11/2024 19:41:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1094, 'learning_rate': 4.8862e-05, 'epoch': 0.29} 05/11/2024 19:42:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1363, 'learning_rate': 4.8850e-05, 'epoch': 0.29} 05/11/2024 19:42:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1993, 'learning_rate': 4.8838e-05, 'epoch': 0.29} 05/11/2024 19:42:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.2974, 'learning_rate': 4.8826e-05, 'epoch': 0.29} 05/11/2024 19:42:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0904, 'learning_rate': 4.8813e-05, 'epoch': 0.30} 05/11/2024 19:42:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1945, 'learning_rate': 4.8801e-05, 'epoch': 0.30} 05/11/2024 19:42:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1332, 'learning_rate': 4.8789e-05, 'epoch': 0.30} 05/11/2024 19:43:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1619, 'learning_rate': 4.8776e-05, 'epoch': 0.30} 05/11/2024 19:43:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2385, 'learning_rate': 4.8764e-05, 'epoch': 0.30} 05/11/2024 19:43:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1853, 'learning_rate': 4.8751e-05, 'epoch': 0.30} 05/11/2024 19:43:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1700, 'learning_rate': 4.8739e-05, 'epoch': 0.30} 05/11/2024 19:43:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1836, 'learning_rate': 4.8726e-05, 'epoch': 0.31} 05/11/2024 19:43:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0884, 'learning_rate': 4.8713e-05, 'epoch': 0.31} 05/11/2024 19:43:55 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1000 05/11/2024 19:43:55 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:43:55 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 
0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:43:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1000/tokenizer_config.json 05/11/2024 19:43:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1000/special_tokens_map.json 05/11/2024 19:44:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.2415, 'learning_rate': 4.8701e-05, 'epoch': 0.31} 05/11/2024 19:44:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1733, 'learning_rate': 4.8688e-05, 'epoch': 0.31} 05/11/2024 19:44:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2495, 'learning_rate': 4.8675e-05, 'epoch': 0.31} 05/11/2024 19:44:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2221, 'learning_rate': 4.8662e-05, 'epoch': 0.31} 05/11/2024 19:44:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2491, 'learning_rate': 4.8649e-05, 'epoch': 0.32} 05/11/2024 19:44:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2308, 'learning_rate': 4.8636e-05, 'epoch': 0.32} 05/11/2024 19:45:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1884, 'learning_rate': 4.8623e-05, 'epoch': 0.32} 05/11/2024 19:45:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0993, 'learning_rate': 4.8609e-05, 'epoch': 0.32} 05/11/2024 19:45:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2670, 'learning_rate': 4.8596e-05, 'epoch': 0.32} 05/11/2024 19:45:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1850, 'learning_rate': 4.8583e-05, 'epoch': 0.32} 05/11/2024 19:45:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1568, 'learning_rate': 4.8569e-05, 'epoch': 0.32} 05/11/2024 19:45:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.9792, 'learning_rate': 4.8556e-05, 'epoch': 0.33} 05/11/2024 19:46:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0783, 'learning_rate': 4.8542e-05, 'epoch': 0.33} 05/11/2024 19:46:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1971, 'learning_rate': 4.8529e-05, 'epoch': 0.33} 05/11/2024 19:46:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1928, 'learning_rate': 4.8515e-05, 'epoch': 0.33} 05/11/2024 19:46:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2081, 'learning_rate': 4.8501e-05, 'epoch': 0.33} 05/11/2024 19:46:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1119, 'learning_rate': 4.8488e-05, 'epoch': 0.33} 05/11/2024 19:47:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2201, 'learning_rate': 4.8474e-05, 'epoch': 0.34} 05/11/2024 19:47:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2032, 'learning_rate': 4.8460e-05, 'epoch': 0.34} 05/11/2024 19:47:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0737, 'learning_rate': 4.8446e-05, 'epoch': 0.34} 05/11/2024 19:47:22 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1100 05/11/2024 19:47:23 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:47:23 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:47:23 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1100/tokenizer_config.json 05/11/2024 19:47:23 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1100/special_tokens_map.json 05/11/2024 19:47:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2315, 'learning_rate': 4.8432e-05, 'epoch': 0.34} 05/11/2024 19:47:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1993, 'learning_rate': 4.8418e-05, 'epoch': 0.34} 05/11/2024 19:47:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1318, 'learning_rate': 4.8404e-05, 'epoch': 0.34} 05/11/2024 19:48:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1404, 'learning_rate': 4.8390e-05, 'epoch': 0.34} 05/11/2024 19:48:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0803, 'learning_rate': 4.8375e-05, 'epoch': 0.35} 05/11/2024 19:48:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2123, 'learning_rate': 4.8361e-05, 'epoch': 0.35} 05/11/2024 19:48:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1247, 'learning_rate': 4.8347e-05, 'epoch': 0.35} 05/11/2024 19:48:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1716, 'learning_rate': 4.8332e-05, 'epoch': 0.35} 05/11/2024 19:48:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.3617, 'learning_rate': 4.8318e-05, 'epoch': 0.35} 05/11/2024 19:49:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1845, 'learning_rate': 4.8303e-05, 'epoch': 0.35} 05/11/2024 19:49:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1945, 'learning_rate': 4.8289e-05, 'epoch': 0.36} 05/11/2024 19:49:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.2021, 'learning_rate': 4.8274e-05, 'epoch': 0.36} 05/11/2024 19:49:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1994, 'learning_rate': 4.8259e-05, 'epoch': 0.36} 05/11/2024 19:49:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1749, 'learning_rate': 4.8244e-05, 'epoch': 0.36} 05/11/2024 19:50:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0968, 'learning_rate': 4.8230e-05, 'epoch': 0.36} 05/11/2024 19:50:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1396, 'learning_rate': 4.8215e-05, 'epoch': 0.36} 05/11/2024 19:50:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2997, 'learning_rate': 4.8200e-05, 'epoch': 0.36} 05/11/2024 19:50:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2487, 'learning_rate': 4.8185e-05, 'epoch': 0.37} 05/11/2024 19:50:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1749, 'learning_rate': 4.8170e-05, 'epoch': 0.37} 05/11/2024 19:50:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2342, 'learning_rate': 4.8154e-05, 'epoch': 
0.37} 05/11/2024 19:50:55 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1200 05/11/2024 19:50:56 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:50:56 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:50:56 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1200/tokenizer_config.json 05/11/2024 19:50:56 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1200/special_tokens_map.json 05/11/2024 19:51:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1538, 'learning_rate': 4.8139e-05, 'epoch': 0.37} 05/11/2024 19:51:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0581, 'learning_rate': 4.8124e-05, 'epoch': 0.37} 05/11/2024 19:51:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1604, 'learning_rate': 4.8109e-05, 'epoch': 0.37} 05/11/2024 19:51:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2667, 'learning_rate': 4.8093e-05, 'epoch': 0.38} 05/11/2024 19:51:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.2328, 'learning_rate': 4.8078e-05, 'epoch': 0.38} 05/11/2024 19:51:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1748, 'learning_rate': 4.8062e-05, 'epoch': 0.38} 05/11/2024 19:52:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1827, 'learning_rate': 4.8047e-05, 'epoch': 0.38} 05/11/2024 19:52:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1902, 'learning_rate': 4.8031e-05, 'epoch': 0.38} 05/11/2024 19:52:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1679, 'learning_rate': 4.8015e-05, 'epoch': 0.38} 05/11/2024 19:52:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.2108, 'learning_rate': 4.7999e-05, 'epoch': 0.38} 05/11/2024 19:52:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2093, 'learning_rate': 4.7984e-05, 'epoch': 0.39} 05/11/2024 19:52:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1166, 'learning_rate': 4.7968e-05, 'epoch': 0.39} 05/11/2024 19:53:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1535, 'learning_rate': 4.7952e-05, 'epoch': 0.39} 05/11/2024 19:53:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2660, 'learning_rate': 4.7936e-05, 'epoch': 0.39} 05/11/2024 19:53:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2552, 'learning_rate': 4.7920e-05, 'epoch': 0.39} 05/11/2024 19:53:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.2457, 'learning_rate': 4.7904e-05, 'epoch': 0.39} 05/11/2024 19:53:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.0594, 'learning_rate': 4.7888e-05, 'epoch': 0.40} 05/11/2024 19:54:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1454, 
'learning_rate': 4.7871e-05, 'epoch': 0.40} 05/11/2024 19:54:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2120, 'learning_rate': 4.7855e-05, 'epoch': 0.40} 05/11/2024 19:54:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2743, 'learning_rate': 4.7839e-05, 'epoch': 0.40} 05/11/2024 19:54:19 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1300 05/11/2024 19:54:20 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:54:20 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:54:20 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1300/tokenizer_config.json 05/11/2024 19:54:20 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1300/special_tokens_map.json 05/11/2024 19:54:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1012, 'learning_rate': 4.7822e-05, 'epoch': 0.40} 05/11/2024 19:54:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0773, 'learning_rate': 4.7806e-05, 'epoch': 0.40} 05/11/2024 19:54:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.3538, 'learning_rate': 4.7789e-05, 'epoch': 0.40} 05/11/2024 19:55:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1075, 'learning_rate': 4.7773e-05, 'epoch': 0.41} 05/11/2024 19:55:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2541, 'learning_rate': 4.7756e-05, 'epoch': 0.41} 05/11/2024 19:55:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1718, 'learning_rate': 4.7739e-05, 'epoch': 0.41} 05/11/2024 19:55:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.3156, 'learning_rate': 4.7723e-05, 'epoch': 0.41} 05/11/2024 19:55:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2319, 'learning_rate': 4.7706e-05, 'epoch': 0.41} 05/11/2024 19:55:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1831, 'learning_rate': 4.7689e-05, 'epoch': 0.41} 05/11/2024 19:56:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1658, 'learning_rate': 4.7672e-05, 'epoch': 0.42} 05/11/2024 19:56:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1391, 'learning_rate': 4.7655e-05, 'epoch': 0.42} 05/11/2024 19:56:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1059, 'learning_rate': 4.7638e-05, 'epoch': 0.42} 05/11/2024 19:56:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1448, 'learning_rate': 4.7621e-05, 'epoch': 0.42} 05/11/2024 19:56:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1634, 'learning_rate': 4.7603e-05, 'epoch': 0.42} 05/11/2024 19:56:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1911, 'learning_rate': 4.7586e-05, 'epoch': 0.42} 05/11/2024 19:57:08 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1134, 'learning_rate': 4.7572e-05, 'epoch': 0.42} 05/11/2024 19:57:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.2352, 'learning_rate': 4.7555e-05, 'epoch': 0.43} 05/11/2024 19:57:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1271, 'learning_rate': 4.7538e-05, 'epoch': 0.43} 05/11/2024 19:57:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1633, 'learning_rate': 4.7520e-05, 'epoch': 0.43} 05/11/2024 19:57:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0869, 'learning_rate': 4.7503e-05, 'epoch': 0.43} 05/11/2024 19:57:51 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1400 05/11/2024 19:57:52 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 19:57:52 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 19:57:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1400/tokenizer_config.json 05/11/2024 19:57:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1400/special_tokens_map.json 05/11/2024 19:58:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.3126, 'learning_rate': 4.7485e-05, 'epoch': 0.43} 05/11/2024 19:58:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2092, 'learning_rate': 4.7467e-05, 'epoch': 0.43} 05/11/2024 19:58:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2647, 'learning_rate': 4.7450e-05, 'epoch': 0.44} 05/11/2024 19:58:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2187, 'learning_rate': 4.7432e-05, 'epoch': 0.44} 05/11/2024 19:58:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2245, 'learning_rate': 4.7414e-05, 'epoch': 0.44} 05/11/2024 19:58:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2666, 'learning_rate': 4.7396e-05, 'epoch': 0.44} 05/11/2024 19:59:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0815, 'learning_rate': 4.7378e-05, 'epoch': 0.44} 05/11/2024 19:59:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2059, 'learning_rate': 4.7360e-05, 'epoch': 0.44} 05/11/2024 19:59:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.3611, 'learning_rate': 4.7342e-05, 'epoch': 0.44} 05/11/2024 19:59:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1654, 'learning_rate': 4.7324e-05, 'epoch': 0.45} 05/11/2024 19:59:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1962, 'learning_rate': 4.7306e-05, 'epoch': 0.45} 05/11/2024 19:59:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2541, 'learning_rate': 4.7288e-05, 'epoch': 0.45} 05/11/2024 20:00:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1695, 'learning_rate': 4.7270e-05, 'epoch': 0.45} 
05/11/2024 20:00:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.2109, 'learning_rate': 4.7251e-05, 'epoch': 0.45} 05/11/2024 20:00:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1345, 'learning_rate': 4.7233e-05, 'epoch': 0.45} 05/11/2024 20:00:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1668, 'learning_rate': 4.7215e-05, 'epoch': 0.46} 05/11/2024 20:00:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2392, 'learning_rate': 4.7196e-05, 'epoch': 0.46} 05/11/2024 20:01:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2615, 'learning_rate': 4.7177e-05, 'epoch': 0.46} 05/11/2024 20:01:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.3384, 'learning_rate': 4.7159e-05, 'epoch': 0.46} 05/11/2024 20:01:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2788, 'learning_rate': 4.7140e-05, 'epoch': 0.46} 05/11/2024 20:01:22 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1500 05/11/2024 20:01:23 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:01:23 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:01:23 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1500/tokenizer_config.json 05/11/2024 20:01:23 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1500/special_tokens_map.json 05/11/2024 20:01:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1502, 'learning_rate': 4.7121e-05, 'epoch': 0.46} 05/11/2024 20:01:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1128, 'learning_rate': 4.7103e-05, 'epoch': 0.46} 05/11/2024 20:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1704, 'learning_rate': 4.7084e-05, 'epoch': 0.47} 05/11/2024 20:02:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1092, 'learning_rate': 4.7065e-05, 'epoch': 0.47} 05/11/2024 20:02:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1372, 'learning_rate': 4.7046e-05, 'epoch': 0.47} 05/11/2024 20:02:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0919, 'learning_rate': 4.7027e-05, 'epoch': 0.47} 05/11/2024 20:02:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2167, 'learning_rate': 4.7008e-05, 'epoch': 0.47} 05/11/2024 20:02:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2096, 'learning_rate': 4.6989e-05, 'epoch': 0.47} 05/11/2024 20:02:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1183, 'learning_rate': 4.6969e-05, 'epoch': 0.48} 05/11/2024 20:03:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2528, 'learning_rate': 4.6950e-05, 'epoch': 0.48} 05/11/2024 20:03:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1355, 'learning_rate': 
4.6931e-05, 'epoch': 0.48} 05/11/2024 20:03:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1942, 'learning_rate': 4.6912e-05, 'epoch': 0.48} 05/11/2024 20:03:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1756, 'learning_rate': 4.6892e-05, 'epoch': 0.48} 05/11/2024 20:03:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1363, 'learning_rate': 4.6873e-05, 'epoch': 0.48} 05/11/2024 20:03:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1298, 'learning_rate': 4.6853e-05, 'epoch': 0.48} 05/11/2024 20:04:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0793, 'learning_rate': 4.6834e-05, 'epoch': 0.49} 05/11/2024 20:04:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1616, 'learning_rate': 4.6814e-05, 'epoch': 0.49} 05/11/2024 20:04:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2206, 'learning_rate': 4.6794e-05, 'epoch': 0.49} 05/11/2024 20:04:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1871, 'learning_rate': 4.6774e-05, 'epoch': 0.49} 05/11/2024 20:04:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2779, 'learning_rate': 4.6755e-05, 'epoch': 0.49} 05/11/2024 20:04:48 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1600 05/11/2024 20:04:49 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:04:49 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:04:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1600/tokenizer_config.json 05/11/2024 20:04:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1600/special_tokens_map.json 05/11/2024 20:04:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.0949, 'learning_rate': 4.6735e-05, 'epoch': 0.49} 05/11/2024 20:05:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2541, 'learning_rate': 4.6715e-05, 'epoch': 0.50} 05/11/2024 20:05:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1915, 'learning_rate': 4.6695e-05, 'epoch': 0.50} 05/11/2024 20:05:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1954, 'learning_rate': 4.6675e-05, 'epoch': 0.50} 05/11/2024 20:05:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1630, 'learning_rate': 4.6655e-05, 'epoch': 0.50} 05/11/2024 20:05:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.2891, 'learning_rate': 4.6635e-05, 'epoch': 0.50} 05/11/2024 20:06:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2295, 'learning_rate': 4.6614e-05, 'epoch': 0.50} 05/11/2024 20:06:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0742, 'learning_rate': 4.6594e-05, 'epoch': 0.50} 05/11/2024 20:06:24 - INFO - llmtuner.extras.callbacks - {'loss': 
1.1575, 'learning_rate': 4.6574e-05, 'epoch': 0.51} 05/11/2024 20:06:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2364, 'learning_rate': 4.6553e-05, 'epoch': 0.51} 05/11/2024 20:06:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.3216, 'learning_rate': 4.6533e-05, 'epoch': 0.51} 05/11/2024 20:06:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1948, 'learning_rate': 4.6512e-05, 'epoch': 0.51} 05/11/2024 20:07:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1198, 'learning_rate': 4.6492e-05, 'epoch': 0.51} 05/11/2024 20:07:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1055, 'learning_rate': 4.6471e-05, 'epoch': 0.51} 05/11/2024 20:07:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1013, 'learning_rate': 4.6451e-05, 'epoch': 0.52} 05/11/2024 20:07:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2337, 'learning_rate': 4.6430e-05, 'epoch': 0.52} 05/11/2024 20:07:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1338, 'learning_rate': 4.6409e-05, 'epoch': 0.52} 05/11/2024 20:07:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2974, 'learning_rate': 4.6388e-05, 'epoch': 0.52} 05/11/2024 20:08:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0235, 'learning_rate': 4.6367e-05, 'epoch': 0.52} 05/11/2024 20:08:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.3254, 'learning_rate': 4.6346e-05, 'epoch': 0.52} 05/11/2024 20:08:18 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1700 05/11/2024 20:08:19 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:08:19 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:08:19 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1700/tokenizer_config.json 05/11/2024 20:08:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1700/special_tokens_map.json 05/11/2024 20:08:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2105, 'learning_rate': 4.6325e-05, 'epoch': 0.52} 05/11/2024 20:08:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2339, 'learning_rate': 4.6304e-05, 'epoch': 0.53} 05/11/2024 20:08:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1655, 'learning_rate': 4.6283e-05, 'epoch': 0.53} 05/11/2024 20:09:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1398, 'learning_rate': 4.6262e-05, 'epoch': 0.53} 05/11/2024 20:09:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1550, 'learning_rate': 4.6241e-05, 'epoch': 0.53} 05/11/2024 20:09:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1620, 'learning_rate': 4.6220e-05, 'epoch': 0.53} 05/11/2024 20:09:32 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.2505, 'learning_rate': 4.6198e-05, 'epoch': 0.53} 05/11/2024 20:09:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.2112, 'learning_rate': 4.6177e-05, 'epoch': 0.54} 05/11/2024 20:09:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1836, 'learning_rate': 4.6156e-05, 'epoch': 0.54} 05/11/2024 20:10:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1892, 'learning_rate': 4.6134e-05, 'epoch': 0.54} 05/11/2024 20:10:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1996, 'learning_rate': 4.6113e-05, 'epoch': 0.54} 05/11/2024 20:10:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1016, 'learning_rate': 4.6091e-05, 'epoch': 0.54} 05/11/2024 20:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2330, 'learning_rate': 4.6069e-05, 'epoch': 0.54} 05/11/2024 20:10:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2789, 'learning_rate': 4.6048e-05, 'epoch': 0.54} 05/11/2024 20:10:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.3355, 'learning_rate': 4.6026e-05, 'epoch': 0.55} 05/11/2024 20:11:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0967, 'learning_rate': 4.6004e-05, 'epoch': 0.55} 05/11/2024 20:11:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0866, 'learning_rate': 4.5982e-05, 'epoch': 0.55} 05/11/2024 20:11:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2533, 'learning_rate': 4.5960e-05, 'epoch': 0.55} 05/11/2024 20:11:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2313, 'learning_rate': 4.5938e-05, 'epoch': 0.55} 05/11/2024 20:11:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1038, 'learning_rate': 4.5916e-05, 'epoch': 0.55} 05/11/2024 20:11:49 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1800 05/11/2024 20:11:49 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:11:49 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:11:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1800/tokenizer_config.json 05/11/2024 20:11:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1800/special_tokens_map.json 05/11/2024 20:11:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1897, 'learning_rate': 4.5894e-05, 'epoch': 0.56} 05/11/2024 20:12:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1637, 'learning_rate': 4.5872e-05, 'epoch': 0.56} 05/11/2024 20:12:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.3695, 'learning_rate': 4.5850e-05, 'epoch': 0.56} 05/11/2024 20:12:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1085, 'learning_rate': 4.5827e-05, 'epoch': 0.56} 
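Across this stretch the learning rate falls smoothly from about 4.79e-05 (near step 1300) to about 4.58e-05 (near step 1800), which is consistent with a cosine decay schedule. A small sanity check of that assumption; the schedule type, the 5.0e-05 peak rate, and the ~9,750-step horizon are taken from the run setup rather than stated in these particular log lines:

    import math

    PEAK_LR = 5.0e-05      # configured initial learning rate for this run
    TOTAL_STEPS = 9750     # total optimization steps (3 epochs of roughly 3,250 steps)

    def cosine_lr(step, peak=PEAK_LR, total=TOTAL_STEPS):
        """Plain cosine decay with no warmup -- an assumption about the scheduler used here."""
        progress = step / total
        return 0.5 * peak * (1.0 + math.cos(math.pi * progress))

    # Near checkpoint-1800 the log reports 4.5916e-05; the cosine formula lands very close:
    print(f"{cosine_lr(1800):.4e}")   # ~4.59e-05

The same formula reproduces the values logged at the later checkpoints as well, so the cosine-schedule assumption looks sound.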
05/11/2024 20:12:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2322, 'learning_rate': 4.5805e-05, 'epoch': 0.56} 05/11/2024 20:12:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0482, 'learning_rate': 4.5783e-05, 'epoch': 0.56} 05/11/2024 20:13:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1320, 'learning_rate': 4.5760e-05, 'epoch': 0.56} 05/11/2024 20:13:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2385, 'learning_rate': 4.5738e-05, 'epoch': 0.57} 05/11/2024 20:13:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2718, 'learning_rate': 4.5715e-05, 'epoch': 0.57} 05/11/2024 20:13:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1639, 'learning_rate': 4.5693e-05, 'epoch': 0.57} 05/11/2024 20:13:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2753, 'learning_rate': 4.5670e-05, 'epoch': 0.57} 05/11/2024 20:13:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2298, 'learning_rate': 4.5648e-05, 'epoch': 0.57} 05/11/2024 20:14:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1079, 'learning_rate': 4.5625e-05, 'epoch': 0.57} 05/11/2024 20:14:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1423, 'learning_rate': 4.5602e-05, 'epoch': 0.58} 05/11/2024 20:14:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.3155, 'learning_rate': 4.5579e-05, 'epoch': 0.58} 05/11/2024 20:14:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1703, 'learning_rate': 4.5556e-05, 'epoch': 0.58} 05/11/2024 20:14:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2033, 'learning_rate': 4.5533e-05, 'epoch': 0.58} 05/11/2024 20:14:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1280, 'learning_rate': 4.5510e-05, 'epoch': 0.58} 05/11/2024 20:15:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1954, 'learning_rate': 4.5487e-05, 'epoch': 0.58} 05/11/2024 20:15:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1627, 'learning_rate': 4.5464e-05, 'epoch': 0.58} 05/11/2024 20:15:17 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1900 05/11/2024 20:15:17 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:15:17 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:15:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1900/tokenizer_config.json 05/11/2024 20:15:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-1900/special_tokens_map.json 05/11/2024 20:15:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2379, 'learning_rate': 4.5441e-05, 'epoch': 0.59} 05/11/2024 20:15:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2146, 'learning_rate': 
4.5418e-05, 'epoch': 0.59} 05/11/2024 20:15:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1367, 'learning_rate': 4.5395e-05, 'epoch': 0.59} 05/11/2024 20:15:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1230, 'learning_rate': 4.5371e-05, 'epoch': 0.59} 05/11/2024 20:16:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1501, 'learning_rate': 4.5348e-05, 'epoch': 0.59} 05/11/2024 20:16:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1789, 'learning_rate': 4.5324e-05, 'epoch': 0.59} 05/11/2024 20:16:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2626, 'learning_rate': 4.5301e-05, 'epoch': 0.60} 05/11/2024 20:16:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1505, 'learning_rate': 4.5277e-05, 'epoch': 0.60} 05/11/2024 20:16:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2476, 'learning_rate': 4.5254e-05, 'epoch': 0.60} 05/11/2024 20:17:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.0609, 'learning_rate': 4.5230e-05, 'epoch': 0.60} 05/11/2024 20:17:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1339, 'learning_rate': 4.5206e-05, 'epoch': 0.60} 05/11/2024 20:17:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.0505, 'learning_rate': 4.5183e-05, 'epoch': 0.60} 05/11/2024 20:17:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1124, 'learning_rate': 4.5159e-05, 'epoch': 0.60} 05/11/2024 20:17:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1829, 'learning_rate': 4.5135e-05, 'epoch': 0.61} 05/11/2024 20:17:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1963, 'learning_rate': 4.5111e-05, 'epoch': 0.61} 05/11/2024 20:18:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2722, 'learning_rate': 4.5087e-05, 'epoch': 0.61} 05/11/2024 20:18:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.3127, 'learning_rate': 4.5063e-05, 'epoch': 0.61} 05/11/2024 20:18:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.2423, 'learning_rate': 4.5039e-05, 'epoch': 0.61} 05/11/2024 20:18:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1971, 'learning_rate': 4.5015e-05, 'epoch': 0.61} 05/11/2024 20:18:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1892, 'learning_rate': 4.4991e-05, 'epoch': 0.62} 05/11/2024 20:18:41 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2000 05/11/2024 20:18:42 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:18:42 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:18:42 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2000/tokenizer_config.json 05/11/2024 20:18:42 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2000/special_tokens_map.json 05/11/2024 20:18:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0706, 'learning_rate': 4.4967e-05, 'epoch': 0.62} 05/11/2024 20:19:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2081, 'learning_rate': 4.4942e-05, 'epoch': 0.62} 05/11/2024 20:19:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1153, 'learning_rate': 4.4918e-05, 'epoch': 0.62} 05/11/2024 20:19:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2600, 'learning_rate': 4.4894e-05, 'epoch': 0.62} 05/11/2024 20:19:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1023, 'learning_rate': 4.4869e-05, 'epoch': 0.62} 05/11/2024 20:19:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1005, 'learning_rate': 4.4845e-05, 'epoch': 0.62} 05/11/2024 20:19:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.2555, 'learning_rate': 4.4820e-05, 'epoch': 0.63} 05/11/2024 20:20:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0987, 'learning_rate': 4.4796e-05, 'epoch': 0.63} 05/11/2024 20:20:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0992, 'learning_rate': 4.4771e-05, 'epoch': 0.63} 05/11/2024 20:20:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1127, 'learning_rate': 4.4746e-05, 'epoch': 0.63} 05/11/2024 20:20:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1410, 'learning_rate': 4.4722e-05, 'epoch': 0.63} 05/11/2024 20:20:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1934, 'learning_rate': 4.4697e-05, 'epoch': 0.63} 05/11/2024 20:20:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1290, 'learning_rate': 4.4672e-05, 'epoch': 0.64} 05/11/2024 20:21:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1553, 'learning_rate': 4.4647e-05, 'epoch': 0.64} 05/11/2024 20:21:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1845, 'learning_rate': 4.4622e-05, 'epoch': 0.64} 05/11/2024 20:21:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1640, 'learning_rate': 4.4597e-05, 'epoch': 0.64} 05/11/2024 20:21:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2635, 'learning_rate': 4.4572e-05, 'epoch': 0.64} 05/11/2024 20:21:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.2015, 'learning_rate': 4.4547e-05, 'epoch': 0.64} 05/11/2024 20:21:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0928, 'learning_rate': 4.4522e-05, 'epoch': 0.64} 05/11/2024 20:22:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2496, 'learning_rate': 4.4497e-05, 'epoch': 0.65} 05/11/2024 20:22:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2100 05/11/2024 20:22:07 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:22:07 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:22:08 
- INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2100/tokenizer_config.json 05/11/2024 20:22:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2100/special_tokens_map.json 05/11/2024 20:22:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1230, 'learning_rate': 4.4472e-05, 'epoch': 0.65} 05/11/2024 20:22:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0909, 'learning_rate': 4.4446e-05, 'epoch': 0.65} 05/11/2024 20:22:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2750, 'learning_rate': 4.4421e-05, 'epoch': 0.65} 05/11/2024 20:22:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1814, 'learning_rate': 4.4396e-05, 'epoch': 0.65} 05/11/2024 20:23:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1717, 'learning_rate': 4.4370e-05, 'epoch': 0.65} 05/11/2024 20:23:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2006, 'learning_rate': 4.4345e-05, 'epoch': 0.66} 05/11/2024 20:23:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1600, 'learning_rate': 4.4319e-05, 'epoch': 0.66} 05/11/2024 20:23:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.3311, 'learning_rate': 4.4294e-05, 'epoch': 0.66} 05/11/2024 20:23:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2131, 'learning_rate': 4.4268e-05, 'epoch': 0.66} 05/11/2024 20:23:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1500, 'learning_rate': 4.4242e-05, 'epoch': 0.66} 05/11/2024 20:24:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2054, 'learning_rate': 4.4217e-05, 'epoch': 0.66} 05/11/2024 20:24:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.0550, 'learning_rate': 4.4191e-05, 'epoch': 0.66} 05/11/2024 20:24:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2199, 'learning_rate': 4.4165e-05, 'epoch': 0.67} 05/11/2024 20:24:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1933, 'learning_rate': 4.4139e-05, 'epoch': 0.67} 05/11/2024 20:24:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1602, 'learning_rate': 4.4113e-05, 'epoch': 0.67} 05/11/2024 20:24:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2509, 'learning_rate': 4.4087e-05, 'epoch': 0.67} 05/11/2024 20:25:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1144, 'learning_rate': 4.4061e-05, 'epoch': 0.67} 05/11/2024 20:25:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.2816, 'learning_rate': 4.4035e-05, 'epoch': 0.67} 05/11/2024 20:25:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1233, 'learning_rate': 4.4009e-05, 'epoch': 0.68} 05/11/2024 20:25:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1980, 'learning_rate': 4.3983e-05, 'epoch': 0.68} 05/11/2024 20:25:36 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2200 05/11/2024 20:25:37 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:25:37 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:25:37 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2200/tokenizer_config.json 05/11/2024 20:25:37 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2200/special_tokens_map.json 05/11/2024 20:25:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2354, 'learning_rate': 4.3956e-05, 'epoch': 0.68} 05/11/2024 20:25:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1986, 'learning_rate': 4.3930e-05, 'epoch': 0.68} 05/11/2024 20:26:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.2441, 'learning_rate': 4.3904e-05, 'epoch': 0.68} 05/11/2024 20:26:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2142, 'learning_rate': 4.3877e-05, 'epoch': 0.68} 05/11/2024 20:26:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1192, 'learning_rate': 4.3851e-05, 'epoch': 0.68} 05/11/2024 20:26:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0806, 'learning_rate': 4.3825e-05, 'epoch': 0.69} 05/11/2024 20:26:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2906, 'learning_rate': 4.3798e-05, 'epoch': 0.69} 05/11/2024 20:27:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1450, 'learning_rate': 4.3771e-05, 'epoch': 0.69} 05/11/2024 20:27:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2479, 'learning_rate': 4.3745e-05, 'epoch': 0.69} 05/11/2024 20:27:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2843, 'learning_rate': 4.3718e-05, 'epoch': 0.69} 05/11/2024 20:27:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2483, 'learning_rate': 4.3691e-05, 'epoch': 0.69} 05/11/2024 20:27:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1166, 'learning_rate': 4.3665e-05, 'epoch': 0.70} 05/11/2024 20:27:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.2809, 'learning_rate': 4.3638e-05, 'epoch': 0.70} 05/11/2024 20:28:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1957, 'learning_rate': 4.3611e-05, 'epoch': 0.70} 05/11/2024 20:28:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1612, 'learning_rate': 4.3584e-05, 'epoch': 0.70} 05/11/2024 20:28:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1595, 'learning_rate': 4.3557e-05, 'epoch': 0.70} 05/11/2024 20:28:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1202, 'learning_rate': 4.3530e-05, 'epoch': 0.70} 05/11/2024 20:28:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1749, 'learning_rate': 4.3503e-05, 'epoch': 0.70} 05/11/2024 20:28:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1464, 'learning_rate': 4.3476e-05, 'epoch': 0.71} 05/11/2024 20:29:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1479, 'learning_rate': 4.3449e-05, 'epoch': 0.71} 05/11/2024 20:29:06 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2300 05/11/2024 20:29:07 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:29:07 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, 
"attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:29:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2300/tokenizer_config.json 05/11/2024 20:29:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2300/special_tokens_map.json 05/11/2024 20:29:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1548, 'learning_rate': 4.3421e-05, 'epoch': 0.71} 05/11/2024 20:29:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1267, 'learning_rate': 4.3394e-05, 'epoch': 0.71} 05/11/2024 20:29:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2297, 'learning_rate': 4.3367e-05, 'epoch': 0.71} 05/11/2024 20:29:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2280, 'learning_rate': 4.3340e-05, 'epoch': 0.71} 05/11/2024 20:29:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.0902, 'learning_rate': 4.3312e-05, 'epoch': 0.72} 05/11/2024 20:30:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2011, 'learning_rate': 4.3285e-05, 'epoch': 0.72} 05/11/2024 20:30:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1082, 'learning_rate': 4.3257e-05, 'epoch': 0.72} 05/11/2024 20:30:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2202, 'learning_rate': 4.3230e-05, 'epoch': 0.72} 05/11/2024 20:30:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1947, 'learning_rate': 4.3202e-05, 'epoch': 0.72} 05/11/2024 20:30:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1544, 'learning_rate': 4.3175e-05, 'epoch': 0.72} 05/11/2024 20:31:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1858, 'learning_rate': 4.3147e-05, 'epoch': 0.72} 05/11/2024 20:31:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2452, 'learning_rate': 4.3119e-05, 'epoch': 0.73} 05/11/2024 20:31:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1731, 'learning_rate': 4.3091e-05, 'epoch': 0.73} 05/11/2024 20:31:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1717, 'learning_rate': 4.3064e-05, 'epoch': 0.73} 05/11/2024 20:31:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1315, 'learning_rate': 4.3036e-05, 'epoch': 0.73} 05/11/2024 20:31:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1341, 'learning_rate': 4.3008e-05, 'epoch': 0.73} 05/11/2024 20:32:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0577, 'learning_rate': 4.2980e-05, 'epoch': 0.73} 05/11/2024 20:32:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1672, 'learning_rate': 4.2952e-05, 'epoch': 0.74} 05/11/2024 20:32:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1087, 'learning_rate': 4.2924e-05, 'epoch': 0.74} 05/11/2024 20:32:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1708, 'learning_rate': 4.2896e-05, 'epoch': 0.74} 05/11/2024 20:32:34 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2400 05/11/2024 20:32:35 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:32:35 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:32:35 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2400/tokenizer_config.json 05/11/2024 20:32:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2400/special_tokens_map.json 05/11/2024 20:32:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1594, 'learning_rate': 4.2867e-05, 'epoch': 0.74} 05/11/2024 20:32:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2085, 'learning_rate': 4.2839e-05, 'epoch': 0.74} 05/11/2024 20:33:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1877, 'learning_rate': 4.2811e-05, 'epoch': 0.74} 05/11/2024 20:33:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0301, 'learning_rate': 4.2783e-05, 'epoch': 0.74} 05/11/2024 20:33:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1589, 'learning_rate': 4.2754e-05, 'epoch': 0.75} 05/11/2024 20:33:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1945, 'learning_rate': 4.2726e-05, 'epoch': 0.75} 05/11/2024 20:33:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1851, 'learning_rate': 4.2698e-05, 'epoch': 0.75} 05/11/2024 20:33:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1269, 'learning_rate': 4.2669e-05, 'epoch': 0.75} 05/11/2024 20:34:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1546, 'learning_rate': 4.2641e-05, 'epoch': 0.75} 05/11/2024 20:34:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1901, 'learning_rate': 4.2612e-05, 'epoch': 0.75} 05/11/2024 20:34:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2222, 'learning_rate': 4.2583e-05, 'epoch': 0.76} 05/11/2024 20:34:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0660, 'learning_rate': 4.2555e-05, 'epoch': 0.76} 05/11/2024 20:34:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.3143, 'learning_rate': 4.2526e-05, 'epoch': 0.76} 05/11/2024 20:35:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1981, 'learning_rate': 4.2497e-05, 'epoch': 0.76} 05/11/2024 20:35:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1042, 'learning_rate': 4.2469e-05, 'epoch': 0.76} 05/11/2024 20:35:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1917, 'learning_rate': 4.2440e-05, 'epoch': 0.76} 05/11/2024 20:35:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2490, 'learning_rate': 4.2411e-05, 'epoch': 0.76} 05/11/2024 20:35:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2334, 'learning_rate': 4.2382e-05, 'epoch': 0.77} 05/11/2024 20:35:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.2110, 'learning_rate': 4.2353e-05, 'epoch': 0.77} 05/11/2024 20:36:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1696, 'learning_rate': 4.2324e-05, 'epoch': 
0.77} 05/11/2024 20:36:04 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2500 05/11/2024 20:36:05 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:36:05 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:36:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2500/tokenizer_config.json 05/11/2024 20:36:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2500/special_tokens_map.json 05/11/2024 20:36:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0995, 'learning_rate': 4.2295e-05, 'epoch': 0.77} 05/11/2024 20:36:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2787, 'learning_rate': 4.2266e-05, 'epoch': 0.77} 05/11/2024 20:36:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1928, 'learning_rate': 4.2237e-05, 'epoch': 0.77} 05/11/2024 20:36:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.2198, 'learning_rate': 4.2207e-05, 'epoch': 0.78} 05/11/2024 20:37:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1459, 'learning_rate': 4.2178e-05, 'epoch': 0.78} 05/11/2024 20:37:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2045, 'learning_rate': 4.2149e-05, 'epoch': 0.78} 05/11/2024 20:37:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1892, 'learning_rate': 4.2120e-05, 'epoch': 0.78} 05/11/2024 20:37:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2576, 'learning_rate': 4.2090e-05, 'epoch': 0.78} 05/11/2024 20:37:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.0698, 'learning_rate': 4.2061e-05, 'epoch': 0.78} 05/11/2024 20:37:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1858, 'learning_rate': 4.2031e-05, 'epoch': 0.78} 05/11/2024 20:38:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2202, 'learning_rate': 4.2002e-05, 'epoch': 0.79} 05/11/2024 20:38:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1182, 'learning_rate': 4.1972e-05, 'epoch': 0.79} 05/11/2024 20:38:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1568, 'learning_rate': 4.1943e-05, 'epoch': 0.79} 05/11/2024 20:38:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2269, 'learning_rate': 4.1913e-05, 'epoch': 0.79} 05/11/2024 20:38:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0355, 'learning_rate': 4.1883e-05, 'epoch': 0.79} 05/11/2024 20:38:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2607, 'learning_rate': 4.1854e-05, 'epoch': 0.79} 05/11/2024 20:39:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2368, 'learning_rate': 4.1824e-05, 'epoch': 0.80} 05/11/2024 20:39:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.3141, 
'learning_rate': 4.1794e-05, 'epoch': 0.80} 05/11/2024 20:39:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1347, 'learning_rate': 4.1764e-05, 'epoch': 0.80} 05/11/2024 20:39:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.9900, 'learning_rate': 4.1734e-05, 'epoch': 0.80} 05/11/2024 20:39:37 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2600 05/11/2024 20:39:38 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:39:38 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:39:38 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2600/tokenizer_config.json 05/11/2024 20:39:38 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2600/special_tokens_map.json 05/11/2024 20:39:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1714, 'learning_rate': 4.1704e-05, 'epoch': 0.80} 05/11/2024 20:39:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2142, 'learning_rate': 4.1674e-05, 'epoch': 0.80} 05/11/2024 20:40:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1401, 'learning_rate': 4.1644e-05, 'epoch': 0.80} 05/11/2024 20:40:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.2895, 'learning_rate': 4.1614e-05, 'epoch': 0.81} 05/11/2024 20:40:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0513, 'learning_rate': 4.1584e-05, 'epoch': 0.81} 05/11/2024 20:40:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1017, 'learning_rate': 4.1554e-05, 'epoch': 0.81} 05/11/2024 20:40:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2321, 'learning_rate': 4.1524e-05, 'epoch': 0.81} 05/11/2024 20:41:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.3027, 'learning_rate': 4.1493e-05, 'epoch': 0.81} 05/11/2024 20:41:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0455, 'learning_rate': 4.1463e-05, 'epoch': 0.81} 05/11/2024 20:41:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.0785, 'learning_rate': 4.1433e-05, 'epoch': 0.82} 05/11/2024 20:41:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2205, 'learning_rate': 4.1402e-05, 'epoch': 0.82} 05/11/2024 20:41:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0928, 'learning_rate': 4.1372e-05, 'epoch': 0.82} 05/11/2024 20:41:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.0875, 'learning_rate': 4.1342e-05, 'epoch': 0.82} 05/11/2024 20:41:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.3263, 'learning_rate': 4.1311e-05, 'epoch': 0.82} 05/11/2024 20:42:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0747, 'learning_rate': 4.1281e-05, 'epoch': 0.82} 05/11/2024 20:42:18 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1265, 'learning_rate': 4.1250e-05, 'epoch': 0.82} 05/11/2024 20:42:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1475, 'learning_rate': 4.1219e-05, 'epoch': 0.83} 05/11/2024 20:42:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.2344, 'learning_rate': 4.1189e-05, 'epoch': 0.83} 05/11/2024 20:42:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1101, 'learning_rate': 4.1158e-05, 'epoch': 0.83} 05/11/2024 20:43:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2955, 'learning_rate': 4.1127e-05, 'epoch': 0.83} 05/11/2024 20:43:01 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2700 05/11/2024 20:43:02 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:43:02 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:43:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2700/tokenizer_config.json 05/11/2024 20:43:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2700/special_tokens_map.json 05/11/2024 20:43:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1907, 'learning_rate': 4.1096e-05, 'epoch': 0.83} 05/11/2024 20:43:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1430, 'learning_rate': 4.1066e-05, 'epoch': 0.83} 05/11/2024 20:43:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1143, 'learning_rate': 4.1035e-05, 'epoch': 0.84} 05/11/2024 20:43:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1621, 'learning_rate': 4.1004e-05, 'epoch': 0.84} 05/11/2024 20:43:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1911, 'learning_rate': 4.0973e-05, 'epoch': 0.84} 05/11/2024 20:44:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1710, 'learning_rate': 4.0942e-05, 'epoch': 0.84} 05/11/2024 20:44:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2642, 'learning_rate': 4.0911e-05, 'epoch': 0.84} 05/11/2024 20:44:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2212, 'learning_rate': 4.0880e-05, 'epoch': 0.84} 05/11/2024 20:44:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1457, 'learning_rate': 4.0849e-05, 'epoch': 0.84} 05/11/2024 20:44:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2012, 'learning_rate': 4.0817e-05, 'epoch': 0.85} 05/11/2024 20:44:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2624, 'learning_rate': 4.0786e-05, 'epoch': 0.85} 05/11/2024 20:45:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.0656, 'learning_rate': 4.0755e-05, 'epoch': 0.85} 05/11/2024 20:45:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0405, 'learning_rate': 4.0724e-05, 'epoch': 0.85} 
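Each checkpoint-XXXX directory written above holds a LoRA adapter plus tokenizer files, not a full copy of the model weights. A hedged sketch of loading one of them for inference with PEFT; the adapter path is taken from the log, while the exact files inside the directory depend on the LLaMA-Factory/llmtuner version used:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
    ADAPTER = "saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2700"

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
    model = PeftModel.from_pretrained(model, ADAPTER)   # attach the saved LoRA adapter
    model = model.merge_and_unload()                    # optional: fold the adapter into the base weights

Merging is optional; keeping the adapter separate makes it easy to compare several of the saved checkpoints against the same base model.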
05/11/2024 20:45:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2393, 'learning_rate': 4.0692e-05, 'epoch': 0.85} 05/11/2024 20:45:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2778, 'learning_rate': 4.0661e-05, 'epoch': 0.85} 05/11/2024 20:45:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1223, 'learning_rate': 4.0629e-05, 'epoch': 0.86} 05/11/2024 20:45:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1825, 'learning_rate': 4.0598e-05, 'epoch': 0.86} 05/11/2024 20:46:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1504, 'learning_rate': 4.0567e-05, 'epoch': 0.86} 05/11/2024 20:46:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.2997, 'learning_rate': 4.0535e-05, 'epoch': 0.86} 05/11/2024 20:46:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2010, 'learning_rate': 4.0503e-05, 'epoch': 0.86} 05/11/2024 20:46:30 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2800 05/11/2024 20:46:30 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:46:30 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:46:30 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2800/tokenizer_config.json 05/11/2024 20:46:30 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2800/special_tokens_map.json 05/11/2024 20:46:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0921, 'learning_rate': 4.0472e-05, 'epoch': 0.86} 05/11/2024 20:46:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1598, 'learning_rate': 4.0440e-05, 'epoch': 0.86} 05/11/2024 20:47:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2355, 'learning_rate': 4.0408e-05, 'epoch': 0.87} 05/11/2024 20:47:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1072, 'learning_rate': 4.0377e-05, 'epoch': 0.87} 05/11/2024 20:47:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1840, 'learning_rate': 4.0345e-05, 'epoch': 0.87} 05/11/2024 20:47:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1602, 'learning_rate': 4.0313e-05, 'epoch': 0.87} 05/11/2024 20:47:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1715, 'learning_rate': 4.0281e-05, 'epoch': 0.87} 05/11/2024 20:47:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1610, 'learning_rate': 4.0249e-05, 'epoch': 0.87} 05/11/2024 20:48:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2401, 'learning_rate': 4.0217e-05, 'epoch': 0.88} 05/11/2024 20:48:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2890, 'learning_rate': 4.0185e-05, 'epoch': 0.88} 05/11/2024 20:48:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1013, 'learning_rate': 
4.0153e-05, 'epoch': 0.88} 05/11/2024 20:48:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1572, 'learning_rate': 4.0121e-05, 'epoch': 0.88} 05/11/2024 20:48:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1393, 'learning_rate': 4.0089e-05, 'epoch': 0.88} 05/11/2024 20:48:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1285, 'learning_rate': 4.0057e-05, 'epoch': 0.88} 05/11/2024 20:49:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1748, 'learning_rate': 4.0025e-05, 'epoch': 0.88} 05/11/2024 20:49:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1880, 'learning_rate': 3.9993e-05, 'epoch': 0.89} 05/11/2024 20:49:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1547, 'learning_rate': 3.9961e-05, 'epoch': 0.89} 05/11/2024 20:49:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1289, 'learning_rate': 3.9928e-05, 'epoch': 0.89} 05/11/2024 20:49:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2032, 'learning_rate': 3.9896e-05, 'epoch': 0.89} 05/11/2024 20:49:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0806, 'learning_rate': 3.9864e-05, 'epoch': 0.89} 05/11/2024 20:49:54 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2900 05/11/2024 20:49:55 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:49:55 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:49:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2900/tokenizer_config.json 05/11/2024 20:49:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-2900/special_tokens_map.json 05/11/2024 20:50:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1205, 'learning_rate': 3.9831e-05, 'epoch': 0.89} 05/11/2024 20:50:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1769, 'learning_rate': 3.9799e-05, 'epoch': 0.90} 05/11/2024 20:50:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1518, 'learning_rate': 3.9766e-05, 'epoch': 0.90} 05/11/2024 20:50:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1560, 'learning_rate': 3.9734e-05, 'epoch': 0.90} 05/11/2024 20:50:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0314, 'learning_rate': 3.9701e-05, 'epoch': 0.90} 05/11/2024 20:50:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1778, 'learning_rate': 3.9669e-05, 'epoch': 0.90} 05/11/2024 20:51:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1217, 'learning_rate': 3.9636e-05, 'epoch': 0.90} 05/11/2024 20:51:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2169, 'learning_rate': 3.9603e-05, 'epoch': 0.90} 05/11/2024 20:51:27 - INFO - llmtuner.extras.callbacks - {'loss': 
1.1911, 'learning_rate': 3.9571e-05, 'epoch': 0.91} 05/11/2024 20:51:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1837, 'learning_rate': 3.9538e-05, 'epoch': 0.91} 05/11/2024 20:51:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2342, 'learning_rate': 3.9505e-05, 'epoch': 0.91} 05/11/2024 20:51:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2045, 'learning_rate': 3.9472e-05, 'epoch': 0.91} 05/11/2024 20:52:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2006, 'learning_rate': 3.9439e-05, 'epoch': 0.91} 05/11/2024 20:52:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1462, 'learning_rate': 3.9406e-05, 'epoch': 0.91} 05/11/2024 20:52:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1595, 'learning_rate': 3.9373e-05, 'epoch': 0.92} 05/11/2024 20:52:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1756, 'learning_rate': 3.9341e-05, 'epoch': 0.92} 05/11/2024 20:52:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1852, 'learning_rate': 3.9308e-05, 'epoch': 0.92} 05/11/2024 20:53:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0803, 'learning_rate': 3.9274e-05, 'epoch': 0.92} 05/11/2024 20:53:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2017, 'learning_rate': 3.9241e-05, 'epoch': 0.92} 05/11/2024 20:53:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1179, 'learning_rate': 3.9208e-05, 'epoch': 0.92} 05/11/2024 20:53:23 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3000 05/11/2024 20:53:23 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:53:24 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:53:24 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3000/tokenizer_config.json 05/11/2024 20:53:24 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3000/special_tokens_map.json 05/11/2024 20:53:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0590, 'learning_rate': 3.9175e-05, 'epoch': 0.92} 05/11/2024 20:53:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1931, 'learning_rate': 3.9142e-05, 'epoch': 0.93} 05/11/2024 20:53:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1452, 'learning_rate': 3.9109e-05, 'epoch': 0.93} 05/11/2024 20:54:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0446, 'learning_rate': 3.9075e-05, 'epoch': 0.93} 05/11/2024 20:54:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1965, 'learning_rate': 3.9042e-05, 'epoch': 0.93} 05/11/2024 20:54:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1604, 'learning_rate': 3.9009e-05, 'epoch': 0.93} 05/11/2024 20:54:34 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1589, 'learning_rate': 3.8975e-05, 'epoch': 0.93} 05/11/2024 20:54:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2621, 'learning_rate': 3.8942e-05, 'epoch': 0.94} 05/11/2024 20:54:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2114, 'learning_rate': 3.8909e-05, 'epoch': 0.94} 05/11/2024 20:55:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1918, 'learning_rate': 3.8875e-05, 'epoch': 0.94} 05/11/2024 20:55:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1405, 'learning_rate': 3.8841e-05, 'epoch': 0.94} 05/11/2024 20:55:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1581, 'learning_rate': 3.8808e-05, 'epoch': 0.94} 05/11/2024 20:55:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1005, 'learning_rate': 3.8774e-05, 'epoch': 0.94} 05/11/2024 20:55:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0558, 'learning_rate': 3.8741e-05, 'epoch': 0.94} 05/11/2024 20:55:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2505, 'learning_rate': 3.8707e-05, 'epoch': 0.95} 05/11/2024 20:56:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2257, 'learning_rate': 3.8673e-05, 'epoch': 0.95} 05/11/2024 20:56:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0496, 'learning_rate': 3.8640e-05, 'epoch': 0.95} 05/11/2024 20:56:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2241, 'learning_rate': 3.8606e-05, 'epoch': 0.95} 05/11/2024 20:56:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.2073, 'learning_rate': 3.8572e-05, 'epoch': 0.95} 05/11/2024 20:56:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2149, 'learning_rate': 3.8538e-05, 'epoch': 0.95} 05/11/2024 20:56:49 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3100 05/11/2024 20:56:50 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 20:56:50 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 20:56:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3100/tokenizer_config.json 05/11/2024 20:56:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3100/special_tokens_map.json 05/11/2024 20:57:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2619, 'learning_rate': 3.8504e-05, 'epoch': 0.96} 05/11/2024 20:57:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1022, 'learning_rate': 3.8470e-05, 'epoch': 0.96} 05/11/2024 20:57:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.3198, 'learning_rate': 3.8436e-05, 'epoch': 0.96} 05/11/2024 20:57:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1009, 'learning_rate': 3.8402e-05, 'epoch': 0.96} 
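The learning_rate values logged around these entries fall smoothly from about 4.0e-05 toward 3.85e-05, which is consistent with a cosine decay from the 5.0000e-05 starting rate over the run's 9,750 total optimization steps; the schedule type itself is never printed in the log, so this is an inference, not something the trainer states. A minimal sketch that reproduces the logged values under that assumption:

import math

# Assumed cosine schedule with no visible warmup:
#   lr(step) = 0.5 * lr_max * (1 + cos(pi * step / total_steps))
def cosine_lr(step, total_steps=9750, lr_max=5.0e-5):
    return 0.5 * lr_max * (1.0 + math.cos(math.pi * step / total_steps))

print(f"{cosine_lr(2900):.4e}")  # ~3.99e-05; the log shows 3.9864e-05 just before checkpoint-2900
print(f"{cosine_lr(3100):.4e}")  # ~3.85e-05; the log shows 3.8538e-05 just before checkpoint-3100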
05/11/2024 20:57:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1355, 'learning_rate': 3.8368e-05, 'epoch': 0.96} 05/11/2024 20:57:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1365, 'learning_rate': 3.8334e-05, 'epoch': 0.96} 05/11/2024 20:58:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0481, 'learning_rate': 3.8300e-05, 'epoch': 0.96} 05/11/2024 20:58:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.3200, 'learning_rate': 3.8266e-05, 'epoch': 0.97} 05/11/2024 20:58:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2055, 'learning_rate': 3.8232e-05, 'epoch': 0.97} 05/11/2024 20:58:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1541, 'learning_rate': 3.8198e-05, 'epoch': 0.97} 05/11/2024 20:58:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2130, 'learning_rate': 3.8164e-05, 'epoch': 0.97} 05/11/2024 20:58:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.2212, 'learning_rate': 3.8129e-05, 'epoch': 0.97} 05/11/2024 20:59:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2327, 'learning_rate': 3.8095e-05, 'epoch': 0.97} 05/11/2024 20:59:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1732, 'learning_rate': 3.8061e-05, 'epoch': 0.98} 05/11/2024 20:59:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1486, 'learning_rate': 3.8026e-05, 'epoch': 0.98} 05/11/2024 20:59:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2067, 'learning_rate': 3.7992e-05, 'epoch': 0.98} 05/11/2024 20:59:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1672, 'learning_rate': 3.7958e-05, 'epoch': 0.98} 05/11/2024 20:59:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1711, 'learning_rate': 3.7923e-05, 'epoch': 0.98} 05/11/2024 21:00:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1883, 'learning_rate': 3.7889e-05, 'epoch': 0.98} 05/11/2024 21:00:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0442, 'learning_rate': 3.7854e-05, 'epoch': 0.98} 05/11/2024 21:00:18 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3200 05/11/2024 21:00:18 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:00:18 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:00:18 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3200/tokenizer_config.json 05/11/2024 21:00:18 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3200/special_tokens_map.json 05/11/2024 21:00:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2081, 'learning_rate': 3.7820e-05, 'epoch': 0.99} 05/11/2024 21:00:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2106, 'learning_rate': 
3.7785e-05, 'epoch': 0.99} 05/11/2024 21:00:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0911, 'learning_rate': 3.7750e-05, 'epoch': 0.99} 05/11/2024 21:00:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1857, 'learning_rate': 3.7716e-05, 'epoch': 0.99} 05/11/2024 21:01:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2005, 'learning_rate': 3.7681e-05, 'epoch': 0.99} 05/11/2024 21:01:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1781, 'learning_rate': 3.7646e-05, 'epoch': 0.99} 05/11/2024 21:01:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.2937, 'learning_rate': 3.7611e-05, 'epoch': 1.00} 05/11/2024 21:01:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0958, 'learning_rate': 3.7577e-05, 'epoch': 1.00} 05/11/2024 21:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.3530, 'learning_rate': 3.7542e-05, 'epoch': 1.00} 05/11/2024 21:02:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1591, 'learning_rate': 3.7507e-05, 'epoch': 1.00} 05/11/2024 21:02:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1936, 'learning_rate': 3.7472e-05, 'epoch': 1.00} 05/11/2024 21:02:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1801, 'learning_rate': 3.7437e-05, 'epoch': 1.00} 05/11/2024 21:02:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0709, 'learning_rate': 3.7402e-05, 'epoch': 1.00} 05/11/2024 21:02:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2295, 'learning_rate': 3.7367e-05, 'epoch': 1.01} 05/11/2024 21:02:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1981, 'learning_rate': 3.7332e-05, 'epoch': 1.01} 05/11/2024 21:03:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1362, 'learning_rate': 3.7297e-05, 'epoch': 1.01} 05/11/2024 21:03:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0372, 'learning_rate': 3.7262e-05, 'epoch': 1.01} 05/11/2024 21:03:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.0715, 'learning_rate': 3.7227e-05, 'epoch': 1.01} 05/11/2024 21:03:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2701, 'learning_rate': 3.7192e-05, 'epoch': 1.01} 05/11/2024 21:03:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1084, 'learning_rate': 3.7157e-05, 'epoch': 1.02} 05/11/2024 21:03:48 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3300 05/11/2024 21:03:49 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:03:49 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:03:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3300/tokenizer_config.json 05/11/2024 21:03:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3300/special_tokens_map.json 05/11/2024 21:03:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1145, 'learning_rate': 3.7121e-05, 'epoch': 1.02} 05/11/2024 21:04:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.0873, 'learning_rate': 3.7086e-05, 'epoch': 1.02} 05/11/2024 21:04:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1808, 'learning_rate': 3.7051e-05, 'epoch': 1.02} 05/11/2024 21:04:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1542, 'learning_rate': 3.7016e-05, 'epoch': 1.02} 05/11/2024 21:04:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1148, 'learning_rate': 3.6980e-05, 'epoch': 1.02} 05/11/2024 21:04:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1327, 'learning_rate': 3.6945e-05, 'epoch': 1.02} 05/11/2024 21:05:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0530, 'learning_rate': 3.6909e-05, 'epoch': 1.03} 05/11/2024 21:05:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0974, 'learning_rate': 3.6874e-05, 'epoch': 1.03} 05/11/2024 21:05:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2385, 'learning_rate': 3.6839e-05, 'epoch': 1.03} 05/11/2024 21:05:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2269, 'learning_rate': 3.6803e-05, 'epoch': 1.03} 05/11/2024 21:05:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2326, 'learning_rate': 3.6768e-05, 'epoch': 1.03} 05/11/2024 21:05:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0957, 'learning_rate': 3.6732e-05, 'epoch': 1.03} 05/11/2024 21:06:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1362, 'learning_rate': 3.6696e-05, 'epoch': 1.04} 05/11/2024 21:06:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1386, 'learning_rate': 3.6661e-05, 'epoch': 1.04} 05/11/2024 21:06:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.0517, 'learning_rate': 3.6625e-05, 'epoch': 1.04} 05/11/2024 21:06:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1088, 'learning_rate': 3.6590e-05, 'epoch': 1.04} 05/11/2024 21:06:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1799, 'learning_rate': 3.6554e-05, 'epoch': 1.04} 05/11/2024 21:06:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1934, 'learning_rate': 3.6518e-05, 'epoch': 1.04} 05/11/2024 21:07:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1904, 'learning_rate': 3.6482e-05, 'epoch': 1.04} 05/11/2024 21:07:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1015, 'learning_rate': 3.6447e-05, 'epoch': 1.05} 05/11/2024 21:07:18 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3400 05/11/2024 21:07:19 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:07:19 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:07:19 
- INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3400/tokenizer_config.json 05/11/2024 21:07:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3400/special_tokens_map.json 05/11/2024 21:07:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1911, 'learning_rate': 3.6411e-05, 'epoch': 1.05} 05/11/2024 21:07:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1657, 'learning_rate': 3.6375e-05, 'epoch': 1.05} 05/11/2024 21:07:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1080, 'learning_rate': 3.6339e-05, 'epoch': 1.05} 05/11/2024 21:08:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2603, 'learning_rate': 3.6303e-05, 'epoch': 1.05} 05/11/2024 21:08:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2641, 'learning_rate': 3.6267e-05, 'epoch': 1.05} 05/11/2024 21:08:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2124, 'learning_rate': 3.6231e-05, 'epoch': 1.06} 05/11/2024 21:08:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1966, 'learning_rate': 3.6195e-05, 'epoch': 1.06} 05/11/2024 21:08:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.9945, 'learning_rate': 3.6159e-05, 'epoch': 1.06} 05/11/2024 21:08:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1743, 'learning_rate': 3.6123e-05, 'epoch': 1.06} 05/11/2024 21:09:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0994, 'learning_rate': 3.6087e-05, 'epoch': 1.06} 05/11/2024 21:09:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0864, 'learning_rate': 3.6051e-05, 'epoch': 1.06} 05/11/2024 21:09:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1245, 'learning_rate': 3.6015e-05, 'epoch': 1.06} 05/11/2024 21:09:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1763, 'learning_rate': 3.5979e-05, 'epoch': 1.07} 05/11/2024 21:09:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2429, 'learning_rate': 3.5942e-05, 'epoch': 1.07} 05/11/2024 21:09:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1356, 'learning_rate': 3.5906e-05, 'epoch': 1.07} 05/11/2024 21:10:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2321, 'learning_rate': 3.5870e-05, 'epoch': 1.07} 05/11/2024 21:10:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1777, 'learning_rate': 3.5834e-05, 'epoch': 1.07} 05/11/2024 21:10:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1767, 'learning_rate': 3.5797e-05, 'epoch': 1.07} 05/11/2024 21:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1832, 'learning_rate': 3.5761e-05, 'epoch': 1.08} 05/11/2024 21:10:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0945, 'learning_rate': 3.5725e-05, 'epoch': 1.08} 05/11/2024 21:10:47 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3500 05/11/2024 21:10:48 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:10:48 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:10:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3500/tokenizer_config.json 05/11/2024 21:10:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3500/special_tokens_map.json 05/11/2024 21:10:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1804, 'learning_rate': 3.5688e-05, 'epoch': 1.08} 05/11/2024 21:11:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1204, 'learning_rate': 3.5652e-05, 'epoch': 1.08} 05/11/2024 21:11:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.0483, 'learning_rate': 3.5615e-05, 'epoch': 1.08} 05/11/2024 21:11:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1178, 'learning_rate': 3.5579e-05, 'epoch': 1.08} 05/11/2024 21:11:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1746, 'learning_rate': 3.5542e-05, 'epoch': 1.08} 05/11/2024 21:11:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0747, 'learning_rate': 3.5506e-05, 'epoch': 1.09} 05/11/2024 21:12:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1877, 'learning_rate': 3.5469e-05, 'epoch': 1.09} 05/11/2024 21:12:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2445, 'learning_rate': 3.5433e-05, 'epoch': 1.09} 05/11/2024 21:12:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2454, 'learning_rate': 3.5396e-05, 'epoch': 1.09} 05/11/2024 21:12:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2255, 'learning_rate': 3.5359e-05, 'epoch': 1.09} 05/11/2024 21:12:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1488, 'learning_rate': 3.5323e-05, 'epoch': 1.09} 05/11/2024 21:12:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1498, 'learning_rate': 3.5286e-05, 'epoch': 1.10} 05/11/2024 21:13:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2605, 'learning_rate': 3.5249e-05, 'epoch': 1.10} 05/11/2024 21:13:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1793, 'learning_rate': 3.5213e-05, 'epoch': 1.10} 05/11/2024 21:13:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2756, 'learning_rate': 3.5176e-05, 'epoch': 1.10} 05/11/2024 21:13:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1377, 'learning_rate': 3.5139e-05, 'epoch': 1.10} 05/11/2024 21:13:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0172, 'learning_rate': 3.5102e-05, 'epoch': 1.10} 05/11/2024 21:13:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2052, 'learning_rate': 3.5065e-05, 'epoch': 1.10} 05/11/2024 21:14:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2435, 'learning_rate': 3.5028e-05, 'epoch': 1.11} 05/11/2024 21:14:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0600, 'learning_rate': 3.4991e-05, 'epoch': 1.11} 05/11/2024 21:14:18 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3600 05/11/2024 21:14:19 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:14:19 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, 
"attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:14:19 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3600/tokenizer_config.json 05/11/2024 21:14:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3600/special_tokens_map.json 05/11/2024 21:14:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0506, 'learning_rate': 3.4955e-05, 'epoch': 1.11} 05/11/2024 21:14:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0990, 'learning_rate': 3.4918e-05, 'epoch': 1.11} 05/11/2024 21:14:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1987, 'learning_rate': 3.4881e-05, 'epoch': 1.11} 05/11/2024 21:15:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1429, 'learning_rate': 3.4844e-05, 'epoch': 1.11} 05/11/2024 21:15:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1281, 'learning_rate': 3.4807e-05, 'epoch': 1.12} 05/11/2024 21:15:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1641, 'learning_rate': 3.4770e-05, 'epoch': 1.12} 05/11/2024 21:15:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.2067, 'learning_rate': 3.4732e-05, 'epoch': 1.12} 05/11/2024 21:15:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1588, 'learning_rate': 3.4695e-05, 'epoch': 1.12} 05/11/2024 21:15:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1281, 'learning_rate': 3.4658e-05, 'epoch': 1.12} 05/11/2024 21:16:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2701, 'learning_rate': 3.4621e-05, 'epoch': 1.12} 05/11/2024 21:16:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2230, 'learning_rate': 3.4584e-05, 'epoch': 1.12} 05/11/2024 21:16:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1068, 'learning_rate': 3.4547e-05, 'epoch': 1.13} 05/11/2024 21:16:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.2160, 'learning_rate': 3.4509e-05, 'epoch': 1.13} 05/11/2024 21:16:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1304, 'learning_rate': 3.4472e-05, 'epoch': 1.13} 05/11/2024 21:16:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1709, 'learning_rate': 3.4435e-05, 'epoch': 1.13} 05/11/2024 21:17:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2391, 'learning_rate': 3.4398e-05, 'epoch': 1.13} 05/11/2024 21:17:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1445, 'learning_rate': 3.4360e-05, 'epoch': 1.13} 05/11/2024 21:17:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1943, 'learning_rate': 3.4323e-05, 'epoch': 1.14} 05/11/2024 21:17:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2019, 'learning_rate': 3.4285e-05, 'epoch': 1.14} 05/11/2024 21:17:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1219, 'learning_rate': 3.4248e-05, 'epoch': 1.14} 05/11/2024 21:17:48 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3700 05/11/2024 21:17:49 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:17:49 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:17:49 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3700/tokenizer_config.json 05/11/2024 21:17:49 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3700/special_tokens_map.json 05/11/2024 21:18:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2591, 'learning_rate': 3.4211e-05, 'epoch': 1.14} 05/11/2024 21:18:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1153, 'learning_rate': 3.4173e-05, 'epoch': 1.14} 05/11/2024 21:18:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.2842, 'learning_rate': 3.4136e-05, 'epoch': 1.14} 05/11/2024 21:18:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1178, 'learning_rate': 3.4098e-05, 'epoch': 1.14} 05/11/2024 21:18:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1444, 'learning_rate': 3.4061e-05, 'epoch': 1.15} 05/11/2024 21:18:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1948, 'learning_rate': 3.4023e-05, 'epoch': 1.15} 05/11/2024 21:19:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.0960, 'learning_rate': 3.3986e-05, 'epoch': 1.15} 05/11/2024 21:19:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2706, 'learning_rate': 3.3948e-05, 'epoch': 1.15} 05/11/2024 21:19:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1966, 'learning_rate': 3.3910e-05, 'epoch': 1.15} 05/11/2024 21:19:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1628, 'learning_rate': 3.3873e-05, 'epoch': 1.15} 05/11/2024 21:19:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1630, 'learning_rate': 3.3835e-05, 'epoch': 1.16} 05/11/2024 21:19:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1373, 'learning_rate': 3.3797e-05, 'epoch': 1.16} 05/11/2024 21:20:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1696, 'learning_rate': 3.3760e-05, 'epoch': 1.16} 05/11/2024 21:20:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.0845, 'learning_rate': 3.3722e-05, 'epoch': 1.16} 05/11/2024 21:20:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1714, 'learning_rate': 3.3684e-05, 'epoch': 1.16} 05/11/2024 21:20:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1125, 'learning_rate': 3.3646e-05, 'epoch': 1.16} 05/11/2024 21:20:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2847, 'learning_rate': 3.3609e-05, 'epoch': 1.16} 05/11/2024 21:20:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1486, 'learning_rate': 3.3571e-05, 'epoch': 1.17} 05/11/2024 21:21:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1909, 'learning_rate': 3.3533e-05, 'epoch': 1.17} 05/11/2024 21:21:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1553, 'learning_rate': 3.3495e-05, 'epoch': 
1.17} 05/11/2024 21:21:17 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3800 05/11/2024 21:21:18 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:21:18 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:21:18 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3800/tokenizer_config.json 05/11/2024 21:21:18 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3800/special_tokens_map.json 05/11/2024 21:21:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2083, 'learning_rate': 3.3457e-05, 'epoch': 1.17} 05/11/2024 21:21:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1371, 'learning_rate': 3.3419e-05, 'epoch': 1.17} 05/11/2024 21:21:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1377, 'learning_rate': 3.3381e-05, 'epoch': 1.17} 05/11/2024 21:22:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1152, 'learning_rate': 3.3343e-05, 'epoch': 1.18} 05/11/2024 21:22:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1877, 'learning_rate': 3.3305e-05, 'epoch': 1.18} 05/11/2024 21:22:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1014, 'learning_rate': 3.3267e-05, 'epoch': 1.18} 05/11/2024 21:22:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2185, 'learning_rate': 3.3229e-05, 'epoch': 1.18} 05/11/2024 21:22:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1972, 'learning_rate': 3.3191e-05, 'epoch': 1.18} 05/11/2024 21:22:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1984, 'learning_rate': 3.3153e-05, 'epoch': 1.18} 05/11/2024 21:23:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1742, 'learning_rate': 3.3115e-05, 'epoch': 1.18} 05/11/2024 21:23:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1949, 'learning_rate': 3.3077e-05, 'epoch': 1.19} 05/11/2024 21:23:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1302, 'learning_rate': 3.3039e-05, 'epoch': 1.19} 05/11/2024 21:23:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1606, 'learning_rate': 3.3001e-05, 'epoch': 1.19} 05/11/2024 21:23:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2142, 'learning_rate': 3.2963e-05, 'epoch': 1.19} 05/11/2024 21:23:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2005, 'learning_rate': 3.2924e-05, 'epoch': 1.19} 05/11/2024 21:24:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1488, 'learning_rate': 3.2886e-05, 'epoch': 1.19} 05/11/2024 21:24:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1306, 'learning_rate': 3.2848e-05, 'epoch': 1.20} 05/11/2024 21:24:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2155, 
'learning_rate': 3.2810e-05, 'epoch': 1.20} 05/11/2024 21:24:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1306, 'learning_rate': 3.2771e-05, 'epoch': 1.20} 05/11/2024 21:24:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1581, 'learning_rate': 3.2733e-05, 'epoch': 1.20} 05/11/2024 21:24:52 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3900 05/11/2024 21:24:52 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:24:52 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:24:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3900/tokenizer_config.json 05/11/2024 21:24:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-3900/special_tokens_map.json 05/11/2024 21:25:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2380, 'learning_rate': 3.2695e-05, 'epoch': 1.20} 05/11/2024 21:25:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2139, 'learning_rate': 3.2656e-05, 'epoch': 1.20} 05/11/2024 21:25:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.9819, 'learning_rate': 3.2618e-05, 'epoch': 1.20} 05/11/2024 21:25:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1440, 'learning_rate': 3.2580e-05, 'epoch': 1.21} 05/11/2024 21:25:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2498, 'learning_rate': 3.2541e-05, 'epoch': 1.21} 05/11/2024 21:25:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1648, 'learning_rate': 3.2503e-05, 'epoch': 1.21} 05/11/2024 21:26:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2591, 'learning_rate': 3.2464e-05, 'epoch': 1.21} 05/11/2024 21:26:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2592, 'learning_rate': 3.2426e-05, 'epoch': 1.21} 05/11/2024 21:26:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2349, 'learning_rate': 3.2388e-05, 'epoch': 1.21} 05/11/2024 21:26:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1056, 'learning_rate': 3.2349e-05, 'epoch': 1.22} 05/11/2024 21:26:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1998, 'learning_rate': 3.2311e-05, 'epoch': 1.22} 05/11/2024 21:26:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1915, 'learning_rate': 3.2272e-05, 'epoch': 1.22} 05/11/2024 21:27:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1046, 'learning_rate': 3.2234e-05, 'epoch': 1.22} 05/11/2024 21:27:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1206, 'learning_rate': 3.2195e-05, 'epoch': 1.22} 05/11/2024 21:27:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1120, 'learning_rate': 3.2156e-05, 'epoch': 1.22} 05/11/2024 21:27:38 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1660, 'learning_rate': 3.2118e-05, 'epoch': 1.22} 05/11/2024 21:27:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2695, 'learning_rate': 3.2079e-05, 'epoch': 1.23} 05/11/2024 21:28:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1158, 'learning_rate': 3.2041e-05, 'epoch': 1.23} 05/11/2024 21:28:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1164, 'learning_rate': 3.2002e-05, 'epoch': 1.23} 05/11/2024 21:28:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1607, 'learning_rate': 3.1963e-05, 'epoch': 1.23} 05/11/2024 21:28:21 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4000 05/11/2024 21:28:22 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:28:22 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:28:22 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4000/tokenizer_config.json 05/11/2024 21:28:22 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4000/special_tokens_map.json 05/11/2024 21:28:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1518, 'learning_rate': 3.1924e-05, 'epoch': 1.23} 05/11/2024 21:28:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0802, 'learning_rate': 3.1886e-05, 'epoch': 1.23} 05/11/2024 21:28:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1637, 'learning_rate': 3.1847e-05, 'epoch': 1.24} 05/11/2024 21:29:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1652, 'learning_rate': 3.1808e-05, 'epoch': 1.24} 05/11/2024 21:29:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1713, 'learning_rate': 3.1770e-05, 'epoch': 1.24} 05/11/2024 21:29:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1362, 'learning_rate': 3.1731e-05, 'epoch': 1.24} 05/11/2024 21:29:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1525, 'learning_rate': 3.1692e-05, 'epoch': 1.24} 05/11/2024 21:29:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1085, 'learning_rate': 3.1653e-05, 'epoch': 1.24} 05/11/2024 21:29:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2047, 'learning_rate': 3.1614e-05, 'epoch': 1.24} 05/11/2024 21:30:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1670, 'learning_rate': 3.1575e-05, 'epoch': 1.25} 05/11/2024 21:30:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0551, 'learning_rate': 3.1537e-05, 'epoch': 1.25} 05/11/2024 21:30:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.0985, 'learning_rate': 3.1498e-05, 'epoch': 1.25} 05/11/2024 21:30:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2302, 'learning_rate': 3.1459e-05, 'epoch': 1.25} 
05/11/2024 21:30:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1658, 'learning_rate': 3.1420e-05, 'epoch': 1.25} 05/11/2024 21:30:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1095, 'learning_rate': 3.1381e-05, 'epoch': 1.25} 05/11/2024 21:31:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2342, 'learning_rate': 3.1342e-05, 'epoch': 1.26} 05/11/2024 21:31:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0888, 'learning_rate': 3.1303e-05, 'epoch': 1.26} 05/11/2024 21:31:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.0256, 'learning_rate': 3.1264e-05, 'epoch': 1.26} 05/11/2024 21:31:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1953, 'learning_rate': 3.1225e-05, 'epoch': 1.26} 05/11/2024 21:31:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.2162, 'learning_rate': 3.1186e-05, 'epoch': 1.26} 05/11/2024 21:31:47 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4100 05/11/2024 21:31:48 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:31:48 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:31:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4100/tokenizer_config.json 05/11/2024 21:31:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4100/special_tokens_map.json 05/11/2024 21:31:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0836, 'learning_rate': 3.1147e-05, 'epoch': 1.26} 05/11/2024 21:32:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1453, 'learning_rate': 3.1108e-05, 'epoch': 1.26} 05/11/2024 21:32:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1972, 'learning_rate': 3.1069e-05, 'epoch': 1.27} 05/11/2024 21:32:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0966, 'learning_rate': 3.1030e-05, 'epoch': 1.27} 05/11/2024 21:32:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1734, 'learning_rate': 3.0991e-05, 'epoch': 1.27} 05/11/2024 21:32:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2381, 'learning_rate': 3.0952e-05, 'epoch': 1.27} 05/11/2024 21:33:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.0485, 'learning_rate': 3.0912e-05, 'epoch': 1.27} 05/11/2024 21:33:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.0804, 'learning_rate': 3.0873e-05, 'epoch': 1.27} 05/11/2024 21:33:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.0904, 'learning_rate': 3.0834e-05, 'epoch': 1.28} 05/11/2024 21:33:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1033, 'learning_rate': 3.0795e-05, 'epoch': 1.28} 05/11/2024 21:33:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0441, 'learning_rate': 
3.0756e-05, 'epoch': 1.28} 05/11/2024 21:33:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2350, 'learning_rate': 3.0717e-05, 'epoch': 1.28} 05/11/2024 21:34:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1928, 'learning_rate': 3.0677e-05, 'epoch': 1.28} 05/11/2024 21:34:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.0306, 'learning_rate': 3.0638e-05, 'epoch': 1.28} 05/11/2024 21:34:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0907, 'learning_rate': 3.0599e-05, 'epoch': 1.28} 05/11/2024 21:34:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1372, 'learning_rate': 3.0560e-05, 'epoch': 1.29} 05/11/2024 21:34:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1397, 'learning_rate': 3.0520e-05, 'epoch': 1.29} 05/11/2024 21:34:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1000, 'learning_rate': 3.0481e-05, 'epoch': 1.29} 05/11/2024 21:35:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1479, 'learning_rate': 3.0442e-05, 'epoch': 1.29} 05/11/2024 21:35:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1148, 'learning_rate': 3.0402e-05, 'epoch': 1.29} 05/11/2024 21:35:13 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4200 05/11/2024 21:35:14 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:35:14 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:35:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4200/tokenizer_config.json 05/11/2024 21:35:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4200/special_tokens_map.json 05/11/2024 21:35:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0205, 'learning_rate': 3.0363e-05, 'epoch': 1.29} 05/11/2024 21:35:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1862, 'learning_rate': 3.0324e-05, 'epoch': 1.30} 05/11/2024 21:35:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0884, 'learning_rate': 3.0284e-05, 'epoch': 1.30} 05/11/2024 21:35:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2023, 'learning_rate': 3.0245e-05, 'epoch': 1.30} 05/11/2024 21:36:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1609, 'learning_rate': 3.0206e-05, 'epoch': 1.30} 05/11/2024 21:36:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0451, 'learning_rate': 3.0166e-05, 'epoch': 1.30} 05/11/2024 21:36:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1886, 'learning_rate': 3.0127e-05, 'epoch': 1.30} 05/11/2024 21:36:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1681, 'learning_rate': 3.0087e-05, 'epoch': 1.30} 05/11/2024 21:36:50 - INFO - llmtuner.extras.callbacks - {'loss': 
1.1769, 'learning_rate': 3.0048e-05, 'epoch': 1.31} 05/11/2024 21:37:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.0251, 'learning_rate': 3.0009e-05, 'epoch': 1.31} 05/11/2024 21:37:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2783, 'learning_rate': 2.9969e-05, 'epoch': 1.31} 05/11/2024 21:37:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.0925, 'learning_rate': 2.9930e-05, 'epoch': 1.31} 05/11/2024 21:37:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1350, 'learning_rate': 2.9890e-05, 'epoch': 1.31} 05/11/2024 21:37:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1819, 'learning_rate': 2.9851e-05, 'epoch': 1.31} 05/11/2024 21:37:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1460, 'learning_rate': 2.9811e-05, 'epoch': 1.32} 05/11/2024 21:38:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.0705, 'learning_rate': 2.9772e-05, 'epoch': 1.32} 05/11/2024 21:38:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1656, 'learning_rate': 2.9732e-05, 'epoch': 1.32} 05/11/2024 21:38:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1809, 'learning_rate': 2.9692e-05, 'epoch': 1.32} 05/11/2024 21:38:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0016, 'learning_rate': 2.9653e-05, 'epoch': 1.32} 05/11/2024 21:38:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2292, 'learning_rate': 2.9613e-05, 'epoch': 1.32} 05/11/2024 21:38:44 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4300 05/11/2024 21:38:45 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:38:45 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:38:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4300/tokenizer_config.json 05/11/2024 21:38:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4300/special_tokens_map.json 05/11/2024 21:38:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1243, 'learning_rate': 2.9574e-05, 'epoch': 1.32} 05/11/2024 21:39:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2089, 'learning_rate': 2.9534e-05, 'epoch': 1.33} 05/11/2024 21:39:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0332, 'learning_rate': 2.9494e-05, 'epoch': 1.33} 05/11/2024 21:39:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1536, 'learning_rate': 2.9455e-05, 'epoch': 1.33} 05/11/2024 21:39:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2825, 'learning_rate': 2.9415e-05, 'epoch': 1.33} 05/11/2024 21:39:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.0036, 'learning_rate': 2.9376e-05, 'epoch': 1.33} 05/11/2024 21:39:58 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.0644, 'learning_rate': 2.9336e-05, 'epoch': 1.33} 05/11/2024 21:40:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0763, 'learning_rate': 2.9296e-05, 'epoch': 1.34} 05/11/2024 21:40:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.2327, 'learning_rate': 2.9257e-05, 'epoch': 1.34} 05/11/2024 21:40:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1097, 'learning_rate': 2.9217e-05, 'epoch': 1.34} 05/11/2024 21:40:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1278, 'learning_rate': 2.9177e-05, 'epoch': 1.34} 05/11/2024 21:40:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0971, 'learning_rate': 2.9137e-05, 'epoch': 1.34} 05/11/2024 21:40:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1223, 'learning_rate': 2.9098e-05, 'epoch': 1.34} 05/11/2024 21:41:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1469, 'learning_rate': 2.9058e-05, 'epoch': 1.34} 05/11/2024 21:41:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1358, 'learning_rate': 2.9018e-05, 'epoch': 1.35} 05/11/2024 21:41:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1122, 'learning_rate': 2.8978e-05, 'epoch': 1.35} 05/11/2024 21:41:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1214, 'learning_rate': 2.8939e-05, 'epoch': 1.35} 05/11/2024 21:41:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2561, 'learning_rate': 2.8899e-05, 'epoch': 1.35} 05/11/2024 21:41:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1250, 'learning_rate': 2.8859e-05, 'epoch': 1.35} 05/11/2024 21:42:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1561, 'learning_rate': 2.8819e-05, 'epoch': 1.35} 05/11/2024 21:42:09 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4400 05/11/2024 21:42:10 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:42:10 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:42:10 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4400/tokenizer_config.json 05/11/2024 21:42:10 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4400/special_tokens_map.json 05/11/2024 21:42:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1290, 'learning_rate': 2.8780e-05, 'epoch': 1.36} 05/11/2024 21:42:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1403, 'learning_rate': 2.8740e-05, 'epoch': 1.36} 05/11/2024 21:42:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0386, 'learning_rate': 2.8700e-05, 'epoch': 1.36} 05/11/2024 21:42:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2134, 'learning_rate': 2.8660e-05, 'epoch': 1.36} 
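The checkpoints land every 100 optimization steps and, at this point in the run, roughly three and a half minutes apart, which gives a quick way to estimate throughput and remaining time from the timestamps alone. A small sketch using two checkpoint times from this section; the 9,750 total steps come from the trainer banner at the start of the run, and the rest is arithmetic rather than anything the log itself reports:

from datetime import datetime

t1 = datetime(2024, 5, 11, 20, 49, 54)  # checkpoint-2900 saved
t2 = datetime(2024, 5, 11, 21, 42, 9)   # checkpoint-4400 saved
steps = 4400 - 2900

sec_per_step = (t2 - t1).total_seconds() / steps   # ~2.09 s per optimization step
remaining = (9750 - 4400) * sec_per_step           # ~3.1 hours of training left at checkpoint-4400
print(f"{sec_per_step:.2f} s/step, ~{remaining / 3600:.1f} h remaining")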
05/11/2024 21:43:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1874, 'learning_rate': 2.8620e-05, 'epoch': 1.36} 05/11/2024 21:43:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1967, 'learning_rate': 2.8580e-05, 'epoch': 1.36} 05/11/2024 21:43:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0966, 'learning_rate': 2.8540e-05, 'epoch': 1.36} 05/11/2024 21:43:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0942, 'learning_rate': 2.8501e-05, 'epoch': 1.37} 05/11/2024 21:43:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.2002, 'learning_rate': 2.8461e-05, 'epoch': 1.37} 05/11/2024 21:43:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0063, 'learning_rate': 2.8421e-05, 'epoch': 1.37} 05/11/2024 21:44:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1806, 'learning_rate': 2.8381e-05, 'epoch': 1.37} 05/11/2024 21:44:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0910, 'learning_rate': 2.8341e-05, 'epoch': 1.37} 05/11/2024 21:44:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1485, 'learning_rate': 2.8301e-05, 'epoch': 1.37} 05/11/2024 21:44:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1090, 'learning_rate': 2.8261e-05, 'epoch': 1.38} 05/11/2024 21:44:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1762, 'learning_rate': 2.8221e-05, 'epoch': 1.38} 05/11/2024 21:44:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0974, 'learning_rate': 2.8181e-05, 'epoch': 1.38} 05/11/2024 21:45:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0408, 'learning_rate': 2.8141e-05, 'epoch': 1.38} 05/11/2024 21:45:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1508, 'learning_rate': 2.8101e-05, 'epoch': 1.38} 05/11/2024 21:45:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1400, 'learning_rate': 2.8061e-05, 'epoch': 1.38} 05/11/2024 21:45:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.2640, 'learning_rate': 2.8021e-05, 'epoch': 1.38} 05/11/2024 21:45:39 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4500 05/11/2024 21:45:40 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:45:40 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:45:40 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4500/tokenizer_config.json 05/11/2024 21:45:40 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4500/special_tokens_map.json 05/11/2024 21:45:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.0759, 'learning_rate': 2.7981e-05, 'epoch': 1.39} 05/11/2024 21:45:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.0819, 'learning_rate': 
2.7941e-05, 'epoch': 1.39} 05/11/2024 21:46:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1767, 'learning_rate': 2.7901e-05, 'epoch': 1.39} 05/11/2024 21:46:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.2153, 'learning_rate': 2.7861e-05, 'epoch': 1.39} 05/11/2024 21:46:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.9841, 'learning_rate': 2.7821e-05, 'epoch': 1.39} 05/11/2024 21:46:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1419, 'learning_rate': 2.7781e-05, 'epoch': 1.39} 05/11/2024 21:46:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0705, 'learning_rate': 2.7741e-05, 'epoch': 1.40} 05/11/2024 21:47:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2047, 'learning_rate': 2.7701e-05, 'epoch': 1.40} 05/11/2024 21:47:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2228, 'learning_rate': 2.7661e-05, 'epoch': 1.40} 05/11/2024 21:47:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2255, 'learning_rate': 2.7621e-05, 'epoch': 1.40} 05/11/2024 21:47:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1794, 'learning_rate': 2.7581e-05, 'epoch': 1.40} 05/11/2024 21:47:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1865, 'learning_rate': 2.7541e-05, 'epoch': 1.40} 05/11/2024 21:47:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0968, 'learning_rate': 2.7501e-05, 'epoch': 1.40} 05/11/2024 21:48:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1228, 'learning_rate': 2.7461e-05, 'epoch': 1.41} 05/11/2024 21:48:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1827, 'learning_rate': 2.7421e-05, 'epoch': 1.41} 05/11/2024 21:48:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1437, 'learning_rate': 2.7381e-05, 'epoch': 1.41} 05/11/2024 21:48:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2372, 'learning_rate': 2.7341e-05, 'epoch': 1.41} 05/11/2024 21:48:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1694, 'learning_rate': 2.7301e-05, 'epoch': 1.41} 05/11/2024 21:48:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2479, 'learning_rate': 2.7260e-05, 'epoch': 1.41} 05/11/2024 21:49:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1552, 'learning_rate': 2.7220e-05, 'epoch': 1.42} 05/11/2024 21:49:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4600 05/11/2024 21:49:07 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:49:07 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:49:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4600/tokenizer_config.json 05/11/2024 21:49:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4600/special_tokens_map.json 05/11/2024 21:49:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1731, 'learning_rate': 2.7180e-05, 'epoch': 1.42} 05/11/2024 21:49:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2213, 'learning_rate': 2.7140e-05, 'epoch': 1.42} 05/11/2024 21:49:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1097, 'learning_rate': 2.7100e-05, 'epoch': 1.42} 05/11/2024 21:49:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.0566, 'learning_rate': 2.7060e-05, 'epoch': 1.42} 05/11/2024 21:49:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1266, 'learning_rate': 2.7020e-05, 'epoch': 1.42} 05/11/2024 21:50:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1205, 'learning_rate': 2.6980e-05, 'epoch': 1.42} 05/11/2024 21:50:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1683, 'learning_rate': 2.6939e-05, 'epoch': 1.43} 05/11/2024 21:50:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1016, 'learning_rate': 2.6899e-05, 'epoch': 1.43} 05/11/2024 21:50:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1832, 'learning_rate': 2.6859e-05, 'epoch': 1.43} 05/11/2024 21:50:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1577, 'learning_rate': 2.6819e-05, 'epoch': 1.43} 05/11/2024 21:51:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2002, 'learning_rate': 2.6779e-05, 'epoch': 1.43} 05/11/2024 21:51:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1888, 'learning_rate': 2.6739e-05, 'epoch': 1.43} 05/11/2024 21:51:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.0284, 'learning_rate': 2.6698e-05, 'epoch': 1.44} 05/11/2024 21:51:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1301, 'learning_rate': 2.6658e-05, 'epoch': 1.44} 05/11/2024 21:51:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1265, 'learning_rate': 2.6618e-05, 'epoch': 1.44} 05/11/2024 21:51:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1718, 'learning_rate': 2.6578e-05, 'epoch': 1.44} 05/11/2024 21:52:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1919, 'learning_rate': 2.6538e-05, 'epoch': 1.44} 05/11/2024 21:52:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1858, 'learning_rate': 2.6497e-05, 'epoch': 1.44} 05/11/2024 21:52:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2185, 'learning_rate': 2.6457e-05, 'epoch': 1.44} 05/11/2024 21:52:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0986, 'learning_rate': 2.6417e-05, 'epoch': 1.45} 05/11/2024 21:52:34 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4700 05/11/2024 21:52:35 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:52:35 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:52:35 
- INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4700/tokenizer_config.json 05/11/2024 21:52:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4700/special_tokens_map.json 05/11/2024 21:52:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2856, 'learning_rate': 2.6377e-05, 'epoch': 1.45} 05/11/2024 21:52:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2691, 'learning_rate': 2.6337e-05, 'epoch': 1.45} 05/11/2024 21:53:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1015, 'learning_rate': 2.6296e-05, 'epoch': 1.45} 05/11/2024 21:53:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1832, 'learning_rate': 2.6256e-05, 'epoch': 1.45} 05/11/2024 21:53:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2232, 'learning_rate': 2.6216e-05, 'epoch': 1.45} 05/11/2024 21:53:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0803, 'learning_rate': 2.6176e-05, 'epoch': 1.46} 05/11/2024 21:53:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1727, 'learning_rate': 2.6135e-05, 'epoch': 1.46} 05/11/2024 21:53:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2126, 'learning_rate': 2.6095e-05, 'epoch': 1.46} 05/11/2024 21:54:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1546, 'learning_rate': 2.6055e-05, 'epoch': 1.46} 05/11/2024 21:54:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0129, 'learning_rate': 2.6015e-05, 'epoch': 1.46} 05/11/2024 21:54:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2372, 'learning_rate': 2.5974e-05, 'epoch': 1.46} 05/11/2024 21:54:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.0657, 'learning_rate': 2.5934e-05, 'epoch': 1.46} 05/11/2024 21:54:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2039, 'learning_rate': 2.5894e-05, 'epoch': 1.47} 05/11/2024 21:54:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1466, 'learning_rate': 2.5854e-05, 'epoch': 1.47} 05/11/2024 21:55:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1439, 'learning_rate': 2.5813e-05, 'epoch': 1.47} 05/11/2024 21:55:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1856, 'learning_rate': 2.5773e-05, 'epoch': 1.47} 05/11/2024 21:55:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1144, 'learning_rate': 2.5733e-05, 'epoch': 1.47} 05/11/2024 21:55:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1321, 'learning_rate': 2.5693e-05, 'epoch': 1.47} 05/11/2024 21:55:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1729, 'learning_rate': 2.5652e-05, 'epoch': 1.48} 05/11/2024 21:55:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0952, 'learning_rate': 2.5612e-05, 'epoch': 1.48} 05/11/2024 21:55:58 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4800 05/11/2024 21:55:59 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:55:59 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:55:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4800/tokenizer_config.json 05/11/2024 21:55:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4800/special_tokens_map.json 05/11/2024 21:56:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0512, 'learning_rate': 2.5572e-05, 'epoch': 1.48} 05/11/2024 21:56:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0594, 'learning_rate': 2.5532e-05, 'epoch': 1.48} 05/11/2024 21:56:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2463, 'learning_rate': 2.5491e-05, 'epoch': 1.48} 05/11/2024 21:56:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1274, 'learning_rate': 2.5451e-05, 'epoch': 1.48} 05/11/2024 21:56:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.0678, 'learning_rate': 2.5411e-05, 'epoch': 1.48} 05/11/2024 21:56:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.2476, 'learning_rate': 2.5371e-05, 'epoch': 1.49} 05/11/2024 21:57:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1183, 'learning_rate': 2.5330e-05, 'epoch': 1.49} 05/11/2024 21:57:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1317, 'learning_rate': 2.5290e-05, 'epoch': 1.49} 05/11/2024 21:57:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2063, 'learning_rate': 2.5250e-05, 'epoch': 1.49} 05/11/2024 21:57:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1008, 'learning_rate': 2.5209e-05, 'epoch': 1.49} 05/11/2024 21:57:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1237, 'learning_rate': 2.5169e-05, 'epoch': 1.49} 05/11/2024 21:58:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2420, 'learning_rate': 2.5129e-05, 'epoch': 1.50} 05/11/2024 21:58:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1619, 'learning_rate': 2.5089e-05, 'epoch': 1.50} 05/11/2024 21:58:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2496, 'learning_rate': 2.5048e-05, 'epoch': 1.50} 05/11/2024 21:58:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1321, 'learning_rate': 2.5008e-05, 'epoch': 1.50} 05/11/2024 21:58:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1335, 'learning_rate': 2.4968e-05, 'epoch': 1.50} 05/11/2024 21:58:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1601, 'learning_rate': 2.4928e-05, 'epoch': 1.50} 05/11/2024 21:59:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1363, 'learning_rate': 2.4887e-05, 'epoch': 1.50} 05/11/2024 21:59:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.2231, 'learning_rate': 2.4847e-05, 'epoch': 1.51} 05/11/2024 21:59:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2626, 'learning_rate': 2.4807e-05, 'epoch': 1.51} 05/11/2024 21:59:25 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4900 05/11/2024 21:59:26 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 21:59:26 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, 
"attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 21:59:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4900/tokenizer_config.json 05/11/2024 21:59:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-4900/special_tokens_map.json 05/11/2024 21:59:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2636, 'learning_rate': 2.4766e-05, 'epoch': 1.51} 05/11/2024 21:59:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2340, 'learning_rate': 2.4726e-05, 'epoch': 1.51} 05/11/2024 21:59:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1924, 'learning_rate': 2.4686e-05, 'epoch': 1.51} 05/11/2024 22:00:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2864, 'learning_rate': 2.4646e-05, 'epoch': 1.51} 05/11/2024 22:00:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1517, 'learning_rate': 2.4605e-05, 'epoch': 1.52} 05/11/2024 22:00:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2202, 'learning_rate': 2.4565e-05, 'epoch': 1.52} 05/11/2024 22:00:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2323, 'learning_rate': 2.4525e-05, 'epoch': 1.52} 05/11/2024 22:00:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.2025, 'learning_rate': 2.4484e-05, 'epoch': 1.52} 05/11/2024 22:01:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1265, 'learning_rate': 2.4444e-05, 'epoch': 1.52} 05/11/2024 22:01:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0759, 'learning_rate': 2.4404e-05, 'epoch': 1.52} 05/11/2024 22:01:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0696, 'learning_rate': 2.4364e-05, 'epoch': 1.52} 05/11/2024 22:01:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0459, 'learning_rate': 2.4323e-05, 'epoch': 1.53} 05/11/2024 22:01:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2534, 'learning_rate': 2.4283e-05, 'epoch': 1.53} 05/11/2024 22:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.2542, 'learning_rate': 2.4243e-05, 'epoch': 1.53} 05/11/2024 22:02:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0870, 'learning_rate': 2.4203e-05, 'epoch': 1.53} 05/11/2024 22:02:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2709, 'learning_rate': 2.4162e-05, 'epoch': 1.53} 05/11/2024 22:02:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0422, 'learning_rate': 2.4122e-05, 'epoch': 1.53} 05/11/2024 22:02:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1019, 'learning_rate': 2.4082e-05, 'epoch': 1.54} 05/11/2024 22:02:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1101, 'learning_rate': 2.4042e-05, 'epoch': 1.54} 05/11/2024 22:02:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1262, 'learning_rate': 2.4001e-05, 'epoch': 1.54} 05/11/2024 22:02:54 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5000 05/11/2024 22:02:55 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:02:55 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:02:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5000/tokenizer_config.json 05/11/2024 22:02:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5000/special_tokens_map.json 05/11/2024 22:03:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2232, 'learning_rate': 2.3961e-05, 'epoch': 1.54} 05/11/2024 22:03:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1251, 'learning_rate': 2.3921e-05, 'epoch': 1.54} 05/11/2024 22:03:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1748, 'learning_rate': 2.3881e-05, 'epoch': 1.54} 05/11/2024 22:03:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1706, 'learning_rate': 2.3840e-05, 'epoch': 1.54} 05/11/2024 22:03:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1706, 'learning_rate': 2.3800e-05, 'epoch': 1.55} 05/11/2024 22:03:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0949, 'learning_rate': 2.3760e-05, 'epoch': 1.55} 05/11/2024 22:04:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1651, 'learning_rate': 2.3720e-05, 'epoch': 1.55} 05/11/2024 22:04:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0752, 'learning_rate': 2.3680e-05, 'epoch': 1.55} 05/11/2024 22:04:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2040, 'learning_rate': 2.3639e-05, 'epoch': 1.55} 05/11/2024 22:04:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2201, 'learning_rate': 2.3599e-05, 'epoch': 1.55} 05/11/2024 22:04:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2381, 'learning_rate': 2.3559e-05, 'epoch': 1.56} 05/11/2024 22:04:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2910, 'learning_rate': 2.3519e-05, 'epoch': 1.56} 05/11/2024 22:05:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1140, 'learning_rate': 2.3478e-05, 'epoch': 1.56} 05/11/2024 22:05:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2755, 'learning_rate': 2.3438e-05, 'epoch': 1.56} 05/11/2024 22:05:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1355, 'learning_rate': 2.3398e-05, 'epoch': 1.56} 05/11/2024 22:05:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1396, 'learning_rate': 2.3358e-05, 'epoch': 1.56} 05/11/2024 22:05:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1287, 'learning_rate': 2.3318e-05, 'epoch': 1.56} 05/11/2024 22:06:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1653, 'learning_rate': 2.3278e-05, 'epoch': 1.57} 05/11/2024 22:06:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1403, 'learning_rate': 2.3237e-05, 'epoch': 1.57} 05/11/2024 22:06:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1604, 'learning_rate': 2.3197e-05, 'epoch': 
1.57} 05/11/2024 22:06:22 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5100 05/11/2024 22:06:22 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:06:22 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:06:22 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5100/tokenizer_config.json 05/11/2024 22:06:22 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5100/special_tokens_map.json 05/11/2024 22:06:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1634, 'learning_rate': 2.3157e-05, 'epoch': 1.57} 05/11/2024 22:06:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.0262, 'learning_rate': 2.3117e-05, 'epoch': 1.57} 05/11/2024 22:06:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1561, 'learning_rate': 2.3077e-05, 'epoch': 1.57} 05/11/2024 22:07:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1065, 'learning_rate': 2.3037e-05, 'epoch': 1.58} 05/11/2024 22:07:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1071, 'learning_rate': 2.2996e-05, 'epoch': 1.58} 05/11/2024 22:07:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1964, 'learning_rate': 2.2956e-05, 'epoch': 1.58} 05/11/2024 22:07:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1061, 'learning_rate': 2.2916e-05, 'epoch': 1.58} 05/11/2024 22:07:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0035, 'learning_rate': 2.2876e-05, 'epoch': 1.58} 05/11/2024 22:07:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.2609, 'learning_rate': 2.2836e-05, 'epoch': 1.58} 05/11/2024 22:08:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.2609, 'learning_rate': 2.2796e-05, 'epoch': 1.58} 05/11/2024 22:08:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1503, 'learning_rate': 2.2756e-05, 'epoch': 1.59} 05/11/2024 22:08:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.0164, 'learning_rate': 2.2715e-05, 'epoch': 1.59} 05/11/2024 22:08:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1180, 'learning_rate': 2.2675e-05, 'epoch': 1.59} 05/11/2024 22:08:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1657, 'learning_rate': 2.2635e-05, 'epoch': 1.59} 05/11/2024 22:09:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1834, 'learning_rate': 2.2595e-05, 'epoch': 1.59} 05/11/2024 22:09:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1520, 'learning_rate': 2.2555e-05, 'epoch': 1.59} 05/11/2024 22:09:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.3024, 'learning_rate': 2.2515e-05, 'epoch': 1.60} 05/11/2024 22:09:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1682, 
'learning_rate': 2.2475e-05, 'epoch': 1.60} 05/11/2024 22:09:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1026, 'learning_rate': 2.2435e-05, 'epoch': 1.60} 05/11/2024 22:09:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1451, 'learning_rate': 2.2395e-05, 'epoch': 1.60} 05/11/2024 22:09:54 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5200 05/11/2024 22:09:55 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:09:55 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:09:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5200/tokenizer_config.json 05/11/2024 22:09:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5200/special_tokens_map.json 05/11/2024 22:10:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9931, 'learning_rate': 2.2355e-05, 'epoch': 1.60} 05/11/2024 22:10:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.2305, 'learning_rate': 2.2315e-05, 'epoch': 1.60} 05/11/2024 22:10:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1902, 'learning_rate': 2.2275e-05, 'epoch': 1.60} 05/11/2024 22:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2040, 'learning_rate': 2.2235e-05, 'epoch': 1.61} 05/11/2024 22:10:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1242, 'learning_rate': 2.2195e-05, 'epoch': 1.61} 05/11/2024 22:10:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0447, 'learning_rate': 2.2155e-05, 'epoch': 1.61} 05/11/2024 22:11:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1645, 'learning_rate': 2.2115e-05, 'epoch': 1.61} 05/11/2024 22:11:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1268, 'learning_rate': 2.2075e-05, 'epoch': 1.61} 05/11/2024 22:11:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1256, 'learning_rate': 2.2035e-05, 'epoch': 1.61} 05/11/2024 22:11:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2046, 'learning_rate': 2.1995e-05, 'epoch': 1.62} 05/11/2024 22:11:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1503, 'learning_rate': 2.1955e-05, 'epoch': 1.62} 05/11/2024 22:11:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0835, 'learning_rate': 2.1915e-05, 'epoch': 1.62} 05/11/2024 22:12:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0705, 'learning_rate': 2.1875e-05, 'epoch': 1.62} 05/11/2024 22:12:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1984, 'learning_rate': 2.1835e-05, 'epoch': 1.62} 05/11/2024 22:12:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1258, 'learning_rate': 2.1795e-05, 'epoch': 1.62} 05/11/2024 22:12:38 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1313, 'learning_rate': 2.1755e-05, 'epoch': 1.62} 05/11/2024 22:12:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1325, 'learning_rate': 2.1715e-05, 'epoch': 1.63} 05/11/2024 22:12:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2157, 'learning_rate': 2.1675e-05, 'epoch': 1.63} 05/11/2024 22:13:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1402, 'learning_rate': 2.1635e-05, 'epoch': 1.63} 05/11/2024 22:13:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1031, 'learning_rate': 2.1595e-05, 'epoch': 1.63} 05/11/2024 22:13:17 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5300 05/11/2024 22:13:18 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:13:18 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:13:18 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5300/tokenizer_config.json 05/11/2024 22:13:18 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5300/special_tokens_map.json 05/11/2024 22:13:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1826, 'learning_rate': 2.1555e-05, 'epoch': 1.63} 05/11/2024 22:13:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1256, 'learning_rate': 2.1515e-05, 'epoch': 1.63} 05/11/2024 22:13:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2348, 'learning_rate': 2.1475e-05, 'epoch': 1.64} 05/11/2024 22:13:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1109, 'learning_rate': 2.1436e-05, 'epoch': 1.64} 05/11/2024 22:14:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0686, 'learning_rate': 2.1396e-05, 'epoch': 1.64} 05/11/2024 22:14:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9758, 'learning_rate': 2.1356e-05, 'epoch': 1.64} 05/11/2024 22:14:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.0558, 'learning_rate': 2.1316e-05, 'epoch': 1.64} 05/11/2024 22:14:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1067, 'learning_rate': 2.1276e-05, 'epoch': 1.64} 05/11/2024 22:14:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.0579, 'learning_rate': 2.1236e-05, 'epoch': 1.64} 05/11/2024 22:14:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.2685, 'learning_rate': 2.1197e-05, 'epoch': 1.65} 05/11/2024 22:15:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1172, 'learning_rate': 2.1157e-05, 'epoch': 1.65} 05/11/2024 22:15:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.0846, 'learning_rate': 2.1117e-05, 'epoch': 1.65} 05/11/2024 22:15:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1903, 'learning_rate': 2.1077e-05, 'epoch': 1.65} 
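
The learning rates printed by llmtuner.extras.callbacks above decay smoothly from the 5e-5 value seen at the start of this run toward zero over the 9,750 reported optimization steps, and the logged values are consistent with a cosine decay schedule. The short sketch below is not part of the log; it is only a plausibility check under that assumed schedule (the function name and the no-warmup assumption are illustrative).

    # Plausibility check (not from the log): reproduce the logged learning rates
    # under an assumed cosine decay with no warmup steps.
    import math

    PEAK_LR = 5.0e-05      # initial learning rate seen in the first log entries
    TOTAL_STEPS = 9_750    # total optimization steps reported by the trainer

    def cosine_lr(step: int, peak: float = PEAK_LR, total: int = TOTAL_STEPS) -> float:
        """Cosine decay from `peak` to 0 over `total` steps (warmup assumed to be 0)."""
        return 0.5 * peak * (1.0 + math.cos(math.pi * step / total))

    # Near checkpoint-5000 the log reports roughly 2.3961e-05 to 2.4001e-05:
    print(f"{cosine_lr(5000):.4e}")   # ~2.3993e-05, in line with the logged values

Under this reading the rate passes roughly 2.5e-05 at the halfway point (step 4,875), which matches the values logged around epoch 1.50, and it will fall off more steeply during the final epoch.
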
05/11/2024 22:15:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0995, 'learning_rate': 2.1037e-05, 'epoch': 1.65} 05/11/2024 22:15:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1283, 'learning_rate': 2.0998e-05, 'epoch': 1.65} 05/11/2024 22:15:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1127, 'learning_rate': 2.0958e-05, 'epoch': 1.66} 05/11/2024 22:16:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0967, 'learning_rate': 2.0918e-05, 'epoch': 1.66} 05/11/2024 22:16:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2248, 'learning_rate': 2.0878e-05, 'epoch': 1.66} 05/11/2024 22:16:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1827, 'learning_rate': 2.0839e-05, 'epoch': 1.66} 05/11/2024 22:16:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0875, 'learning_rate': 2.0799e-05, 'epoch': 1.66} 05/11/2024 22:16:41 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5400 05/11/2024 22:16:42 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:16:42 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:16:42 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5400/tokenizer_config.json 05/11/2024 22:16:42 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5400/special_tokens_map.json 05/11/2024 22:16:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1525, 'learning_rate': 2.0759e-05, 'epoch': 1.66} 05/11/2024 22:17:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1863, 'learning_rate': 2.0720e-05, 'epoch': 1.66} 05/11/2024 22:17:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.8931, 'learning_rate': 2.0680e-05, 'epoch': 1.67} 05/11/2024 22:17:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.9931, 'learning_rate': 2.0640e-05, 'epoch': 1.67} 05/11/2024 22:17:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2278, 'learning_rate': 2.0601e-05, 'epoch': 1.67} 05/11/2024 22:17:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.2236, 'learning_rate': 2.0561e-05, 'epoch': 1.67} 05/11/2024 22:17:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2029, 'learning_rate': 2.0521e-05, 'epoch': 1.67} 05/11/2024 22:18:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1809, 'learning_rate': 2.0482e-05, 'epoch': 1.67} 05/11/2024 22:18:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1373, 'learning_rate': 2.0442e-05, 'epoch': 1.68} 05/11/2024 22:18:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1215, 'learning_rate': 2.0403e-05, 'epoch': 1.68} 05/11/2024 22:18:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.0841, 'learning_rate': 
2.0363e-05, 'epoch': 1.68} 05/11/2024 22:18:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0182, 'learning_rate': 2.0323e-05, 'epoch': 1.68} 05/11/2024 22:18:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1266, 'learning_rate': 2.0284e-05, 'epoch': 1.68} 05/11/2024 22:19:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.0757, 'learning_rate': 2.0244e-05, 'epoch': 1.68} 05/11/2024 22:19:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0888, 'learning_rate': 2.0205e-05, 'epoch': 1.68} 05/11/2024 22:19:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1195, 'learning_rate': 2.0165e-05, 'epoch': 1.69} 05/11/2024 22:19:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2103, 'learning_rate': 2.0126e-05, 'epoch': 1.69} 05/11/2024 22:19:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1606, 'learning_rate': 2.0086e-05, 'epoch': 1.69} 05/11/2024 22:19:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1806, 'learning_rate': 2.0047e-05, 'epoch': 1.69} 05/11/2024 22:20:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2071, 'learning_rate': 2.0007e-05, 'epoch': 1.69} 05/11/2024 22:20:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5500 05/11/2024 22:20:08 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:20:08 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:20:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5500/tokenizer_config.json 05/11/2024 22:20:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5500/special_tokens_map.json 05/11/2024 22:20:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1524, 'learning_rate': 1.9968e-05, 'epoch': 1.69} 05/11/2024 22:20:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1581, 'learning_rate': 1.9928e-05, 'epoch': 1.70} 05/11/2024 22:20:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1191, 'learning_rate': 1.9889e-05, 'epoch': 1.70} 05/11/2024 22:20:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0007, 'learning_rate': 1.9849e-05, 'epoch': 1.70} 05/11/2024 22:21:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1451, 'learning_rate': 1.9810e-05, 'epoch': 1.70} 05/11/2024 22:21:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1493, 'learning_rate': 1.9771e-05, 'epoch': 1.70} 05/11/2024 22:21:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1549, 'learning_rate': 1.9731e-05, 'epoch': 1.70} 05/11/2024 22:21:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2842, 'learning_rate': 1.9692e-05, 'epoch': 1.70} 05/11/2024 22:21:42 - INFO - llmtuner.extras.callbacks - {'loss': 
0.9822, 'learning_rate': 1.9653e-05, 'epoch': 1.71} 05/11/2024 22:21:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1677, 'learning_rate': 1.9613e-05, 'epoch': 1.71} 05/11/2024 22:22:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1473, 'learning_rate': 1.9574e-05, 'epoch': 1.71} 05/11/2024 22:22:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1344, 'learning_rate': 1.9535e-05, 'epoch': 1.71} 05/11/2024 22:22:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1523, 'learning_rate': 1.9495e-05, 'epoch': 1.71} 05/11/2024 22:22:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2694, 'learning_rate': 1.9456e-05, 'epoch': 1.71} 05/11/2024 22:22:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2023, 'learning_rate': 1.9417e-05, 'epoch': 1.72} 05/11/2024 22:22:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1340, 'learning_rate': 1.9378e-05, 'epoch': 1.72} 05/11/2024 22:23:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1758, 'learning_rate': 1.9338e-05, 'epoch': 1.72} 05/11/2024 22:23:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0628, 'learning_rate': 1.9299e-05, 'epoch': 1.72} 05/11/2024 22:23:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.0998, 'learning_rate': 1.9260e-05, 'epoch': 1.72} 05/11/2024 22:23:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1529, 'learning_rate': 1.9221e-05, 'epoch': 1.72} 05/11/2024 22:23:37 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5600 05/11/2024 22:23:38 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:23:38 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:23:38 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5600/tokenizer_config.json 05/11/2024 22:23:38 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5600/special_tokens_map.json 05/11/2024 22:23:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1795, 'learning_rate': 1.9181e-05, 'epoch': 1.72} 05/11/2024 22:23:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1547, 'learning_rate': 1.9142e-05, 'epoch': 1.73} 05/11/2024 22:24:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1300, 'learning_rate': 1.9103e-05, 'epoch': 1.73} 05/11/2024 22:24:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.2422, 'learning_rate': 1.9064e-05, 'epoch': 1.73} 05/11/2024 22:24:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2718, 'learning_rate': 1.9025e-05, 'epoch': 1.73} 05/11/2024 22:24:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.0266, 'learning_rate': 1.8986e-05, 'epoch': 1.73} 05/11/2024 22:24:50 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.0988, 'learning_rate': 1.8947e-05, 'epoch': 1.73} 05/11/2024 22:25:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2839, 'learning_rate': 1.8908e-05, 'epoch': 1.74} 05/11/2024 22:25:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0840, 'learning_rate': 1.8869e-05, 'epoch': 1.74} 05/11/2024 22:25:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1079, 'learning_rate': 1.8830e-05, 'epoch': 1.74} 05/11/2024 22:25:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0559, 'learning_rate': 1.8791e-05, 'epoch': 1.74} 05/11/2024 22:25:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1889, 'learning_rate': 1.8752e-05, 'epoch': 1.74} 05/11/2024 22:25:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1461, 'learning_rate': 1.8713e-05, 'epoch': 1.74} 05/11/2024 22:26:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1877, 'learning_rate': 1.8674e-05, 'epoch': 1.74} 05/11/2024 22:26:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2214, 'learning_rate': 1.8635e-05, 'epoch': 1.75} 05/11/2024 22:26:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2095, 'learning_rate': 1.8596e-05, 'epoch': 1.75} 05/11/2024 22:26:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0836, 'learning_rate': 1.8557e-05, 'epoch': 1.75} 05/11/2024 22:26:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1866, 'learning_rate': 1.8518e-05, 'epoch': 1.75} 05/11/2024 22:26:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2131, 'learning_rate': 1.8479e-05, 'epoch': 1.75} 05/11/2024 22:27:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.2623, 'learning_rate': 1.8440e-05, 'epoch': 1.75} 05/11/2024 22:27:06 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5700 05/11/2024 22:27:06 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:27:06 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:27:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5700/tokenizer_config.json 05/11/2024 22:27:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5700/special_tokens_map.json 05/11/2024 22:27:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.2044, 'learning_rate': 1.8401e-05, 'epoch': 1.76} 05/11/2024 22:27:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1520, 'learning_rate': 1.8362e-05, 'epoch': 1.76} 05/11/2024 22:27:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2001, 'learning_rate': 1.8324e-05, 'epoch': 1.76} 05/11/2024 22:27:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2019, 'learning_rate': 1.8285e-05, 'epoch': 1.76} 
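
Each "Saving model checkpoint to saves/LLaMA3-8B/lora/..." event above writes a checkpoint directory; the tokenizer files are listed explicitly, and the LoRA adapter weights are saved alongside them. As a hypothetical usage sketch, not something the log itself runs, one of these checkpoints (e.g. checkpoint-5700) could later be attached to the base model for inference with peft, assuming peft and transformers are installed and the directory contains the adapter files the trainer writes:

    # Hypothetical inference sketch (not from the log): load the LoRA adapter
    # saved at checkpoint-5700 on top of the base model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
    ADAPTER = "saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5700"

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base_model, ADAPTER)  # applies the LoRA weights
    model.eval()

    # Plain completion call for brevity; an instruct-style prompt would normally
    # go through the chat template first.
    prompt = "Give three tips for staying healthy."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

In a Hugging Face Trainer setup the same directory can typically also be passed as resume_from_checkpoint to continue this run from step 5,700 instead of restarting it.
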
05/11/2024 22:27:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1155, 'learning_rate': 1.8246e-05, 'epoch': 1.76} 05/11/2024 22:28:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2080, 'learning_rate': 1.8207e-05, 'epoch': 1.76} 05/11/2024 22:28:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.2334, 'learning_rate': 1.8168e-05, 'epoch': 1.76} 05/11/2024 22:28:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0867, 'learning_rate': 1.8130e-05, 'epoch': 1.77} 05/11/2024 22:28:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.2453, 'learning_rate': 1.8091e-05, 'epoch': 1.77} 05/11/2024 22:28:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1704, 'learning_rate': 1.8052e-05, 'epoch': 1.77} 05/11/2024 22:29:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2105, 'learning_rate': 1.8014e-05, 'epoch': 1.77} 05/11/2024 22:29:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2076, 'learning_rate': 1.7975e-05, 'epoch': 1.77} 05/11/2024 22:29:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1031, 'learning_rate': 1.7936e-05, 'epoch': 1.77} 05/11/2024 22:29:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1328, 'learning_rate': 1.7898e-05, 'epoch': 1.78} 05/11/2024 22:29:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1613, 'learning_rate': 1.7859e-05, 'epoch': 1.78} 05/11/2024 22:29:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1835, 'learning_rate': 1.7820e-05, 'epoch': 1.78} 05/11/2024 22:30:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1883, 'learning_rate': 1.7782e-05, 'epoch': 1.78} 05/11/2024 22:30:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0919, 'learning_rate': 1.7743e-05, 'epoch': 1.78} 05/11/2024 22:30:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2349, 'learning_rate': 1.7705e-05, 'epoch': 1.78} 05/11/2024 22:30:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0932, 'learning_rate': 1.7666e-05, 'epoch': 1.78} 05/11/2024 22:30:35 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5800 05/11/2024 22:30:36 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:30:36 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:30:36 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5800/tokenizer_config.json 05/11/2024 22:30:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5800/special_tokens_map.json 05/11/2024 22:30:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1284, 'learning_rate': 1.7628e-05, 'epoch': 1.79} 05/11/2024 22:30:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1492, 'learning_rate': 
1.7589e-05, 'epoch': 1.79} 05/11/2024 22:31:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1348, 'learning_rate': 1.7551e-05, 'epoch': 1.79} 05/11/2024 22:31:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1805, 'learning_rate': 1.7512e-05, 'epoch': 1.79} 05/11/2024 22:31:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2076, 'learning_rate': 1.7474e-05, 'epoch': 1.79} 05/11/2024 22:31:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1641, 'learning_rate': 1.7436e-05, 'epoch': 1.79} 05/11/2024 22:31:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.0551, 'learning_rate': 1.7397e-05, 'epoch': 1.80} 05/11/2024 22:32:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2383, 'learning_rate': 1.7359e-05, 'epoch': 1.80} 05/11/2024 22:32:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1292, 'learning_rate': 1.7321e-05, 'epoch': 1.80} 05/11/2024 22:32:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1165, 'learning_rate': 1.7282e-05, 'epoch': 1.80} 05/11/2024 22:32:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1018, 'learning_rate': 1.7244e-05, 'epoch': 1.80} 05/11/2024 22:32:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2132, 'learning_rate': 1.7206e-05, 'epoch': 1.80} 05/11/2024 22:32:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1987, 'learning_rate': 1.7167e-05, 'epoch': 1.80} 05/11/2024 22:33:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1942, 'learning_rate': 1.7129e-05, 'epoch': 1.81} 05/11/2024 22:33:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2246, 'learning_rate': 1.7091e-05, 'epoch': 1.81} 05/11/2024 22:33:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1942, 'learning_rate': 1.7053e-05, 'epoch': 1.81} 05/11/2024 22:33:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1698, 'learning_rate': 1.7015e-05, 'epoch': 1.81} 05/11/2024 22:33:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1950, 'learning_rate': 1.6976e-05, 'epoch': 1.81} 05/11/2024 22:33:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2909, 'learning_rate': 1.6938e-05, 'epoch': 1.81} 05/11/2024 22:34:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1238, 'learning_rate': 1.6900e-05, 'epoch': 1.82} 05/11/2024 22:34:06 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5900 05/11/2024 22:34:06 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:34:06 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:34:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5900/tokenizer_config.json 05/11/2024 22:34:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-5900/special_tokens_map.json 05/11/2024 22:34:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2359, 'learning_rate': 1.6862e-05, 'epoch': 1.82} 05/11/2024 22:34:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1205, 'learning_rate': 1.6824e-05, 'epoch': 1.82} 05/11/2024 22:34:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1741, 'learning_rate': 1.6786e-05, 'epoch': 1.82} 05/11/2024 22:34:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1854, 'learning_rate': 1.6756e-05, 'epoch': 1.82} 05/11/2024 22:34:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1473, 'learning_rate': 1.6718e-05, 'epoch': 1.82} 05/11/2024 22:35:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1192, 'learning_rate': 1.6680e-05, 'epoch': 1.82} 05/11/2024 22:35:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1244, 'learning_rate': 1.6642e-05, 'epoch': 1.83} 05/11/2024 22:35:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.0861, 'learning_rate': 1.6604e-05, 'epoch': 1.83} 05/11/2024 22:35:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1462, 'learning_rate': 1.6566e-05, 'epoch': 1.83} 05/11/2024 22:35:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0422, 'learning_rate': 1.6528e-05, 'epoch': 1.83} 05/11/2024 22:36:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1840, 'learning_rate': 1.6490e-05, 'epoch': 1.83} 05/11/2024 22:36:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1303, 'learning_rate': 1.6452e-05, 'epoch': 1.83} 05/11/2024 22:36:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0967, 'learning_rate': 1.6414e-05, 'epoch': 1.84} 05/11/2024 22:36:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1405, 'learning_rate': 1.6376e-05, 'epoch': 1.84} 05/11/2024 22:36:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.0661, 'learning_rate': 1.6339e-05, 'epoch': 1.84} 05/11/2024 22:36:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1432, 'learning_rate': 1.6301e-05, 'epoch': 1.84} 05/11/2024 22:37:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1303, 'learning_rate': 1.6263e-05, 'epoch': 1.84} 05/11/2024 22:37:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1770, 'learning_rate': 1.6225e-05, 'epoch': 1.84} 05/11/2024 22:37:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1667, 'learning_rate': 1.6188e-05, 'epoch': 1.84} 05/11/2024 22:37:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0883, 'learning_rate': 1.6150e-05, 'epoch': 1.85} 05/11/2024 22:37:35 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6000 05/11/2024 22:37:36 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:37:36 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:37:36 
- INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6000/tokenizer_config.json 05/11/2024 22:37:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6000/special_tokens_map.json 05/11/2024 22:37:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1184, 'learning_rate': 1.6112e-05, 'epoch': 1.85} 05/11/2024 22:37:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2097, 'learning_rate': 1.6075e-05, 'epoch': 1.85} 05/11/2024 22:38:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1738, 'learning_rate': 1.6037e-05, 'epoch': 1.85} 05/11/2024 22:38:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1160, 'learning_rate': 1.5999e-05, 'epoch': 1.85} 05/11/2024 22:38:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1432, 'learning_rate': 1.5962e-05, 'epoch': 1.85} 05/11/2024 22:38:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1429, 'learning_rate': 1.5924e-05, 'epoch': 1.86} 05/11/2024 22:38:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1284, 'learning_rate': 1.5887e-05, 'epoch': 1.86} 05/11/2024 22:38:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1124, 'learning_rate': 1.5849e-05, 'epoch': 1.86} 05/11/2024 22:39:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1730, 'learning_rate': 1.5812e-05, 'epoch': 1.86} 05/11/2024 22:39:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.3157, 'learning_rate': 1.5774e-05, 'epoch': 1.86} 05/11/2024 22:39:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0986, 'learning_rate': 1.5737e-05, 'epoch': 1.86} 05/11/2024 22:39:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2179, 'learning_rate': 1.5700e-05, 'epoch': 1.86} 05/11/2024 22:39:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0728, 'learning_rate': 1.5662e-05, 'epoch': 1.87} 05/11/2024 22:40:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1914, 'learning_rate': 1.5625e-05, 'epoch': 1.87} 05/11/2024 22:40:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2429, 'learning_rate': 1.5588e-05, 'epoch': 1.87} 05/11/2024 22:40:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0742, 'learning_rate': 1.5550e-05, 'epoch': 1.87} 05/11/2024 22:40:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1999, 'learning_rate': 1.5513e-05, 'epoch': 1.87} 05/11/2024 22:40:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1117, 'learning_rate': 1.5476e-05, 'epoch': 1.87} 05/11/2024 22:40:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0828, 'learning_rate': 1.5438e-05, 'epoch': 1.88} 05/11/2024 22:41:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0189, 'learning_rate': 1.5401e-05, 'epoch': 1.88} 05/11/2024 22:41:04 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6100 05/11/2024 22:41:05 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:41:05 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:41:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6100/tokenizer_config.json 05/11/2024 22:41:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6100/special_tokens_map.json 05/11/2024 22:41:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0945, 'learning_rate': 1.5364e-05, 'epoch': 1.88} 05/11/2024 22:41:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2050, 'learning_rate': 1.5327e-05, 'epoch': 1.88} 05/11/2024 22:41:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1512, 'learning_rate': 1.5290e-05, 'epoch': 1.88} 05/11/2024 22:41:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1126, 'learning_rate': 1.5253e-05, 'epoch': 1.88} 05/11/2024 22:41:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1858, 'learning_rate': 1.5216e-05, 'epoch': 1.88} 05/11/2024 22:42:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.0117, 'learning_rate': 1.5179e-05, 'epoch': 1.89} 05/11/2024 22:42:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1256, 'learning_rate': 1.5142e-05, 'epoch': 1.89} 05/11/2024 22:42:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0737, 'learning_rate': 1.5105e-05, 'epoch': 1.89} 05/11/2024 22:42:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1341, 'learning_rate': 1.5068e-05, 'epoch': 1.89} 05/11/2024 22:42:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0732, 'learning_rate': 1.5031e-05, 'epoch': 1.89} 05/11/2024 22:42:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2290, 'learning_rate': 1.4994e-05, 'epoch': 1.89} 05/11/2024 22:43:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1160, 'learning_rate': 1.4957e-05, 'epoch': 1.90} 05/11/2024 22:43:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1457, 'learning_rate': 1.4920e-05, 'epoch': 1.90} 05/11/2024 22:43:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1197, 'learning_rate': 1.4883e-05, 'epoch': 1.90} 05/11/2024 22:43:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1429, 'learning_rate': 1.4846e-05, 'epoch': 1.90} 05/11/2024 22:43:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1013, 'learning_rate': 1.4810e-05, 'epoch': 1.90} 05/11/2024 22:43:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0995, 'learning_rate': 1.4773e-05, 'epoch': 1.90} 05/11/2024 22:44:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.0569, 'learning_rate': 1.4736e-05, 'epoch': 1.90} 05/11/2024 22:44:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1816, 'learning_rate': 1.4699e-05, 'epoch': 1.91} 05/11/2024 22:44:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1818, 'learning_rate': 1.4663e-05, 'epoch': 1.91} 05/11/2024 22:44:29 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6200 05/11/2024 22:44:30 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:44:30 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, 
"attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:44:30 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6200/tokenizer_config.json 05/11/2024 22:44:30 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6200/special_tokens_map.json 05/11/2024 22:44:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.3359, 'learning_rate': 1.4626e-05, 'epoch': 1.91} 05/11/2024 22:44:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1615, 'learning_rate': 1.4589e-05, 'epoch': 1.91} 05/11/2024 22:45:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1065, 'learning_rate': 1.4553e-05, 'epoch': 1.91} 05/11/2024 22:45:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0822, 'learning_rate': 1.4516e-05, 'epoch': 1.91} 05/11/2024 22:45:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1871, 'learning_rate': 1.4480e-05, 'epoch': 1.92} 05/11/2024 22:45:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2001, 'learning_rate': 1.4443e-05, 'epoch': 1.92} 05/11/2024 22:45:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0825, 'learning_rate': 1.4407e-05, 'epoch': 1.92} 05/11/2024 22:45:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2244, 'learning_rate': 1.4370e-05, 'epoch': 1.92} 05/11/2024 22:46:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.2298, 'learning_rate': 1.4334e-05, 'epoch': 1.92} 05/11/2024 22:46:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2057, 'learning_rate': 1.4297e-05, 'epoch': 1.92} 05/11/2024 22:46:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1947, 'learning_rate': 1.4261e-05, 'epoch': 1.92} 05/11/2024 22:46:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1204, 'learning_rate': 1.4225e-05, 'epoch': 1.93} 05/11/2024 22:46:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2412, 'learning_rate': 1.4188e-05, 'epoch': 1.93} 05/11/2024 22:46:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2189, 'learning_rate': 1.4152e-05, 'epoch': 1.93} 05/11/2024 22:47:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2044, 'learning_rate': 1.4116e-05, 'epoch': 1.93} 05/11/2024 22:47:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1870, 'learning_rate': 1.4079e-05, 'epoch': 1.93} 05/11/2024 22:47:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1532, 'learning_rate': 1.4043e-05, 'epoch': 1.93} 05/11/2024 22:47:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1056, 'learning_rate': 1.4007e-05, 'epoch': 1.94} 05/11/2024 22:47:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1568, 'learning_rate': 1.3971e-05, 'epoch': 1.94} 05/11/2024 22:47:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1126, 'learning_rate': 1.3935e-05, 'epoch': 1.94} 05/11/2024 22:47:59 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6300 05/11/2024 22:48:00 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:48:00 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:48:00 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6300/tokenizer_config.json 05/11/2024 22:48:00 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6300/special_tokens_map.json 05/11/2024 22:48:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0468, 'learning_rate': 1.3899e-05, 'epoch': 1.94} 05/11/2024 22:48:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1714, 'learning_rate': 1.3862e-05, 'epoch': 1.94} 05/11/2024 22:48:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1858, 'learning_rate': 1.3826e-05, 'epoch': 1.94} 05/11/2024 22:48:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1779, 'learning_rate': 1.3790e-05, 'epoch': 1.94} 05/11/2024 22:48:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1468, 'learning_rate': 1.3754e-05, 'epoch': 1.95} 05/11/2024 22:48:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1402, 'learning_rate': 1.3718e-05, 'epoch': 1.95} 05/11/2024 22:49:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.0989, 'learning_rate': 1.3683e-05, 'epoch': 1.95} 05/11/2024 22:49:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1700, 'learning_rate': 1.3647e-05, 'epoch': 1.95} 05/11/2024 22:49:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1466, 'learning_rate': 1.3611e-05, 'epoch': 1.95} 05/11/2024 22:49:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2066, 'learning_rate': 1.3575e-05, 'epoch': 1.95} 05/11/2024 22:49:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0777, 'learning_rate': 1.3539e-05, 'epoch': 1.96} 05/11/2024 22:50:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2807, 'learning_rate': 1.3503e-05, 'epoch': 1.96} 05/11/2024 22:50:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1896, 'learning_rate': 1.3468e-05, 'epoch': 1.96} 05/11/2024 22:50:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1885, 'learning_rate': 1.3432e-05, 'epoch': 1.96} 05/11/2024 22:50:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1905, 'learning_rate': 1.3396e-05, 'epoch': 1.96} 05/11/2024 22:50:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1287, 'learning_rate': 1.3361e-05, 'epoch': 1.96} 05/11/2024 22:50:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.2626, 'learning_rate': 1.3325e-05, 'epoch': 1.96} 05/11/2024 22:51:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0611, 'learning_rate': 1.3289e-05, 'epoch': 1.97} 05/11/2024 22:51:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1448, 'learning_rate': 1.3254e-05, 'epoch': 1.97} 05/11/2024 22:51:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.0762, 'learning_rate': 1.3218e-05, 'epoch': 
1.97} 05/11/2024 22:51:27 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6400 05/11/2024 22:51:27 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:51:27 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:51:27 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6400/tokenizer_config.json 05/11/2024 22:51:27 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6400/special_tokens_map.json 05/11/2024 22:51:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.0773, 'learning_rate': 1.3183e-05, 'epoch': 1.97} 05/11/2024 22:51:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0408, 'learning_rate': 1.3147e-05, 'epoch': 1.97} 05/11/2024 22:51:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1046, 'learning_rate': 1.3112e-05, 'epoch': 1.97} 05/11/2024 22:52:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1332, 'learning_rate': 1.3076e-05, 'epoch': 1.98} 05/11/2024 22:52:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1041, 'learning_rate': 1.3041e-05, 'epoch': 1.98} 05/11/2024 22:52:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.3129, 'learning_rate': 1.3006e-05, 'epoch': 1.98} 05/11/2024 22:52:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2037, 'learning_rate': 1.2970e-05, 'epoch': 1.98} 05/11/2024 22:52:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1983, 'learning_rate': 1.2935e-05, 'epoch': 1.98} 05/11/2024 22:53:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1271, 'learning_rate': 1.2900e-05, 'epoch': 1.98} 05/11/2024 22:53:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1675, 'learning_rate': 1.2864e-05, 'epoch': 1.98} 05/11/2024 22:53:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.0849, 'learning_rate': 1.2829e-05, 'epoch': 1.99} 05/11/2024 22:53:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0848, 'learning_rate': 1.2794e-05, 'epoch': 1.99} 05/11/2024 22:53:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1289, 'learning_rate': 1.2759e-05, 'epoch': 1.99} 05/11/2024 22:53:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1198, 'learning_rate': 1.2724e-05, 'epoch': 1.99} 05/11/2024 22:54:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1318, 'learning_rate': 1.2689e-05, 'epoch': 1.99} 05/11/2024 22:54:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.2472, 'learning_rate': 1.2654e-05, 'epoch': 1.99} 05/11/2024 22:54:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2089, 'learning_rate': 1.2619e-05, 'epoch': 2.00} 05/11/2024 22:54:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2677, 
'learning_rate': 1.2584e-05, 'epoch': 2.00} 05/11/2024 22:54:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2140, 'learning_rate': 1.2549e-05, 'epoch': 2.00} 05/11/2024 22:55:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1746, 'learning_rate': 1.2514e-05, 'epoch': 2.00} 05/11/2024 22:55:01 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6500 05/11/2024 22:55:01 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:55:01 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:55:01 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6500/tokenizer_config.json 05/11/2024 22:55:01 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6500/special_tokens_map.json 05/11/2024 22:55:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0032, 'learning_rate': 1.2479e-05, 'epoch': 2.00} 05/11/2024 22:55:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.0929, 'learning_rate': 1.2444e-05, 'epoch': 2.00} 05/11/2024 22:55:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1196, 'learning_rate': 1.2409e-05, 'epoch': 2.00} 05/11/2024 22:55:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1062, 'learning_rate': 1.2375e-05, 'epoch': 2.01} 05/11/2024 22:55:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1301, 'learning_rate': 1.2340e-05, 'epoch': 2.01} 05/11/2024 22:56:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2008, 'learning_rate': 1.2305e-05, 'epoch': 2.01} 05/11/2024 22:56:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1736, 'learning_rate': 1.2270e-05, 'epoch': 2.01} 05/11/2024 22:56:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1997, 'learning_rate': 1.2236e-05, 'epoch': 2.01} 05/11/2024 22:56:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0245, 'learning_rate': 1.2201e-05, 'epoch': 2.01} 05/11/2024 22:56:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.0753, 'learning_rate': 1.2167e-05, 'epoch': 2.02} 05/11/2024 22:56:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1651, 'learning_rate': 1.2132e-05, 'epoch': 2.02} 05/11/2024 22:57:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1217, 'learning_rate': 1.2098e-05, 'epoch': 2.02} 05/11/2024 22:57:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1589, 'learning_rate': 1.2063e-05, 'epoch': 2.02} 05/11/2024 22:57:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1789, 'learning_rate': 1.2029e-05, 'epoch': 2.02} 05/11/2024 22:57:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2168, 'learning_rate': 1.1994e-05, 'epoch': 2.02} 05/11/2024 22:57:47 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.0566, 'learning_rate': 1.1960e-05, 'epoch': 2.02} 05/11/2024 22:57:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0522, 'learning_rate': 1.1926e-05, 'epoch': 2.03} 05/11/2024 22:58:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.0596, 'learning_rate': 1.1891e-05, 'epoch': 2.03} 05/11/2024 22:58:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.2102, 'learning_rate': 1.1857e-05, 'epoch': 2.03} 05/11/2024 22:58:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1225, 'learning_rate': 1.1823e-05, 'epoch': 2.03} 05/11/2024 22:58:28 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6600 05/11/2024 22:58:29 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 22:58:29 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 22:58:29 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6600/tokenizer_config.json 05/11/2024 22:58:29 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6600/special_tokens_map.json 05/11/2024 22:58:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0554, 'learning_rate': 1.1788e-05, 'epoch': 2.03} 05/11/2024 22:58:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1563, 'learning_rate': 1.1754e-05, 'epoch': 2.03} 05/11/2024 22:59:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2274, 'learning_rate': 1.1720e-05, 'epoch': 2.04} 05/11/2024 22:59:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1378, 'learning_rate': 1.1686e-05, 'epoch': 2.04} 05/11/2024 22:59:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1663, 'learning_rate': 1.1652e-05, 'epoch': 2.04} 05/11/2024 22:59:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1084, 'learning_rate': 1.1618e-05, 'epoch': 2.04} 05/11/2024 22:59:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.2261, 'learning_rate': 1.1584e-05, 'epoch': 2.04} 05/11/2024 22:59:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0844, 'learning_rate': 1.1550e-05, 'epoch': 2.04} 05/11/2024 23:00:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2545, 'learning_rate': 1.1516e-05, 'epoch': 2.04} 05/11/2024 23:00:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1272, 'learning_rate': 1.1482e-05, 'epoch': 2.05} 05/11/2024 23:00:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2560, 'learning_rate': 1.1448e-05, 'epoch': 2.05} 05/11/2024 23:00:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1881, 'learning_rate': 1.1414e-05, 'epoch': 2.05} 05/11/2024 23:00:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1412, 'learning_rate': 1.1381e-05, 'epoch': 2.05} 
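
Note on the learning-rate values in the callback lines above: they appear to follow a plain cosine decay from the initial 5.0e-05 toward zero over the 9,750 optimization steps, with no warmup. This is an inference from the logged numbers, not something the log states explicitly; a minimal check under that assumption (step 6500 corresponds to checkpoint-6500 above):

import math

LR_MAX = 5.0e-5        # initial learning rate reported at the start of the run
TOTAL_STEPS = 9_750    # total optimization steps reported by the trainer

def cosine_lr(step: int) -> float:
    # Plain cosine decay from LR_MAX to 0, assuming zero warmup steps.
    return 0.5 * LR_MAX * (1.0 + math.cos(math.pi * step / TOTAL_STEPS))

print(f"{cosine_lr(6500):.4e}")  # 1.2500e-05, close to the 1.2514e-05 / 1.2479e-05 logged around checkpoint-6500
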
05/11/2024 23:00:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.0200, 'learning_rate': 1.1347e-05, 'epoch': 2.05} 05/11/2024 23:01:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.0248, 'learning_rate': 1.1313e-05, 'epoch': 2.05} 05/11/2024 23:01:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1300, 'learning_rate': 1.1279e-05, 'epoch': 2.06} 05/11/2024 23:01:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1210, 'learning_rate': 1.1246e-05, 'epoch': 2.06} 05/11/2024 23:01:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0533, 'learning_rate': 1.1212e-05, 'epoch': 2.06} 05/11/2024 23:01:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1413, 'learning_rate': 1.1179e-05, 'epoch': 2.06} 05/11/2024 23:02:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2047, 'learning_rate': 1.1145e-05, 'epoch': 2.06} 05/11/2024 23:02:02 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6700 05/11/2024 23:02:03 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:02:03 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:02:03 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6700/tokenizer_config.json 05/11/2024 23:02:03 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6700/special_tokens_map.json 05/11/2024 23:02:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1422, 'learning_rate': 1.1112e-05, 'epoch': 2.06} 05/11/2024 23:02:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1967, 'learning_rate': 1.1078e-05, 'epoch': 2.06} 05/11/2024 23:02:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1861, 'learning_rate': 1.1045e-05, 'epoch': 2.07} 05/11/2024 23:02:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1204, 'learning_rate': 1.1011e-05, 'epoch': 2.07} 05/11/2024 23:02:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1936, 'learning_rate': 1.0978e-05, 'epoch': 2.07} 05/11/2024 23:03:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1004, 'learning_rate': 1.0945e-05, 'epoch': 2.07} 05/11/2024 23:03:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1286, 'learning_rate': 1.0911e-05, 'epoch': 2.07} 05/11/2024 23:03:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1727, 'learning_rate': 1.0878e-05, 'epoch': 2.07} 05/11/2024 23:03:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1221, 'learning_rate': 1.0845e-05, 'epoch': 2.08} 05/11/2024 23:03:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1319, 'learning_rate': 1.0812e-05, 'epoch': 2.08} 05/11/2024 23:03:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1622, 'learning_rate': 
1.0778e-05, 'epoch': 2.08} 05/11/2024 23:04:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2318, 'learning_rate': 1.0745e-05, 'epoch': 2.08} 05/11/2024 23:04:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9846, 'learning_rate': 1.0712e-05, 'epoch': 2.08} 05/11/2024 23:04:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1356, 'learning_rate': 1.0679e-05, 'epoch': 2.08} 05/11/2024 23:04:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0800, 'learning_rate': 1.0646e-05, 'epoch': 2.08} 05/11/2024 23:04:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1235, 'learning_rate': 1.0613e-05, 'epoch': 2.09} 05/11/2024 23:04:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1837, 'learning_rate': 1.0580e-05, 'epoch': 2.09} 05/11/2024 23:05:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2486, 'learning_rate': 1.0548e-05, 'epoch': 2.09} 05/11/2024 23:05:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2542, 'learning_rate': 1.0515e-05, 'epoch': 2.09} 05/11/2024 23:05:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1346, 'learning_rate': 1.0482e-05, 'epoch': 2.09} 05/11/2024 23:05:27 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6800 05/11/2024 23:05:28 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:05:28 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:05:28 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6800/tokenizer_config.json 05/11/2024 23:05:28 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6800/special_tokens_map.json 05/11/2024 23:05:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.0802, 'learning_rate': 1.0449e-05, 'epoch': 2.09} 05/11/2024 23:05:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1316, 'learning_rate': 1.0416e-05, 'epoch': 2.10} 05/11/2024 23:05:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1720, 'learning_rate': 1.0384e-05, 'epoch': 2.10} 05/11/2024 23:06:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1978, 'learning_rate': 1.0351e-05, 'epoch': 2.10} 05/11/2024 23:06:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1058, 'learning_rate': 1.0318e-05, 'epoch': 2.10} 05/11/2024 23:06:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0153, 'learning_rate': 1.0286e-05, 'epoch': 2.10} 05/11/2024 23:06:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1409, 'learning_rate': 1.0253e-05, 'epoch': 2.10} 05/11/2024 23:06:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.3659, 'learning_rate': 1.0221e-05, 'epoch': 2.10} 05/11/2024 23:07:02 - INFO - llmtuner.extras.callbacks - {'loss': 
1.0986, 'learning_rate': 1.0188e-05, 'epoch': 2.11} 05/11/2024 23:07:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1785, 'learning_rate': 1.0156e-05, 'epoch': 2.11} 05/11/2024 23:07:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1677, 'learning_rate': 1.0123e-05, 'epoch': 2.11} 05/11/2024 23:07:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0557, 'learning_rate': 1.0091e-05, 'epoch': 2.11} 05/11/2024 23:07:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.9958, 'learning_rate': 1.0059e-05, 'epoch': 2.11} 05/11/2024 23:07:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0089, 'learning_rate': 1.0027e-05, 'epoch': 2.11} 05/11/2024 23:08:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0826, 'learning_rate': 9.9943e-06, 'epoch': 2.12} 05/11/2024 23:08:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0821, 'learning_rate': 9.9621e-06, 'epoch': 2.12} 05/11/2024 23:08:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0976, 'learning_rate': 9.9300e-06, 'epoch': 2.12} 05/11/2024 23:08:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.2036, 'learning_rate': 9.8979e-06, 'epoch': 2.12} 05/11/2024 23:08:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1875, 'learning_rate': 9.8658e-06, 'epoch': 2.12} 05/11/2024 23:08:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1701, 'learning_rate': 9.8337e-06, 'epoch': 2.12} 05/11/2024 23:08:53 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6900 05/11/2024 23:08:54 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:08:54 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:08:54 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6900/tokenizer_config.json 05/11/2024 23:08:54 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-6900/special_tokens_map.json 05/11/2024 23:09:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1895, 'learning_rate': 9.8017e-06, 'epoch': 2.12} 05/11/2024 23:09:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.2195, 'learning_rate': 9.7698e-06, 'epoch': 2.13} 05/11/2024 23:09:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2749, 'learning_rate': 9.7379e-06, 'epoch': 2.13} 05/11/2024 23:09:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1431, 'learning_rate': 9.7060e-06, 'epoch': 2.13} 05/11/2024 23:09:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1656, 'learning_rate': 9.6741e-06, 'epoch': 2.13} 05/11/2024 23:09:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1445, 'learning_rate': 9.6423e-06, 'epoch': 2.13} 05/11/2024 23:10:06 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1231, 'learning_rate': 9.6106e-06, 'epoch': 2.13} 05/11/2024 23:10:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0971, 'learning_rate': 9.5789e-06, 'epoch': 2.14} 05/11/2024 23:10:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2674, 'learning_rate': 9.5472e-06, 'epoch': 2.14} 05/11/2024 23:10:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1261, 'learning_rate': 9.5155e-06, 'epoch': 2.14} 05/11/2024 23:10:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.9968, 'learning_rate': 9.4839e-06, 'epoch': 2.14} 05/11/2024 23:10:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1008, 'learning_rate': 9.4524e-06, 'epoch': 2.14} 05/11/2024 23:11:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1100, 'learning_rate': 9.4209e-06, 'epoch': 2.14} 05/11/2024 23:11:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.2466, 'learning_rate': 9.3894e-06, 'epoch': 2.14} 05/11/2024 23:11:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2212, 'learning_rate': 9.3579e-06, 'epoch': 2.15} 05/11/2024 23:11:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1319, 'learning_rate': 9.3265e-06, 'epoch': 2.15} 05/11/2024 23:11:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2019, 'learning_rate': 9.2952e-06, 'epoch': 2.15} 05/11/2024 23:11:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2154, 'learning_rate': 9.2639e-06, 'epoch': 2.15} 05/11/2024 23:12:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1205, 'learning_rate': 9.2326e-06, 'epoch': 2.15} 05/11/2024 23:12:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0795, 'learning_rate': 9.2013e-06, 'epoch': 2.15} 05/11/2024 23:12:17 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7000 05/11/2024 23:12:17 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:12:17 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:12:17 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7000/tokenizer_config.json 05/11/2024 23:12:17 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7000/special_tokens_map.json 05/11/2024 23:12:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1504, 'learning_rate': 9.1702e-06, 'epoch': 2.16} 05/11/2024 23:12:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2082, 'learning_rate': 9.1390e-06, 'epoch': 2.16} 05/11/2024 23:12:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1485, 'learning_rate': 9.1079e-06, 'epoch': 2.16} 05/11/2024 23:12:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1636, 'learning_rate': 9.0768e-06, 'epoch': 2.16} 
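
Since this run uses LoRA fine-tuning, each checkpoint-NNNN directory written above should contain the adapter weights alongside the tokenizer files noted in the log, so an intermediate checkpoint such as checkpoint-7000 can be tried for inference before training finishes. A minimal sketch with transformers and peft; the adapter path is taken from the log, while the prompt, dtype, and device_map settings are illustrative assumptions:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_dir = "saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7000"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_dir)   # attach the saved LoRA adapter
model.eval()

inputs = tokenizer("Give three tips for staying healthy.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
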
05/11/2024 23:13:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.2166, 'learning_rate': 9.0458e-06, 'epoch': 2.16} 05/11/2024 23:13:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0885, 'learning_rate': 9.0148e-06, 'epoch': 2.16} 05/11/2024 23:13:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2030, 'learning_rate': 8.9839e-06, 'epoch': 2.16} 05/11/2024 23:13:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.0762, 'learning_rate': 8.9529e-06, 'epoch': 2.17} 05/11/2024 23:13:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0562, 'learning_rate': 8.9221e-06, 'epoch': 2.17} 05/11/2024 23:13:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1853, 'learning_rate': 8.8913e-06, 'epoch': 2.17} 05/11/2024 23:14:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1117, 'learning_rate': 8.8605e-06, 'epoch': 2.17} 05/11/2024 23:14:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1283, 'learning_rate': 8.8297e-06, 'epoch': 2.17} 05/11/2024 23:14:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1936, 'learning_rate': 8.7990e-06, 'epoch': 2.17} 05/11/2024 23:14:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1856, 'learning_rate': 8.7684e-06, 'epoch': 2.18} 05/11/2024 23:14:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2375, 'learning_rate': 8.7378e-06, 'epoch': 2.18} 05/11/2024 23:15:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1860, 'learning_rate': 8.7072e-06, 'epoch': 2.18} 05/11/2024 23:15:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1079, 'learning_rate': 8.6767e-06, 'epoch': 2.18} 05/11/2024 23:15:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.0061, 'learning_rate': 8.6462e-06, 'epoch': 2.18} 05/11/2024 23:15:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0278, 'learning_rate': 8.6158e-06, 'epoch': 2.18} 05/11/2024 23:15:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1275, 'learning_rate': 8.5854e-06, 'epoch': 2.18} 05/11/2024 23:15:40 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7100 05/11/2024 23:15:41 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:15:41 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:15:41 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7100/tokenizer_config.json 05/11/2024 23:15:41 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7100/special_tokens_map.json 05/11/2024 23:15:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0621, 'learning_rate': 8.5550e-06, 'epoch': 2.19} 05/11/2024 23:16:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0334, 'learning_rate': 
8.5247e-06, 'epoch': 2.19} 05/11/2024 23:16:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1259, 'learning_rate': 8.4944e-06, 'epoch': 2.19} 05/11/2024 23:16:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.2130, 'learning_rate': 8.4642e-06, 'epoch': 2.19} 05/11/2024 23:16:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1650, 'learning_rate': 8.4340e-06, 'epoch': 2.19} 05/11/2024 23:16:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1034, 'learning_rate': 8.4039e-06, 'epoch': 2.19} 05/11/2024 23:16:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0357, 'learning_rate': 8.3738e-06, 'epoch': 2.20} 05/11/2024 23:17:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0494, 'learning_rate': 8.3437e-06, 'epoch': 2.20} 05/11/2024 23:17:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1883, 'learning_rate': 8.3137e-06, 'epoch': 2.20} 05/11/2024 23:17:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0538, 'learning_rate': 8.2837e-06, 'epoch': 2.20} 05/11/2024 23:17:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1396, 'learning_rate': 8.2538e-06, 'epoch': 2.20} 05/11/2024 23:17:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2732, 'learning_rate': 8.2239e-06, 'epoch': 2.20} 05/11/2024 23:17:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0282, 'learning_rate': 8.1941e-06, 'epoch': 2.20} 05/11/2024 23:18:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0670, 'learning_rate': 8.1643e-06, 'epoch': 2.21} 05/11/2024 23:18:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1353, 'learning_rate': 8.1345e-06, 'epoch': 2.21} 05/11/2024 23:18:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0704, 'learning_rate': 8.1048e-06, 'epoch': 2.21} 05/11/2024 23:18:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1664, 'learning_rate': 8.0751e-06, 'epoch': 2.21} 05/11/2024 23:18:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2461, 'learning_rate': 8.0455e-06, 'epoch': 2.21} 05/11/2024 23:18:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.0664, 'learning_rate': 8.0159e-06, 'epoch': 2.21} 05/11/2024 23:19:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1450, 'learning_rate': 7.9864e-06, 'epoch': 2.22} 05/11/2024 23:19:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7200 05/11/2024 23:19:08 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:19:08 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:19:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7200/tokenizer_config.json 05/11/2024 23:19:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7200/special_tokens_map.json 05/11/2024 23:19:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0990, 'learning_rate': 7.9569e-06, 'epoch': 2.22} 05/11/2024 23:19:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2413, 'learning_rate': 7.9275e-06, 'epoch': 2.22} 05/11/2024 23:19:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2822, 'learning_rate': 7.8981e-06, 'epoch': 2.22} 05/11/2024 23:19:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0832, 'learning_rate': 7.8687e-06, 'epoch': 2.22} 05/11/2024 23:20:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1282, 'learning_rate': 7.8394e-06, 'epoch': 2.22} 05/11/2024 23:20:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2089, 'learning_rate': 7.8101e-06, 'epoch': 2.22} 05/11/2024 23:20:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0814, 'learning_rate': 7.7809e-06, 'epoch': 2.23} 05/11/2024 23:20:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1818, 'learning_rate': 7.7517e-06, 'epoch': 2.23} 05/11/2024 23:20:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2211, 'learning_rate': 7.7226e-06, 'epoch': 2.23} 05/11/2024 23:20:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0336, 'learning_rate': 7.6935e-06, 'epoch': 2.23} 05/11/2024 23:21:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1502, 'learning_rate': 7.6645e-06, 'epoch': 2.23} 05/11/2024 23:21:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1147, 'learning_rate': 7.6355e-06, 'epoch': 2.23} 05/11/2024 23:21:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1423, 'learning_rate': 7.6065e-06, 'epoch': 2.24} 05/11/2024 23:21:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2132, 'learning_rate': 7.5776e-06, 'epoch': 2.24} 05/11/2024 23:21:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.9920, 'learning_rate': 7.5487e-06, 'epoch': 2.24} 05/11/2024 23:21:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1199, 'learning_rate': 7.5199e-06, 'epoch': 2.24} 05/11/2024 23:22:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2796, 'learning_rate': 7.4912e-06, 'epoch': 2.24} 05/11/2024 23:22:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1254, 'learning_rate': 7.4624e-06, 'epoch': 2.24} 05/11/2024 23:22:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1200, 'learning_rate': 7.4338e-06, 'epoch': 2.24} 05/11/2024 23:22:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1136, 'learning_rate': 7.4051e-06, 'epoch': 2.25} 05/11/2024 23:22:40 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7300 05/11/2024 23:22:41 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:22:41 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:22:41 
- INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7300/tokenizer_config.json 05/11/2024 23:22:41 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7300/special_tokens_map.json 05/11/2024 23:22:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0981, 'learning_rate': 7.3765e-06, 'epoch': 2.25} 05/11/2024 23:23:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.9997, 'learning_rate': 7.3480e-06, 'epoch': 2.25} 05/11/2024 23:23:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1327, 'learning_rate': 7.3195e-06, 'epoch': 2.25} 05/11/2024 23:23:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1047, 'learning_rate': 7.2910e-06, 'epoch': 2.25} 05/11/2024 23:23:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0588, 'learning_rate': 7.2626e-06, 'epoch': 2.25} 05/11/2024 23:23:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0701, 'learning_rate': 7.2343e-06, 'epoch': 2.26} 05/11/2024 23:23:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1340, 'learning_rate': 7.2059e-06, 'epoch': 2.26} 05/11/2024 23:24:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0544, 'learning_rate': 7.1777e-06, 'epoch': 2.26} 05/11/2024 23:24:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1593, 'learning_rate': 7.1495e-06, 'epoch': 2.26} 05/11/2024 23:24:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1830, 'learning_rate': 7.1213e-06, 'epoch': 2.26} 05/11/2024 23:24:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0514, 'learning_rate': 7.0932e-06, 'epoch': 2.26} 05/11/2024 23:24:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.0868, 'learning_rate': 7.0651e-06, 'epoch': 2.26} 05/11/2024 23:24:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1540, 'learning_rate': 7.0370e-06, 'epoch': 2.27} 05/11/2024 23:25:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1350, 'learning_rate': 7.0090e-06, 'epoch': 2.27} 05/11/2024 23:25:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0843, 'learning_rate': 6.9811e-06, 'epoch': 2.27} 05/11/2024 23:25:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0135, 'learning_rate': 6.9532e-06, 'epoch': 2.27} 05/11/2024 23:25:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1527, 'learning_rate': 6.9254e-06, 'epoch': 2.27} 05/11/2024 23:25:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1415, 'learning_rate': 6.8976e-06, 'epoch': 2.27} 05/11/2024 23:25:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1465, 'learning_rate': 6.8698e-06, 'epoch': 2.28} 05/11/2024 23:26:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1570, 'learning_rate': 6.8421e-06, 'epoch': 2.28} 05/11/2024 23:26:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7400 05/11/2024 23:26:08 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:26:08 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:26:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7400/tokenizer_config.json 05/11/2024 23:26:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7400/special_tokens_map.json 05/11/2024 23:26:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1401, 'learning_rate': 6.8144e-06, 'epoch': 2.28} 05/11/2024 23:26:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1267, 'learning_rate': 6.7868e-06, 'epoch': 2.28} 05/11/2024 23:26:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1394, 'learning_rate': 6.7592e-06, 'epoch': 2.28} 05/11/2024 23:26:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1841, 'learning_rate': 6.7317e-06, 'epoch': 2.28} 05/11/2024 23:26:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0895, 'learning_rate': 6.7043e-06, 'epoch': 2.28} 05/11/2024 23:27:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1304, 'learning_rate': 6.6768e-06, 'epoch': 2.29} 05/11/2024 23:27:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.0330, 'learning_rate': 6.6495e-06, 'epoch': 2.29} 05/11/2024 23:27:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0643, 'learning_rate': 6.6221e-06, 'epoch': 2.29} 05/11/2024 23:27:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.0537, 'learning_rate': 6.5948e-06, 'epoch': 2.29} 05/11/2024 23:27:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1460, 'learning_rate': 6.5676e-06, 'epoch': 2.29} 05/11/2024 23:28:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1959, 'learning_rate': 6.5404e-06, 'epoch': 2.29} 05/11/2024 23:28:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1575, 'learning_rate': 6.5133e-06, 'epoch': 2.30} 05/11/2024 23:28:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.0784, 'learning_rate': 6.4862e-06, 'epoch': 2.30} 05/11/2024 23:28:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1037, 'learning_rate': 6.4592e-06, 'epoch': 2.30} 05/11/2024 23:28:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0831, 'learning_rate': 6.4322e-06, 'epoch': 2.30} 05/11/2024 23:28:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.2457, 'learning_rate': 6.4052e-06, 'epoch': 2.30} 05/11/2024 23:29:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2338, 'learning_rate': 6.3783e-06, 'epoch': 2.30} 05/11/2024 23:29:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1475, 'learning_rate': 6.3515e-06, 'epoch': 2.30} 05/11/2024 23:29:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2225, 'learning_rate': 6.3247e-06, 'epoch': 2.31} 05/11/2024 23:29:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0701, 'learning_rate': 6.2979e-06, 'epoch': 2.31} 05/11/2024 23:29:35 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7500 05/11/2024 23:29:36 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:29:36 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, 
"attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:29:36 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7500/tokenizer_config.json 05/11/2024 23:29:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7500/special_tokens_map.json 05/11/2024 23:29:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1267, 'learning_rate': 6.2712e-06, 'epoch': 2.31} 05/11/2024 23:29:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0784, 'learning_rate': 6.2446e-06, 'epoch': 2.31} 05/11/2024 23:30:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2017, 'learning_rate': 6.2180e-06, 'epoch': 2.31} 05/11/2024 23:30:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1995, 'learning_rate': 6.1914e-06, 'epoch': 2.31} 05/11/2024 23:30:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0586, 'learning_rate': 6.1649e-06, 'epoch': 2.32} 05/11/2024 23:30:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1833, 'learning_rate': 6.1384e-06, 'epoch': 2.32} 05/11/2024 23:30:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0803, 'learning_rate': 6.1120e-06, 'epoch': 2.32} 05/11/2024 23:31:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1680, 'learning_rate': 6.0857e-06, 'epoch': 2.32} 05/11/2024 23:31:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0812, 'learning_rate': 6.0593e-06, 'epoch': 2.32} 05/11/2024 23:31:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.0235, 'learning_rate': 6.0331e-06, 'epoch': 2.32} 05/11/2024 23:31:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0834, 'learning_rate': 6.0069e-06, 'epoch': 2.32} 05/11/2024 23:31:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0431, 'learning_rate': 5.9807e-06, 'epoch': 2.33} 05/11/2024 23:31:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1774, 'learning_rate': 5.9546e-06, 'epoch': 2.33} 05/11/2024 23:32:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2343, 'learning_rate': 5.9285e-06, 'epoch': 2.33} 05/11/2024 23:32:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0343, 'learning_rate': 5.9025e-06, 'epoch': 2.33} 05/11/2024 23:32:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.2398, 'learning_rate': 5.8765e-06, 'epoch': 2.33} 05/11/2024 23:32:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.9754, 'learning_rate': 5.8506e-06, 'epoch': 2.33} 05/11/2024 23:32:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.0585, 'learning_rate': 5.8247e-06, 'epoch': 2.34} 05/11/2024 23:32:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0558, 'learning_rate': 5.7989e-06, 'epoch': 2.34} 05/11/2024 23:33:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2275, 'learning_rate': 5.7732e-06, 'epoch': 2.34} 05/11/2024 23:33:05 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7600 05/11/2024 23:33:06 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:33:06 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:33:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7600/tokenizer_config.json 05/11/2024 23:33:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7600/special_tokens_map.json 05/11/2024 23:33:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.3907, 'learning_rate': 5.7474e-06, 'epoch': 2.34} 05/11/2024 23:33:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2227, 'learning_rate': 5.7218e-06, 'epoch': 2.34} 05/11/2024 23:33:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2214, 'learning_rate': 5.6962e-06, 'epoch': 2.34} 05/11/2024 23:33:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1048, 'learning_rate': 5.6706e-06, 'epoch': 2.34} 05/11/2024 23:33:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0361, 'learning_rate': 5.6451e-06, 'epoch': 2.35} 05/11/2024 23:34:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1273, 'learning_rate': 5.6196e-06, 'epoch': 2.35} 05/11/2024 23:34:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9944, 'learning_rate': 5.5942e-06, 'epoch': 2.35} 05/11/2024 23:34:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1266, 'learning_rate': 5.5688e-06, 'epoch': 2.35} 05/11/2024 23:34:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2082, 'learning_rate': 5.5435e-06, 'epoch': 2.35} 05/11/2024 23:34:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1022, 'learning_rate': 5.5182e-06, 'epoch': 2.35} 05/11/2024 23:34:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.2594, 'learning_rate': 5.4930e-06, 'epoch': 2.36} 05/11/2024 23:35:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.0596, 'learning_rate': 5.4679e-06, 'epoch': 2.36} 05/11/2024 23:35:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1742, 'learning_rate': 5.4427e-06, 'epoch': 2.36} 05/11/2024 23:35:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2038, 'learning_rate': 5.4177e-06, 'epoch': 2.36} 05/11/2024 23:35:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.1756, 'learning_rate': 5.3927e-06, 'epoch': 2.36} 05/11/2024 23:35:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1091, 'learning_rate': 5.3677e-06, 'epoch': 2.36} 05/11/2024 23:36:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1683, 'learning_rate': 5.3428e-06, 'epoch': 2.36} 05/11/2024 23:36:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1414, 'learning_rate': 5.3179e-06, 'epoch': 2.37} 05/11/2024 23:36:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1998, 'learning_rate': 5.2931e-06, 'epoch': 2.37} 05/11/2024 23:36:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1199, 'learning_rate': 5.2684e-06, 'epoch': 
2.37} 05/11/2024 23:36:34 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7700 05/11/2024 23:36:35 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:36:35 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:36:35 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7700/tokenizer_config.json 05/11/2024 23:36:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7700/special_tokens_map.json 05/11/2024 23:36:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1662, 'learning_rate': 5.2437e-06, 'epoch': 2.37} 05/11/2024 23:36:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1581, 'learning_rate': 5.2190e-06, 'epoch': 2.37} 05/11/2024 23:37:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.0319, 'learning_rate': 5.1944e-06, 'epoch': 2.37} 05/11/2024 23:37:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1525, 'learning_rate': 5.1698e-06, 'epoch': 2.38} 05/11/2024 23:37:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2017, 'learning_rate': 5.1453e-06, 'epoch': 2.38} 05/11/2024 23:37:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1709, 'learning_rate': 5.1209e-06, 'epoch': 2.38} 05/11/2024 23:37:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.2094, 'learning_rate': 5.0965e-06, 'epoch': 2.38} 05/11/2024 23:38:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.0785, 'learning_rate': 5.0722e-06, 'epoch': 2.38} 05/11/2024 23:38:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2777, 'learning_rate': 5.0479e-06, 'epoch': 2.38} 05/11/2024 23:38:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2303, 'learning_rate': 5.0236e-06, 'epoch': 2.38} 05/11/2024 23:38:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0655, 'learning_rate': 4.9994e-06, 'epoch': 2.39} 05/11/2024 23:38:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2030, 'learning_rate': 4.9753e-06, 'epoch': 2.39} 05/11/2024 23:38:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1294, 'learning_rate': 4.9512e-06, 'epoch': 2.39} 05/11/2024 23:39:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1303, 'learning_rate': 4.9272e-06, 'epoch': 2.39} 05/11/2024 23:39:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2634, 'learning_rate': 4.9032e-06, 'epoch': 2.39} 05/11/2024 23:39:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0123, 'learning_rate': 4.8792e-06, 'epoch': 2.39} 05/11/2024 23:39:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1650, 'learning_rate': 4.8554e-06, 'epoch': 2.40} 05/11/2024 23:39:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1348, 
'learning_rate': 4.8315e-06, 'epoch': 2.40} 05/11/2024 23:39:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0279, 'learning_rate': 4.8078e-06, 'epoch': 2.40} 05/11/2024 23:40:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2083, 'learning_rate': 4.7840e-06, 'epoch': 2.40} 05/11/2024 23:40:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7800 05/11/2024 23:40:07 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:40:07 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:40:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7800/tokenizer_config.json 05/11/2024 23:40:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7800/special_tokens_map.json 05/11/2024 23:40:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0125, 'learning_rate': 4.7604e-06, 'epoch': 2.40} 05/11/2024 23:40:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1643, 'learning_rate': 4.7368e-06, 'epoch': 2.40} 05/11/2024 23:40:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1108, 'learning_rate': 4.7132e-06, 'epoch': 2.40} 05/11/2024 23:40:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1573, 'learning_rate': 4.6897e-06, 'epoch': 2.41} 05/11/2024 23:41:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1320, 'learning_rate': 4.6662e-06, 'epoch': 2.41} 05/11/2024 23:41:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1534, 'learning_rate': 4.6428e-06, 'epoch': 2.41} 05/11/2024 23:41:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1766, 'learning_rate': 4.6195e-06, 'epoch': 2.41} 05/11/2024 23:41:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1984, 'learning_rate': 4.5962e-06, 'epoch': 2.41} 05/11/2024 23:41:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1080, 'learning_rate': 4.5729e-06, 'epoch': 2.41} 05/11/2024 23:41:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1951, 'learning_rate': 4.5497e-06, 'epoch': 2.42} 05/11/2024 23:42:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0645, 'learning_rate': 4.5266e-06, 'epoch': 2.42} 05/11/2024 23:42:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1204, 'learning_rate': 4.5035e-06, 'epoch': 2.42} 05/11/2024 23:42:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.0063, 'learning_rate': 4.4805e-06, 'epoch': 2.42} 05/11/2024 23:42:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1522, 'learning_rate': 4.4575e-06, 'epoch': 2.42} 05/11/2024 23:42:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2111, 'learning_rate': 4.4346e-06, 'epoch': 2.42} 05/11/2024 23:42:57 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1406, 'learning_rate': 4.4117e-06, 'epoch': 2.42} 05/11/2024 23:43:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1284, 'learning_rate': 4.3889e-06, 'epoch': 2.43} 05/11/2024 23:43:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.2728, 'learning_rate': 4.3661e-06, 'epoch': 2.43} 05/11/2024 23:43:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.2069, 'learning_rate': 4.3434e-06, 'epoch': 2.43} 05/11/2024 23:43:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.1109, 'learning_rate': 4.3207e-06, 'epoch': 2.43} 05/11/2024 23:43:39 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7900 05/11/2024 23:43:39 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:43:39 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:43:39 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7900/tokenizer_config.json 05/11/2024 23:43:39 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-7900/special_tokens_map.json 05/11/2024 23:43:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1645, 'learning_rate': 4.2981e-06, 'epoch': 2.43} 05/11/2024 23:44:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1356, 'learning_rate': 4.2756e-06, 'epoch': 2.43} 05/11/2024 23:44:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1087, 'learning_rate': 4.2531e-06, 'epoch': 2.44} 05/11/2024 23:44:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0055, 'learning_rate': 4.2306e-06, 'epoch': 2.44} 05/11/2024 23:44:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1642, 'learning_rate': 4.2082e-06, 'epoch': 2.44} 05/11/2024 23:44:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.9859, 'learning_rate': 4.1859e-06, 'epoch': 2.44} 05/11/2024 23:44:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1014, 'learning_rate': 4.1636e-06, 'epoch': 2.44} 05/11/2024 23:45:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1734, 'learning_rate': 4.1414e-06, 'epoch': 2.44} 05/11/2024 23:45:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0406, 'learning_rate': 4.1192e-06, 'epoch': 2.44} 05/11/2024 23:45:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1752, 'learning_rate': 4.0971e-06, 'epoch': 2.45} 05/11/2024 23:45:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1408, 'learning_rate': 4.0750e-06, 'epoch': 2.45} 05/11/2024 23:45:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1057, 'learning_rate': 4.0530e-06, 'epoch': 2.45} 05/11/2024 23:45:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1457, 'learning_rate': 4.0310e-06, 'epoch': 2.45} 
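Every training signal in this run surfaces only through the llmtuner.extras.callbacks lines above (entries of the form {'loss': 1.1406, 'learning_rate': 4.4117e-06, 'epoch': 2.42}), so recovering a loss curve means parsing the console output itself. A minimal sketch, assuming this output has been copied into a file named train_console.log (that filename is an assumption, not an artifact of the run):

import re

# Hypothetical path to a saved copy of this console output.
LOG_PATH = "train_console.log"

# Matches callback entries such as:
#   {'loss': 1.1406, 'learning_rate': 4.4117e-06, 'epoch': 2.42}
ENTRY = re.compile(
    r"\{'loss': (?P<loss>[\d.]+), "
    r"'learning_rate': (?P<lr>[\d.eE+-]+), "
    r"'epoch': (?P<epoch>[\d.]+)\}"
)

def parse_log(path):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # The entries appear in training order, so the result is already chronological.
    return [(float(m["epoch"]), float(m["loss"]), float(m["lr"]))
            for m in ENTRY.finditer(text)]

if __name__ == "__main__":
    for epoch, loss, lr in parse_log(LOG_PATH)[:3]:
        print(f"epoch={epoch:.2f}  loss={loss:.4f}  lr={lr:.4e}")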
05/11/2024 23:46:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.2844, 'learning_rate': 4.0091e-06, 'epoch': 2.45} 05/11/2024 23:46:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1268, 'learning_rate': 3.9873e-06, 'epoch': 2.45} 05/11/2024 23:46:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1367, 'learning_rate': 3.9655e-06, 'epoch': 2.46} 05/11/2024 23:46:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1304, 'learning_rate': 3.9438e-06, 'epoch': 2.46} 05/11/2024 23:46:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1083, 'learning_rate': 3.9221e-06, 'epoch': 2.46} 05/11/2024 23:46:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1529, 'learning_rate': 3.9004e-06, 'epoch': 2.46} 05/11/2024 23:47:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1808, 'learning_rate': 3.8789e-06, 'epoch': 2.46} 05/11/2024 23:47:07 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8000 05/11/2024 23:47:08 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:47:08 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:47:08 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8000/tokenizer_config.json 05/11/2024 23:47:08 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8000/special_tokens_map.json 05/11/2024 23:47:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1583, 'learning_rate': 3.8573e-06, 'epoch': 2.46} 05/11/2024 23:47:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0924, 'learning_rate': 3.8359e-06, 'epoch': 2.46} 05/11/2024 23:47:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1786, 'learning_rate': 3.8145e-06, 'epoch': 2.47} 05/11/2024 23:47:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0253, 'learning_rate': 3.7931e-06, 'epoch': 2.47} 05/11/2024 23:48:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2051, 'learning_rate': 3.7718e-06, 'epoch': 2.47} 05/11/2024 23:48:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1933, 'learning_rate': 3.7506e-06, 'epoch': 2.47} 05/11/2024 23:48:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0577, 'learning_rate': 3.7294e-06, 'epoch': 2.47} 05/11/2024 23:48:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1723, 'learning_rate': 3.7082e-06, 'epoch': 2.47} 05/11/2024 23:48:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.3035, 'learning_rate': 3.6872e-06, 'epoch': 2.48} 05/11/2024 23:48:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.2075, 'learning_rate': 3.6661e-06, 'epoch': 2.48} 05/11/2024 23:49:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0978, 'learning_rate': 
3.6452e-06, 'epoch': 2.48} 05/11/2024 23:49:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1551, 'learning_rate': 3.6242e-06, 'epoch': 2.48} 05/11/2024 23:49:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.0623, 'learning_rate': 3.6034e-06, 'epoch': 2.48} 05/11/2024 23:49:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1460, 'learning_rate': 3.5826e-06, 'epoch': 2.48} 05/11/2024 23:49:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1969, 'learning_rate': 3.5618e-06, 'epoch': 2.48} 05/11/2024 23:49:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.9617, 'learning_rate': 3.5411e-06, 'epoch': 2.49} 05/11/2024 23:50:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.0424, 'learning_rate': 3.5205e-06, 'epoch': 2.49} 05/11/2024 23:50:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0828, 'learning_rate': 3.4999e-06, 'epoch': 2.49} 05/11/2024 23:50:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.2526, 'learning_rate': 3.4794e-06, 'epoch': 2.49} 05/11/2024 23:50:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0448, 'learning_rate': 3.4589e-06, 'epoch': 2.49} 05/11/2024 23:50:40 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8100 05/11/2024 23:50:41 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:50:41 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:50:41 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8100/tokenizer_config.json 05/11/2024 23:50:41 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8100/special_tokens_map.json 05/11/2024 23:50:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1272, 'learning_rate': 3.4385e-06, 'epoch': 2.49} 05/11/2024 23:51:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0827, 'learning_rate': 3.4182e-06, 'epoch': 2.50} 05/11/2024 23:51:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0358, 'learning_rate': 3.3979e-06, 'epoch': 2.50} 05/11/2024 23:51:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1586, 'learning_rate': 3.3776e-06, 'epoch': 2.50} 05/11/2024 23:51:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0912, 'learning_rate': 3.3574e-06, 'epoch': 2.50} 05/11/2024 23:51:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.2092, 'learning_rate': 3.3373e-06, 'epoch': 2.50} 05/11/2024 23:51:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0629, 'learning_rate': 3.3172e-06, 'epoch': 2.50} 05/11/2024 23:52:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1027, 'learning_rate': 3.2972e-06, 'epoch': 2.50} 05/11/2024 23:52:19 - INFO - llmtuner.extras.callbacks - {'loss': 
1.2366, 'learning_rate': 3.2772e-06, 'epoch': 2.51} 05/11/2024 23:52:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1483, 'learning_rate': 3.2573e-06, 'epoch': 2.51} 05/11/2024 23:52:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0985, 'learning_rate': 3.2375e-06, 'epoch': 2.51} 05/11/2024 23:52:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0903, 'learning_rate': 3.2177e-06, 'epoch': 2.51} 05/11/2024 23:53:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1369, 'learning_rate': 3.1979e-06, 'epoch': 2.51} 05/11/2024 23:53:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.2038, 'learning_rate': 3.1783e-06, 'epoch': 2.51} 05/11/2024 23:53:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1077, 'learning_rate': 3.1586e-06, 'epoch': 2.52} 05/11/2024 23:53:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0097, 'learning_rate': 3.1391e-06, 'epoch': 2.52} 05/11/2024 23:53:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0125, 'learning_rate': 3.1196e-06, 'epoch': 2.52} 05/11/2024 23:53:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0558, 'learning_rate': 3.1001e-06, 'epoch': 2.52} 05/11/2024 23:54:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.1562, 'learning_rate': 3.0807e-06, 'epoch': 2.52} 05/11/2024 23:54:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1018, 'learning_rate': 3.0614e-06, 'epoch': 2.52} 05/11/2024 23:54:12 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8200 05/11/2024 23:54:13 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:54:13 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:54:13 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8200/tokenizer_config.json 05/11/2024 23:54:13 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8200/special_tokens_map.json 05/11/2024 23:54:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0944, 'learning_rate': 3.0421e-06, 'epoch': 2.52} 05/11/2024 23:54:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1025, 'learning_rate': 3.0228e-06, 'epoch': 2.53} 05/11/2024 23:54:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.1262, 'learning_rate': 3.0037e-06, 'epoch': 2.53} 05/11/2024 23:54:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1311, 'learning_rate': 2.9846e-06, 'epoch': 2.53} 05/11/2024 23:55:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1261, 'learning_rate': 2.9655e-06, 'epoch': 2.53} 05/11/2024 23:55:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2294, 'learning_rate': 2.9465e-06, 'epoch': 2.53} 05/11/2024 23:55:24 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1547, 'learning_rate': 2.9276e-06, 'epoch': 2.53} 05/11/2024 23:55:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0528, 'learning_rate': 2.9087e-06, 'epoch': 2.54} 05/11/2024 23:55:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2613, 'learning_rate': 2.8899e-06, 'epoch': 2.54} 05/11/2024 23:55:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0307, 'learning_rate': 2.8711e-06, 'epoch': 2.54} 05/11/2024 23:56:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2170, 'learning_rate': 2.8524e-06, 'epoch': 2.54} 05/11/2024 23:56:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.0502, 'learning_rate': 2.8337e-06, 'epoch': 2.54} 05/11/2024 23:56:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.3229, 'learning_rate': 2.8151e-06, 'epoch': 2.54} 05/11/2024 23:56:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1239, 'learning_rate': 2.7966e-06, 'epoch': 2.54} 05/11/2024 23:56:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0630, 'learning_rate': 2.7781e-06, 'epoch': 2.55} 05/11/2024 23:56:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1397, 'learning_rate': 2.7597e-06, 'epoch': 2.55} 05/11/2024 23:57:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1236, 'learning_rate': 2.7413e-06, 'epoch': 2.55} 05/11/2024 23:57:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0664, 'learning_rate': 2.7230e-06, 'epoch': 2.55} 05/11/2024 23:57:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1093, 'learning_rate': 2.7048e-06, 'epoch': 2.55} 05/11/2024 23:57:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1725, 'learning_rate': 2.6866e-06, 'epoch': 2.55} 05/11/2024 23:57:36 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8300 05/11/2024 23:57:36 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/11/2024 23:57:36 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/11/2024 23:57:36 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8300/tokenizer_config.json 05/11/2024 23:57:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8300/special_tokens_map.json 05/11/2024 23:57:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0883, 'learning_rate': 2.6684e-06, 'epoch': 2.56} 05/11/2024 23:57:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1141, 'learning_rate': 2.6504e-06, 'epoch': 2.56} 05/11/2024 23:58:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.1521, 'learning_rate': 2.6323e-06, 'epoch': 2.56} 05/11/2024 23:58:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0427, 'learning_rate': 2.6144e-06, 'epoch': 2.56} 
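The learning rate in these entries shrinks smoothly and non-linearly (about 5.75e-06 just after checkpoint-7600, about 2.69e-06 by checkpoint-8300), which is consistent with a plain cosine decay over the whole run. A small check, under the assumption that the schedule uses an initial learning rate of 5e-5, no warmup, and roughly 9,750 total optimizer steps (about three epochs at the step/epoch ratio implied by checkpoint-7600 landing near epoch 2.34):

import math

# Assumed schedule parameters; they are not printed in this part of the log.
INITIAL_LR = 5e-5       # assumed peak learning rate
TOTAL_STEPS = 9750      # assumed total optimizer steps (~3 epochs)

def cosine_lr(step, initial_lr=INITIAL_LR, total_steps=TOTAL_STEPS):
    """Cosine decay to zero with no warmup."""
    return 0.5 * initial_lr * (1.0 + math.cos(math.pi * step / total_steps))

print(f"{cosine_lr(7600):.4e}")  # ~5.76e-06; the log shows 5.7474e-06 just after checkpoint-7600
print(f"{cosine_lr(8300):.4e}")  # ~2.68e-06; the log shows 2.6866e-06 just before checkpoint-8300

The close agreement suggests the logged rates are simply this cosine curve sampled at the logging steps.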
05/11/2024 23:58:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1774, 'learning_rate': 2.5965e-06, 'epoch': 2.56} 05/11/2024 23:58:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1262, 'learning_rate': 2.5786e-06, 'epoch': 2.56} 05/11/2024 23:58:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2676, 'learning_rate': 2.5608e-06, 'epoch': 2.56} 05/11/2024 23:58:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0543, 'learning_rate': 2.5431e-06, 'epoch': 2.57} 05/11/2024 23:59:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1211, 'learning_rate': 2.5254e-06, 'epoch': 2.57} 05/11/2024 23:59:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1792, 'learning_rate': 2.5078e-06, 'epoch': 2.57} 05/11/2024 23:59:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1749, 'learning_rate': 2.4903e-06, 'epoch': 2.57} 05/11/2024 23:59:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0790, 'learning_rate': 2.4728e-06, 'epoch': 2.57} 05/11/2024 23:59:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1926, 'learning_rate': 2.4553e-06, 'epoch': 2.57} 05/12/2024 00:00:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1329, 'learning_rate': 2.4380e-06, 'epoch': 2.58} 05/12/2024 00:00:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0945, 'learning_rate': 2.4207e-06, 'epoch': 2.58} 05/12/2024 00:00:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.2291, 'learning_rate': 2.4034e-06, 'epoch': 2.58} 05/12/2024 00:00:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1130, 'learning_rate': 2.3862e-06, 'epoch': 2.58} 05/12/2024 00:00:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.2072, 'learning_rate': 2.3690e-06, 'epoch': 2.58} 05/12/2024 00:00:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0920, 'learning_rate': 2.3520e-06, 'epoch': 2.58} 05/12/2024 00:01:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2229, 'learning_rate': 2.3349e-06, 'epoch': 2.58} 05/12/2024 00:01:05 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8400 05/12/2024 00:01:05 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:01:05 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:01:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8400/tokenizer_config.json 05/12/2024 00:01:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8400/special_tokens_map.json 05/12/2024 00:01:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0366, 'learning_rate': 2.3180e-06, 'epoch': 2.59} 05/12/2024 00:01:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.1082, 'learning_rate': 
2.3011e-06, 'epoch': 2.59} 05/12/2024 00:01:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.1882, 'learning_rate': 2.2842e-06, 'epoch': 2.59} 05/12/2024 00:01:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1233, 'learning_rate': 2.2674e-06, 'epoch': 2.59} 05/12/2024 00:01:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.2028, 'learning_rate': 2.2507e-06, 'epoch': 2.59} 05/12/2024 00:02:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.2098, 'learning_rate': 2.2340e-06, 'epoch': 2.59} 05/12/2024 00:02:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0950, 'learning_rate': 2.2174e-06, 'epoch': 2.60} 05/12/2024 00:02:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2627, 'learning_rate': 2.2009e-06, 'epoch': 2.60} 05/12/2024 00:02:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.0631, 'learning_rate': 2.1844e-06, 'epoch': 2.60} 05/12/2024 00:02:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0974, 'learning_rate': 2.1679e-06, 'epoch': 2.60} 05/12/2024 00:02:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1896, 'learning_rate': 2.1515e-06, 'epoch': 2.60} 05/12/2024 00:03:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2268, 'learning_rate': 2.1352e-06, 'epoch': 2.60} 05/12/2024 00:03:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1769, 'learning_rate': 2.1190e-06, 'epoch': 2.60} 05/12/2024 00:03:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1078, 'learning_rate': 2.1028e-06, 'epoch': 2.61} 05/12/2024 00:03:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2161, 'learning_rate': 2.0866e-06, 'epoch': 2.61} 05/12/2024 00:03:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1708, 'learning_rate': 2.0706e-06, 'epoch': 2.61} 05/12/2024 00:03:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1895, 'learning_rate': 2.0545e-06, 'epoch': 2.61} 05/12/2024 00:04:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.2637, 'learning_rate': 2.0386e-06, 'epoch': 2.61} 05/12/2024 00:04:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1513, 'learning_rate': 2.0227e-06, 'epoch': 2.61} 05/12/2024 00:04:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0883, 'learning_rate': 2.0068e-06, 'epoch': 2.62} 05/12/2024 00:04:32 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8500 05/12/2024 00:04:33 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:04:33 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:04:33 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8500/tokenizer_config.json 05/12/2024 00:04:33 - INFO - transformers.tokenization_utils_base - Special tokens file saved in 
saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8500/special_tokens_map.json 05/12/2024 00:04:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.0835, 'learning_rate': 1.9911e-06, 'epoch': 2.62} 05/12/2024 00:04:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.2002, 'learning_rate': 1.9753e-06, 'epoch': 2.62} 05/12/2024 00:05:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2196, 'learning_rate': 1.9597e-06, 'epoch': 2.62} 05/12/2024 00:05:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0791, 'learning_rate': 1.9441e-06, 'epoch': 2.62} 05/12/2024 00:05:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2197, 'learning_rate': 1.9285e-06, 'epoch': 2.62} 05/12/2024 00:05:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0815, 'learning_rate': 1.9130e-06, 'epoch': 2.62} 05/12/2024 00:05:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1042, 'learning_rate': 1.8976e-06, 'epoch': 2.63} 05/12/2024 00:05:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0295, 'learning_rate': 1.8823e-06, 'epoch': 2.63} 05/12/2024 00:06:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1148, 'learning_rate': 1.8670e-06, 'epoch': 2.63} 05/12/2024 00:06:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1129, 'learning_rate': 1.8517e-06, 'epoch': 2.63} 05/12/2024 00:06:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1038, 'learning_rate': 1.8365e-06, 'epoch': 2.63} 05/12/2024 00:06:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.2579, 'learning_rate': 1.8214e-06, 'epoch': 2.63} 05/12/2024 00:06:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.2608, 'learning_rate': 1.8063e-06, 'epoch': 2.64} 05/12/2024 00:06:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.2120, 'learning_rate': 1.7913e-06, 'epoch': 2.64} 05/12/2024 00:07:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1288, 'learning_rate': 1.7764e-06, 'epoch': 2.64} 05/12/2024 00:07:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1998, 'learning_rate': 1.7615e-06, 'epoch': 2.64} 05/12/2024 00:07:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1055, 'learning_rate': 1.7467e-06, 'epoch': 2.64} 05/12/2024 00:07:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1111, 'learning_rate': 1.7319e-06, 'epoch': 2.64} 05/12/2024 00:07:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1661, 'learning_rate': 1.7172e-06, 'epoch': 2.64} 05/12/2024 00:07:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.9952, 'learning_rate': 1.7026e-06, 'epoch': 2.65} 05/12/2024 00:07:57 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8600 05/12/2024 00:07:58 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:07:58 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:07:58 
- INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8600/tokenizer_config.json 05/12/2024 00:07:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8600/special_tokens_map.json 05/12/2024 00:08:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.2905, 'learning_rate': 1.6880e-06, 'epoch': 2.65} 05/12/2024 00:08:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.0405, 'learning_rate': 1.6735e-06, 'epoch': 2.65} 05/12/2024 00:08:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1010, 'learning_rate': 1.6590e-06, 'epoch': 2.65} 05/12/2024 00:08:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.0497, 'learning_rate': 1.6446e-06, 'epoch': 2.65} 05/12/2024 00:08:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2324, 'learning_rate': 1.6303e-06, 'epoch': 2.65} 05/12/2024 00:08:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1059, 'learning_rate': 1.6160e-06, 'epoch': 2.66} 05/12/2024 00:09:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1365, 'learning_rate': 1.6018e-06, 'epoch': 2.66} 05/12/2024 00:09:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1834, 'learning_rate': 1.5877e-06, 'epoch': 2.66} 05/12/2024 00:09:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1222, 'learning_rate': 1.5736e-06, 'epoch': 2.66} 05/12/2024 00:09:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0845, 'learning_rate': 1.5595e-06, 'epoch': 2.66} 05/12/2024 00:09:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1634, 'learning_rate': 1.5456e-06, 'epoch': 2.66} 05/12/2024 00:10:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.2237, 'learning_rate': 1.5317e-06, 'epoch': 2.66} 05/12/2024 00:10:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1956, 'learning_rate': 1.5178e-06, 'epoch': 2.67} 05/12/2024 00:10:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1830, 'learning_rate': 1.5040e-06, 'epoch': 2.67} 05/12/2024 00:10:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0752, 'learning_rate': 1.4903e-06, 'epoch': 2.67} 05/12/2024 00:10:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1313, 'learning_rate': 1.4766e-06, 'epoch': 2.67} 05/12/2024 00:10:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.9894, 'learning_rate': 1.4630e-06, 'epoch': 2.67} 05/12/2024 00:11:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1927, 'learning_rate': 1.4495e-06, 'epoch': 2.67} 05/12/2024 00:11:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.2060, 'learning_rate': 1.4360e-06, 'epoch': 2.68} 05/12/2024 00:11:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1056, 'learning_rate': 1.4226e-06, 'epoch': 2.68} 05/12/2024 00:11:25 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8700 05/12/2024 00:11:26 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:11:26 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, 
"num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:11:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8700/tokenizer_config.json 05/12/2024 00:11:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8700/special_tokens_map.json 05/12/2024 00:11:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1469, 'learning_rate': 1.4092e-06, 'epoch': 2.68} 05/12/2024 00:11:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.0380, 'learning_rate': 1.3959e-06, 'epoch': 2.68} 05/12/2024 00:11:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1278, 'learning_rate': 1.3827e-06, 'epoch': 2.68} 05/12/2024 00:12:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1225, 'learning_rate': 1.3695e-06, 'epoch': 2.68} 05/12/2024 00:12:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1967, 'learning_rate': 1.3564e-06, 'epoch': 2.68} 05/12/2024 00:12:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1659, 'learning_rate': 1.3433e-06, 'epoch': 2.69} 05/12/2024 00:12:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2293, 'learning_rate': 1.3303e-06, 'epoch': 2.69} 05/12/2024 00:12:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0543, 'learning_rate': 1.3174e-06, 'epoch': 2.69} 05/12/2024 00:13:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0793, 'learning_rate': 1.3045e-06, 'epoch': 2.69} 05/12/2024 00:13:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1523, 'learning_rate': 1.2917e-06, 'epoch': 2.69} 05/12/2024 00:13:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0986, 'learning_rate': 1.2789e-06, 'epoch': 2.69} 05/12/2024 00:13:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0889, 'learning_rate': 1.2663e-06, 'epoch': 2.70} 05/12/2024 00:13:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1933, 'learning_rate': 1.2536e-06, 'epoch': 2.70} 05/12/2024 00:13:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1819, 'learning_rate': 1.2411e-06, 'epoch': 2.70} 05/12/2024 00:14:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2118, 'learning_rate': 1.2286e-06, 'epoch': 2.70} 05/12/2024 00:14:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.2306, 'learning_rate': 1.2161e-06, 'epoch': 2.70} 05/12/2024 00:14:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2009, 'learning_rate': 1.2038e-06, 'epoch': 2.70} 05/12/2024 00:14:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.0794, 'learning_rate': 1.1914e-06, 'epoch': 2.70} 05/12/2024 00:14:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1427, 'learning_rate': 1.1792e-06, 'epoch': 2.71} 05/12/2024 00:14:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1230, 'learning_rate': 1.1670e-06, 'epoch': 2.71} 05/12/2024 00:14:56 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8800 05/12/2024 00:14:57 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:14:57 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, 
"attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:14:57 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8800/tokenizer_config.json 05/12/2024 00:14:57 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8800/special_tokens_map.json 05/12/2024 00:15:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1457, 'learning_rate': 1.1549e-06, 'epoch': 2.71} 05/12/2024 00:15:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1163, 'learning_rate': 1.1428e-06, 'epoch': 2.71} 05/12/2024 00:15:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.0995, 'learning_rate': 1.1308e-06, 'epoch': 2.71} 05/12/2024 00:15:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1652, 'learning_rate': 1.1188e-06, 'epoch': 2.71} 05/12/2024 00:15:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1864, 'learning_rate': 1.1070e-06, 'epoch': 2.72} 05/12/2024 00:15:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0647, 'learning_rate': 1.0951e-06, 'epoch': 2.72} 05/12/2024 00:16:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.2105, 'learning_rate': 1.0834e-06, 'epoch': 2.72} 05/12/2024 00:16:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.1670, 'learning_rate': 1.0717e-06, 'epoch': 2.72} 05/12/2024 00:16:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1643, 'learning_rate': 1.0600e-06, 'epoch': 2.72} 05/12/2024 00:16:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2436, 'learning_rate': 1.0485e-06, 'epoch': 2.72} 05/12/2024 00:16:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0879, 'learning_rate': 1.0370e-06, 'epoch': 2.72} 05/12/2024 00:17:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.2060, 'learning_rate': 1.0255e-06, 'epoch': 2.73} 05/12/2024 00:17:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2533, 'learning_rate': 1.0141e-06, 'epoch': 2.73} 05/12/2024 00:17:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1394, 'learning_rate': 1.0028e-06, 'epoch': 2.73} 05/12/2024 00:17:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1713, 'learning_rate': 9.9153e-07, 'epoch': 2.73} 05/12/2024 00:17:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1108, 'learning_rate': 9.8033e-07, 'epoch': 2.73} 05/12/2024 00:17:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1910, 'learning_rate': 9.6920e-07, 'epoch': 2.73} 05/12/2024 00:18:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9247, 'learning_rate': 9.5812e-07, 'epoch': 2.74} 05/12/2024 00:18:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0897, 'learning_rate': 9.4711e-07, 'epoch': 2.74} 05/12/2024 00:18:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1201, 'learning_rate': 9.3616e-07, 'epoch': 2.74} 05/12/2024 00:18:25 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8900 05/12/2024 00:18:25 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at 
/home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:18:25 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:18:25 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8900/tokenizer_config.json 05/12/2024 00:18:25 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-8900/special_tokens_map.json 05/12/2024 00:18:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2296, 'learning_rate': 9.2527e-07, 'epoch': 2.74} 05/12/2024 00:18:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.1622, 'learning_rate': 9.1445e-07, 'epoch': 2.74} 05/12/2024 00:18:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1528, 'learning_rate': 9.0369e-07, 'epoch': 2.74} 05/12/2024 00:19:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1723, 'learning_rate': 8.9299e-07, 'epoch': 2.74} 05/12/2024 00:19:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.1311, 'learning_rate': 8.8235e-07, 'epoch': 2.75} 05/12/2024 00:19:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2323, 'learning_rate': 8.7177e-07, 'epoch': 2.75} 05/12/2024 00:19:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1243, 'learning_rate': 8.6126e-07, 'epoch': 2.75} 05/12/2024 00:19:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0926, 'learning_rate': 8.5081e-07, 'epoch': 2.75} 05/12/2024 00:19:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1677, 'learning_rate': 8.4043e-07, 'epoch': 2.75} 05/12/2024 00:20:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1723, 'learning_rate': 8.3010e-07, 'epoch': 2.75} 05/12/2024 00:20:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1830, 'learning_rate': 8.1984e-07, 'epoch': 2.76} 05/12/2024 00:20:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0918, 'learning_rate': 8.0964e-07, 'epoch': 2.76} 05/12/2024 00:20:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2304, 'learning_rate': 7.9951e-07, 'epoch': 2.76} 05/12/2024 00:20:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1795, 'learning_rate': 7.8943e-07, 'epoch': 2.76} 05/12/2024 00:21:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.0852, 'learning_rate': 7.7942e-07, 'epoch': 2.76} 05/12/2024 00:21:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1222, 'learning_rate': 7.6948e-07, 'epoch': 2.76} 05/12/2024 00:21:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.0621, 'learning_rate': 7.5959e-07, 'epoch': 2.76} 05/12/2024 00:21:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2151, 'learning_rate': 7.4977e-07, 'epoch': 2.77} 05/12/2024 00:21:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0939, 'learning_rate': 7.4001e-07, 'epoch': 2.77} 05/12/2024 00:21:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1962, 'learning_rate': 7.3032e-07, 'epoch': 
2.77} 05/12/2024 00:21:51 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9000 05/12/2024 00:21:52 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:21:52 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:21:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9000/tokenizer_config.json 05/12/2024 00:21:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9000/special_tokens_map.json 05/12/2024 00:22:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.1294, 'learning_rate': 7.2068e-07, 'epoch': 2.77} 05/12/2024 00:22:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1511, 'learning_rate': 7.1111e-07, 'epoch': 2.77} 05/12/2024 00:22:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0526, 'learning_rate': 7.0161e-07, 'epoch': 2.77} 05/12/2024 00:22:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0708, 'learning_rate': 6.9216e-07, 'epoch': 2.78} 05/12/2024 00:22:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0874, 'learning_rate': 6.8278e-07, 'epoch': 2.78} 05/12/2024 00:22:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1287, 'learning_rate': 6.7347e-07, 'epoch': 2.78} 05/12/2024 00:23:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.2326, 'learning_rate': 6.6421e-07, 'epoch': 2.78} 05/12/2024 00:23:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1212, 'learning_rate': 6.5502e-07, 'epoch': 2.78} 05/12/2024 00:23:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.2275, 'learning_rate': 6.4589e-07, 'epoch': 2.78} 05/12/2024 00:23:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1677, 'learning_rate': 6.3683e-07, 'epoch': 2.78} 05/12/2024 00:23:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0570, 'learning_rate': 6.2783e-07, 'epoch': 2.79} 05/12/2024 00:23:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1660, 'learning_rate': 6.1889e-07, 'epoch': 2.79} 05/12/2024 00:24:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1660, 'learning_rate': 6.1001e-07, 'epoch': 2.79} 05/12/2024 00:24:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.2142, 'learning_rate': 6.0120e-07, 'epoch': 2.79} 05/12/2024 00:24:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.2034, 'learning_rate': 5.9245e-07, 'epoch': 2.79} 05/12/2024 00:24:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.0679, 'learning_rate': 5.8377e-07, 'epoch': 2.79} 05/12/2024 00:24:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.2121, 'learning_rate': 5.7515e-07, 'epoch': 2.80} 05/12/2024 00:25:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.0761, 
'learning_rate': 5.6659e-07, 'epoch': 2.80} 05/12/2024 00:25:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.1133, 'learning_rate': 5.5810e-07, 'epoch': 2.80} 05/12/2024 00:25:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0562, 'learning_rate': 5.4966e-07, 'epoch': 2.80} 05/12/2024 00:25:23 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9100 05/12/2024 00:25:24 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:25:24 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:25:24 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9100/tokenizer_config.json 05/12/2024 00:25:24 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9100/special_tokens_map.json 05/12/2024 00:25:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1331, 'learning_rate': 5.4130e-07, 'epoch': 2.80} 05/12/2024 00:25:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1539, 'learning_rate': 5.3299e-07, 'epoch': 2.80} 05/12/2024 00:25:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.1187, 'learning_rate': 5.2475e-07, 'epoch': 2.80} 05/12/2024 00:26:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1518, 'learning_rate': 5.1657e-07, 'epoch': 2.81} 05/12/2024 00:26:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.0955, 'learning_rate': 5.0846e-07, 'epoch': 2.81} 05/12/2024 00:26:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1842, 'learning_rate': 5.0041e-07, 'epoch': 2.81} 05/12/2024 00:26:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1416, 'learning_rate': 4.9242e-07, 'epoch': 2.81} 05/12/2024 00:26:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1191, 'learning_rate': 4.8450e-07, 'epoch': 2.81} 05/12/2024 00:26:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0210, 'learning_rate': 4.7664e-07, 'epoch': 2.81} 05/12/2024 00:27:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0401, 'learning_rate': 4.6885e-07, 'epoch': 2.82} 05/12/2024 00:27:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1469, 'learning_rate': 4.6112e-07, 'epoch': 2.82} 05/12/2024 00:27:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1372, 'learning_rate': 4.5345e-07, 'epoch': 2.82} 05/12/2024 00:27:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.1779, 'learning_rate': 4.4584e-07, 'epoch': 2.82} 05/12/2024 00:27:44 - INFO - llmtuner.extras.callbacks - {'loss': 1.1100, 'learning_rate': 4.3830e-07, 'epoch': 2.82} 05/12/2024 00:27:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.0576, 'learning_rate': 4.3082e-07, 'epoch': 2.82} 05/12/2024 00:28:04 - INFO - 
llmtuner.extras.callbacks - {'loss': 1.1202, 'learning_rate': 4.2341e-07, 'epoch': 2.82} 05/12/2024 00:28:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.2508, 'learning_rate': 4.1606e-07, 'epoch': 2.83} 05/12/2024 00:28:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1784, 'learning_rate': 4.0878e-07, 'epoch': 2.83} 05/12/2024 00:28:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.2262, 'learning_rate': 4.0155e-07, 'epoch': 2.83} 05/12/2024 00:28:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.1860, 'learning_rate': 3.9440e-07, 'epoch': 2.83} 05/12/2024 00:28:45 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9200 05/12/2024 00:28:46 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:28:46 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:28:46 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9200/tokenizer_config.json 05/12/2024 00:28:46 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9200/special_tokens_map.json 05/12/2024 00:28:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1431, 'learning_rate': 3.8730e-07, 'epoch': 2.83} 05/12/2024 00:29:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.0308, 'learning_rate': 3.8027e-07, 'epoch': 2.83} 05/12/2024 00:29:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0939, 'learning_rate': 3.7331e-07, 'epoch': 2.84} 05/12/2024 00:29:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1051, 'learning_rate': 3.6640e-07, 'epoch': 2.84} 05/12/2024 00:29:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.0794, 'learning_rate': 3.5957e-07, 'epoch': 2.84} 05/12/2024 00:29:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.2363, 'learning_rate': 3.5279e-07, 'epoch': 2.84} 05/12/2024 00:29:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0583, 'learning_rate': 3.4608e-07, 'epoch': 2.84} 05/12/2024 00:30:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.2122, 'learning_rate': 3.3943e-07, 'epoch': 2.84} 05/12/2024 00:30:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1871, 'learning_rate': 3.3285e-07, 'epoch': 2.84} 05/12/2024 00:30:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0696, 'learning_rate': 3.2633e-07, 'epoch': 2.85} 05/12/2024 00:30:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0279, 'learning_rate': 3.1988e-07, 'epoch': 2.85} 05/12/2024 00:30:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1212, 'learning_rate': 3.1349e-07, 'epoch': 2.85} 05/12/2024 00:31:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.1438, 'learning_rate': 3.0716e-07, 'epoch': 2.85} 
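Each of these checkpoint-NNNN directories is expected to hold the LoRA adapter weights plus the tokenizer files noted in the log, rather than a full copy of the 8B base model, so reloading one means re-attaching the adapter to the base checkpoint. A sketch using peft and transformers, assuming checkpoint-9200 contains a standard PEFT adapter (adapter_config.json plus adapter weights) alongside the tokenizer files saved above:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER = "saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9200"

# Tokenizer files were written into the checkpoint directory (see the save messages above).
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)

# Load the frozen base model, then attach the LoRA adapter trained in this run.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, ADAPTER)

# Optional: fold the adapter into the base weights for plain transformers inference.
model = model.merge_and_unload()

Merging is convenient for serving, while keeping the adapter separate preserves the option of swapping or stacking adapters later.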
05/12/2024 00:31:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.2395, 'learning_rate': 3.0090e-07, 'epoch': 2.85} 05/12/2024 00:31:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0578, 'learning_rate': 2.9470e-07, 'epoch': 2.85} 05/12/2024 00:31:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1106, 'learning_rate': 2.8857e-07, 'epoch': 2.86} 05/12/2024 00:31:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0286, 'learning_rate': 2.8250e-07, 'epoch': 2.86} 05/12/2024 00:31:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1635, 'learning_rate': 2.7649e-07, 'epoch': 2.86} 05/12/2024 00:32:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1557, 'learning_rate': 2.7055e-07, 'epoch': 2.86} 05/12/2024 00:32:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.0724, 'learning_rate': 2.6467e-07, 'epoch': 2.86} 05/12/2024 00:32:12 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9300 05/12/2024 00:32:12 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json 05/12/2024 00:32:12 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 } 05/12/2024 00:32:12 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9300/tokenizer_config.json 05/12/2024 00:32:12 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9300/special_tokens_map.json 05/12/2024 00:32:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1252, 'learning_rate': 2.5886e-07, 'epoch': 2.86} 05/12/2024 00:32:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.1687, 'learning_rate': 2.5311e-07, 'epoch': 2.86} 05/12/2024 00:32:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0814, 'learning_rate': 2.4743e-07, 'epoch': 2.87} 05/12/2024 00:32:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0850, 'learning_rate': 2.4181e-07, 'epoch': 2.87} 05/12/2024 00:33:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0728, 'learning_rate': 2.3625e-07, 'epoch': 2.87} 05/12/2024 00:33:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1025, 'learning_rate': 2.3076e-07, 'epoch': 2.87} 05/12/2024 00:33:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2473, 'learning_rate': 2.2533e-07, 'epoch': 2.87} 05/12/2024 00:33:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.1593, 'learning_rate': 2.1997e-07, 'epoch': 2.87} 05/12/2024 00:33:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.2071, 'learning_rate': 2.1467e-07, 'epoch': 2.88} 05/12/2024 00:33:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.1790, 'learning_rate': 2.0943e-07, 'epoch': 2.88} 05/12/2024 00:34:08 - INFO - llmtuner.extras.callbacks - {'loss': 1.1551, 'learning_rate': 
05/12/2024 00:34:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9862, 'learning_rate': 1.9916e-07, 'epoch': 2.88}
05/12/2024 00:34:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.1702, 'learning_rate': 1.9412e-07, 'epoch': 2.88}
05/12/2024 00:34:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1837, 'learning_rate': 1.8914e-07, 'epoch': 2.88}
05/12/2024 00:34:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.1530, 'learning_rate': 1.8423e-07, 'epoch': 2.88}
05/12/2024 00:35:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1127, 'learning_rate': 1.7938e-07, 'epoch': 2.89}
05/12/2024 00:35:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1165, 'learning_rate': 1.7459e-07, 'epoch': 2.89}
05/12/2024 00:35:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.2549, 'learning_rate': 1.6987e-07, 'epoch': 2.89}
05/12/2024 00:35:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.1517, 'learning_rate': 1.6522e-07, 'epoch': 2.89}
05/12/2024 00:35:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.0008, 'learning_rate': 1.6063e-07, 'epoch': 2.89}
05/12/2024 00:35:43 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9400
05/12/2024 00:35:43 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/12/2024 00:35:43 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 }
05/12/2024 00:35:43 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9400/tokenizer_config.json
05/12/2024 00:35:43 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9400/special_tokens_map.json
05/12/2024 00:35:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0936, 'learning_rate': 1.5610e-07, 'epoch': 2.89}
05/12/2024 00:36:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1710, 'learning_rate': 1.5164e-07, 'epoch': 2.90}
05/12/2024 00:36:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1666, 'learning_rate': 1.4724e-07, 'epoch': 2.90}
05/12/2024 00:36:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1640, 'learning_rate': 1.4291e-07, 'epoch': 2.90}
05/12/2024 00:36:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.2611, 'learning_rate': 1.3864e-07, 'epoch': 2.90}
05/12/2024 00:36:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.1891, 'learning_rate': 1.3444e-07, 'epoch': 2.90}
05/12/2024 00:36:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1009, 'learning_rate': 1.3030e-07, 'epoch': 2.90}
05/12/2024 00:37:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1941, 'learning_rate': 1.2622e-07, 'epoch': 2.90}
05/12/2024 00:37:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.1932, 'learning_rate': 1.2221e-07, 'epoch': 2.91}
05/12/2024 00:37:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.2584, 'learning_rate': 1.1827e-07, 'epoch': 2.91}
05/12/2024 00:37:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1569, 'learning_rate': 1.1439e-07, 'epoch': 2.91}
05/12/2024 00:37:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1326, 'learning_rate': 1.1057e-07, 'epoch': 2.91}
05/12/2024 00:37:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.0110, 'learning_rate': 1.0682e-07, 'epoch': 2.91}
05/12/2024 00:38:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1797, 'learning_rate': 1.0313e-07, 'epoch': 2.91}
05/12/2024 00:38:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.1561, 'learning_rate': 9.9511e-08, 'epoch': 2.92}
05/12/2024 00:38:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.0840, 'learning_rate': 9.5953e-08, 'epoch': 2.92}
05/12/2024 00:38:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1090, 'learning_rate': 9.2460e-08, 'epoch': 2.92}
05/12/2024 00:38:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0309, 'learning_rate': 8.9032e-08, 'epoch': 2.92}
05/12/2024 00:39:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.1081, 'learning_rate': 8.5668e-08, 'epoch': 2.92}
05/12/2024 00:39:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1672, 'learning_rate': 8.2369e-08, 'epoch': 2.92}
05/12/2024 00:39:11 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9500
05/12/2024 00:39:12 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/12/2024 00:39:12 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 }
05/12/2024 00:39:12 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9500/tokenizer_config.json
05/12/2024 00:39:12 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9500/special_tokens_map.json
05/12/2024 00:39:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.0484, 'learning_rate': 7.9134e-08, 'epoch': 2.92}
05/12/2024 00:39:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.2351, 'learning_rate': 7.5965e-08, 'epoch': 2.93}
05/12/2024 00:39:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.2732, 'learning_rate': 7.2859e-08, 'epoch': 2.93}
05/12/2024 00:39:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.1274, 'learning_rate': 6.9819e-08, 'epoch': 2.93}
05/12/2024 00:40:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0230, 'learning_rate': 6.6843e-08, 'epoch': 2.93}
05/12/2024 00:40:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.2624, 'learning_rate': 6.3932e-08, 'epoch': 2.93}
05/12/2024 00:40:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.1346, 'learning_rate': 6.1086e-08, 'epoch': 2.93}
05/12/2024 00:40:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.1768, 'learning_rate': 5.8305e-08, 'epoch': 2.94}
05/12/2024 00:40:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.1430, 'learning_rate': 5.5588e-08, 'epoch': 2.94}
05/12/2024 00:40:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1751, 'learning_rate': 5.2936e-08, 'epoch': 2.94}
05/12/2024 00:41:09 - INFO - llmtuner.extras.callbacks - {'loss': 1.1220, 'learning_rate': 5.0349e-08, 'epoch': 2.94}
05/12/2024 00:41:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1138, 'learning_rate': 4.7826e-08, 'epoch': 2.94}
05/12/2024 00:41:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.1148, 'learning_rate': 4.5368e-08, 'epoch': 2.94}
05/12/2024 00:41:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.2622, 'learning_rate': 4.2975e-08, 'epoch': 2.94}
05/12/2024 00:41:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.2263, 'learning_rate': 4.0647e-08, 'epoch': 2.95}
05/12/2024 00:42:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2043, 'learning_rate': 3.8384e-08, 'epoch': 2.95}
05/12/2024 00:42:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1806, 'learning_rate': 3.6185e-08, 'epoch': 2.95}
05/12/2024 00:42:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.2000, 'learning_rate': 3.4051e-08, 'epoch': 2.95}
05/12/2024 00:42:32 - INFO - llmtuner.extras.callbacks - {'loss': 1.0731, 'learning_rate': 3.1982e-08, 'epoch': 2.95}
05/12/2024 00:42:42 - INFO - llmtuner.extras.callbacks - {'loss': 1.2088, 'learning_rate': 2.9978e-08, 'epoch': 2.95}
05/12/2024 00:42:42 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9600
05/12/2024 00:42:42 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/12/2024 00:42:42 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 }
05/12/2024 00:42:42 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9600/tokenizer_config.json
05/12/2024 00:42:42 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9600/special_tokens_map.json
05/12/2024 00:42:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0820, 'learning_rate': 2.8038e-08, 'epoch': 2.96}
05/12/2024 00:43:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0760, 'learning_rate': 2.6164e-08, 'epoch': 2.96}
05/12/2024 00:43:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0850, 'learning_rate': 2.4354e-08, 'epoch': 2.96}
05/12/2024 00:43:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.1067, 'learning_rate': 2.2609e-08, 'epoch': 2.96}
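The learning-rate values logged above fall smoothly toward zero in a way that is consistent with a cosine decay schedule. The spot check below is a sketch under the assumption of a peak learning rate of 5e-05 and 9,750 total optimization steps with no warmup; the schedule type itself is not stated in these lines:

import math

PEAK_LR = 5e-05       # assumed peak learning rate for this run
TOTAL_STEPS = 9750    # assumed total optimization steps for this run

def cosine_lr(step, peak=PEAK_LR, total=TOTAL_STEPS):
    # Plain cosine decay from peak to zero, no warmup (assumed schedule).
    return peak * 0.5 * (1.0 + math.cos(math.pi * step / total))

# The exact step behind each logged value is not recorded here, so small offsets are expected.
print(cosine_lr(9200))  # ~3.9e-07 vs. 3.9440e-07 logged around checkpoint-9200
print(cosine_lr(9600))  # ~2.9e-08 vs. 2.9978e-08 logged around checkpoint-9600
print(cosine_lr(9748))  # ~5.2e-12 vs. 5.1911e-12 in the final logged entry
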
05/12/2024 00:43:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.2352, 'learning_rate': 2.0929e-08, 'epoch': 2.96}
05/12/2024 00:43:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1577, 'learning_rate': 1.9314e-08, 'epoch': 2.96}
05/12/2024 00:43:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1403, 'learning_rate': 1.7763e-08, 'epoch': 2.96}
05/12/2024 00:44:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0896, 'learning_rate': 1.6278e-08, 'epoch': 2.97}
05/12/2024 00:44:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.2366, 'learning_rate': 1.4857e-08, 'epoch': 2.97}
05/12/2024 00:44:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2478, 'learning_rate': 1.3501e-08, 'epoch': 2.97}
05/12/2024 00:44:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.2521, 'learning_rate': 1.2210e-08, 'epoch': 2.97}
05/12/2024 00:44:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2750, 'learning_rate': 1.0984e-08, 'epoch': 2.97}
05/12/2024 00:45:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.0939, 'learning_rate': 9.8222e-09, 'epoch': 2.97}
05/12/2024 00:45:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.1264, 'learning_rate': 8.7258e-09, 'epoch': 2.98}
05/12/2024 00:45:20 - INFO - llmtuner.extras.callbacks - {'loss': 1.0411, 'learning_rate': 7.6941e-09, 'epoch': 2.98}
05/12/2024 00:45:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.2356, 'learning_rate': 6.7274e-09, 'epoch': 2.98}
05/12/2024 00:45:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.1390, 'learning_rate': 5.8255e-09, 'epoch': 2.98}
05/12/2024 00:45:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.1943, 'learning_rate': 4.9885e-09, 'epoch': 2.98}
05/12/2024 00:46:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.2553, 'learning_rate': 4.2164e-09, 'epoch': 2.98}
05/12/2024 00:46:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1807, 'learning_rate': 3.5091e-09, 'epoch': 2.98}
05/12/2024 00:46:12 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9700
05/12/2024 00:46:13 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/12/2024 00:46:13 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 }
05/12/2024 00:46:13 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9700/tokenizer_config.json
05/12/2024 00:46:13 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/checkpoint-9700/special_tokens_map.json
05/12/2024 00:46:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.2475, 'learning_rate': 2.8667e-09, 'epoch': 2.99}
05/12/2024 00:46:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.9980, 'learning_rate': 2.2892e-09, 'epoch': 2.99}
05/12/2024 00:46:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0908, 'learning_rate': 1.7766e-09, 'epoch': 2.99}
05/12/2024 00:46:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.0700, 'learning_rate': 1.3289e-09, 'epoch': 2.99}
05/12/2024 00:47:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1596, 'learning_rate': 9.4607e-10, 'epoch': 2.99}
05/12/2024 00:47:17 - INFO - llmtuner.extras.callbacks - {'loss': 1.0748, 'learning_rate': 6.2812e-10, 'epoch': 2.99}
05/12/2024 00:47:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.3533, 'learning_rate': 3.7506e-10, 'epoch': 3.00}
05/12/2024 00:47:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.0902, 'learning_rate': 1.8688e-10, 'epoch': 3.00}
05/12/2024 00:47:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.1668, 'learning_rate': 6.3591e-11, 'epoch': 3.00}
05/12/2024 00:47:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.1766, 'learning_rate': 5.1911e-12, 'epoch': 3.00}
05/12/2024 00:47:58 - INFO - transformers.trainer - Training completed. Do not forget to share your model on huggingface.co/models =)
05/12/2024 00:47:58 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27
05/12/2024 00:47:59 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /home/stu1/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/a8977699a3d0820e80129fb3c93c20fbd9972c41/config.json
05/12/2024 00:47:59 - INFO - transformers.configuration_utils - Model config LlamaConfig { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 8192, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 500000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.40.2", "use_cache": true, "vocab_size": 128256 }
05/12/2024 00:47:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/tokenizer_config.json
05/12/2024 00:47:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27/special_tokens_map.json
05/12/2024 00:47:59 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
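With training complete, the final LoRA adapter and tokenizer files sit in saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27. A minimal inference sketch follows; it assumes the adapter was written in the standard PEFT format and that meta-llama/Meta-Llama-3-8B-Instruct is available locally, and the prompt and generation settings are illustrative rather than taken from this run:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER_DIR = "saves/LLaMA3-8B/lora/train_2024-05-11-18-42-27"  # final save directory from this log

# The tokenizer was saved alongside the adapter; it could equally be loaded from BASE_MODEL.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_DIR)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_DIR)
model.eval()
# merged = model.merge_and_unload()  # optionally fold the LoRA weights into the base model

messages = [{"role": "user", "content": "Give three tips for staying healthy."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))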