build: 3785 (64c6af31) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_loader: loaded meta data with 34 key-value pairs and 290 tensors from Qwen2.5-0.5B-Instruct-IMat-GGUF/Qwen2.5-0.5B-Instruct.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 0.5B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 0.5B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-0...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 0.5B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-0.5B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 24
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 7
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q8_0:  169 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 896
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_head           = 14
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 128
llm_load_print_meta: n_embd_v_gqa     = 128
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4864
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 494.03 M
llm_load_print_meta: model size       = 500.79 MiB (8.50 BPW)
llm_load_print_meta: general.name     = Qwen2.5 0.5B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.25 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:   CPU buffer size = 137.94 MiB
llm_load_tensors: CUDA0 buffer size = 500.84 MiB
...........................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 6.00 MiB
llama_new_context_with_model: KV self size = 6.00 MiB, K (f16): 3.00 MiB, V (f16): 3.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.58 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 298.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 2.76 MiB
llama_new_context_with_model: graph nodes  = 846
llama_new_context_with_model: graph splits = 2

system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 138.654 ms
compute_imatrix: computing over 128 chunks with batch_size 512
compute_imatrix: 0.36 seconds per pass - ETA 0.75 minutes
[1]7.9682,[2]5.9445,[3]5.8811,[4]7.0440,[5]6.8317,[6]6.3818,[7]7.1962,[8]7.1075,[9]7.7926,[10]7.3545,[11]7.1359,[12]7.8729,[13]8.8051,[14]9.0799,[15]9.8729,[16]10.4115,[17]10.6743,[18]11.3420,[19]10.8674,[20]10.9033,[21]11.1268,[22]11.1346,[23]10.9051,[24]11.2866,[25]11.5903,[26]11.4937,[27]11.8234,[28]12.1064,[29]12.5883,[30]12.5654,[31]12.0887,[32]11.5623,[33]11.2204,[34]11.0625,[35]10.8576,[36]10.8785,[37]10.9343,[38]11.1171,[39]11.1783,[40]11.4766,[41]11.5545,[42]12.0473,[43]12.5054,[44]12.9404,[45]13.2580,[46]13.4466,[47]13.2806,[48]13.3244,[49]13.4145,[50]13.4653,[51]13.3043,[52]13.3726,[53]13.6527,[54]13.8243,[55]14.0258,[56]14.1177,[57]14.1280,[58]14.1928,[59]14.2089,[60]14.2025,[61]14.1319,[62]14.0815,[63]14.1511,[64]14.2516,[65]14.1177,[66]14.0853,[67]14.0350,[68]13.8819,[69]13.7514,[70]13.7061,[71]13.5941,[72]13.5378,[73]13.5276,[74]13.3511,[75]13.2085,[76]13.0566,[77]12.9796,[78]12.9334,[79]12.8661,[80]12.7301,[81]12.7526,[82]12.7203,[83]12.6196,[84]12.6556,[85]12.6450,[86]12.5210,[87]12.4733,[88]12.4713,[89]12.5217,[90]12.5637,[91]12.5665,[92]12.4282,[93]12.3183,[94]12.1736,[95]12.0442,[96]11.9314,[97]11.8035,[98]11.6834,[99]11.6557,[100]11.6743,[101]11.7112,[102]11.8812,[103]12.0334,[104]12.1496,[105]12.3488,[106]12.4719,[107]12.5221,[108]12.4606,[109]12.4409,[110]12.4464,[111]12.4179,[112]12.3578,[113]12.3950,[114]12.4549,[115]12.4594,[116]12.4827,[117]12.4992,[118]12.5449,[119]12.5426,[120]12.5279,[121]12.5243,[122]12.4539,[123]12.5254,[124]12.6199,[125]12.7012,[126]12.8118,[127]12.8985,[128]12.9894,
Final estimate: PPL = 12.9894 +/- 0.20056

llama_perf_context_print:        load time =     961.75 ms
llama_perf_context_print: prompt eval time =   17565.37 ms / 65536 tokens (    0.27 ms per token,  3730.98 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =   19120.52 ms / 65537 tokens
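
Several of the reported figures can be cross-checked against the metadata above; a quick sanity pass, using only values taken from the log itself:

  n_gqa        = n_head / n_head_kv        = 14 / 2   = 7
  n_embd_k_gqa = n_head_kv * n_embd_head_k = 2 * 64   = 128
  KV self size = 2 (K+V) * n_layer * n_ctx * n_embd_k_gqa * 2 bytes (f16)
               = 2 * 24 * 512 * 128 * 2 B  = 6.00 MiB (3.00 MiB each for K and V)
  model size   = 494.03 M params * 8.50 BPW / 8 bits per byte ≈ 500.8 MiB
  prompt eval  = 128 chunks * 512 tokens = 65536 tokens; 65536 / 17.565 s ≈ 3731 t/s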
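
For reference, a log of this shape is produced by llama.cpp's llama-imatrix tool; the bracketed per-chunk numbers are the running perplexity over the calibration text, reported as a byproduct of collecting the importance matrix. A minimal sketch of an equivalent invocation follows; the calibration file and output path are assumed names, since the log confirms only the model path, the 25/25-layer GPU offload, and the default 512-token chunks:

  # sketch only: calibration.txt and imatrix.dat are assumed names, not from the log
  ./llama-imatrix \
      -m Qwen2.5-0.5B-Instruct-IMat-GGUF/Qwen2.5-0.5B-Instruct.Q8_0.gguf.hardlink.gguf \
      -f calibration.txt \
      -o imatrix.dat \
      -ngl 99    # offload everything; the log shows 25/25 layers on the GPU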