Quant Infos

  • Quants generated with an importance matrix (imatrix) to reduce quantization loss
  • GGUFs & imatrix generated from the bf16 weights for minimal accuracy loss
  • Wide coverage of GGUF quant types, from Q8_0 down to IQ1_S
  • Quantized with llama.cpp commit fabf30b4c4fca32e116009527180c252919ca922 (master as of 2024-05-20)
  • Imatrix generated with this multi-purpose dataset:

    ./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
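For intuition, Q8_0 (the largest quant type listed above) stores weights in blocks that share a single scale, with each weight rounded to an int8. The sketch below illustrates that round-trip only; llama.cpp's real kernels differ (fp16 scales, fixed 32-wide packed blocks), and imatrix quants additionally weight rounding error by activation-importance statistics.

```python
# Simplified sketch of Q8_0-style block quantization. Illustrative only:
# llama.cpp's actual implementation uses fp16 scales and packed layouts.

def q8_0_quantize(block):
    """Map a block of floats to int8 values plus one shared scale."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return 0.0, [0] * len(block)
    scale = amax / 127.0
    qs = [max(-127, min(127, round(x / scale))) for x in block]
    return scale, qs

def q8_0_dequantize(scale, qs):
    """Reconstruct approximate floats from the int8 values and scale."""
    return [q * scale for q in qs]

if __name__ == "__main__":
    block = [0.5, -1.0, 0.25, 0.0]
    scale, qs = q8_0_quantize(block)
    recon = q8_0_dequantize(scale, qs)
    # Reconstruction error is bounded by half a quantization step.
    assert all(abs(a - b) <= scale / 2 + 1e-12 for a, b in zip(block, recon))
```

Lower-bit types (Q4, IQ2, IQ1, ...) trade a coarser grid for smaller files, which is where the importance matrix helps most.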
    

Original Model Card:

πŸ™ GitHub β€’ πŸ‘Ύ Discord β€’ 🐀 Twitter β€’ πŸ’¬ WeChat
πŸ“ Paper β€’ πŸ™Œ FAQ β€’ πŸ“— Learning Hub

Intro

Yi-1.5 is an upgraded version of Yi. It is continually pre-trained from Yi on a high-quality corpus of 500B tokens and then fine-tuned on 3M diverse fine-tuning samples.

Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction following, while maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.

Model     Context Length    Pre-trained Tokens
Yi-1.5    4K                3.6T

Models

Benchmarks

  • Chat models

Yi-1.5-34B-Chat is on par with or exceeds larger models in most benchmarks.

    Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

  • Base models

Yi-1.5-34B is on par with or exceeds larger models in some benchmarks.

    Yi-1.5-9B is the top performer among similarly sized open-source models.

Quick Start

To get up and running with Yi-1.5 models quickly, see the README.
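Yi chat models use a ChatML-style prompt format (an assumption here; verify against the chat template shipped with the model repo). Most runtimes apply the embedded template automatically, but when driving a raw completion API the prompt can be assembled with a sketch like this:

```python
# Sketch of a ChatML-style prompt builder for Yi chat models.
# Assumption: verify the exact template against the model repo.

def build_prompt(messages):
    """Wrap each message in <|im_start|>/<|im_end|> tags and open an
    assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

if __name__ == "__main__":
    print(build_prompt([{"role": "user", "content": "Hi, who are you?"}]))
```

Runtimes such as llama.cpp and llama-cpp-python can usually read the chat template embedded in the GGUF, so manual formatting like this is only needed for raw text-completion endpoints.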

GGUF · 8.83B params · llama architecture

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model: qwp4w3hyb/Yi-1.5-9B-Chat-16K-iMat-GGUF