---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
language:
- ja
- en
tags:
- qwen
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
pipeline_tag: text-generation
base_model: rinna/nekomata-7b-instruction
base_model_relation: quantized
---

# `rinna/nekomata-7b-instruction-gguf`

![rinna-icon](./rinna.png)

# Overview

This model is the GGUF version of [`rinna/nekomata-7b-instruction`](https://huggingface.co/rinna/nekomata-7b-instruction). It can be used with [llama.cpp](https://github.com/ggerganov/llama.cpp) for lightweight inference.

Quantization of this model may cause stability issues in GPTQ, AWQ, and GGUF q4_0. We recommend **GGUF q4_K_M** for 4-bit quantization.

See [`rinna/nekomata-7b-instruction`](https://huggingface.co/rinna/nekomata-7b-instruction) for details about the model architecture and training data.

* **Contributors**
  - [Toshiaki Wakatsuki](https://huggingface.co/t-w)
  - [Tianyu Zhao](https://huggingface.co/tianyuz)
  - [Kei Sawada](https://huggingface.co/keisawada)

---

# How to use the model

See [llama.cpp](https://github.com/ggerganov/llama.cpp) for more usage details. An unofficial Python alternative using llama-cpp-python is sketched at the end of this card.

~~~~bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

MODEL_PATH=/path/to/nekomata-7b-instruction-gguf/nekomata-7b-instruction.Q4_K_M.gguf
MAX_N_TOKENS=512
# Instruction: "Translate the following Japanese into English."
PROMPT_INSTRUCTION="次の日本語を英語に翻訳してください。"
# Input: a Japanese paragraph describing large language models.
PROMPT_INPUT="大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"
# Instruction-following template expected by rinna/nekomata-7b-instruction:
# "Below is a combination of an instruction describing a task and input providing
#  context. Write a response that appropriately satisfies the request."
PROMPT="以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n${PROMPT_INSTRUCTION}\n\n### 入力:\n${PROMPT_INPUT}\n\n### 応答:\n"

# If your llama.cpp build does not expand "\n" escapes in the prompt by default,
# add the -e flag to the command below.
./main -m ${MODEL_PATH} -n ${MAX_N_TOKENS} -p "${PROMPT}"
~~~~

---

# Tokenization

Please refer to [`rinna/nekomata-7b`](https://huggingface.co/rinna/nekomata-7b) for tokenization details.

---

# How to cite

```bibtex
@misc{rinna-nekomata-7b-instruction-gguf,
    title = {rinna/nekomata-7b-instruction-gguf},
    author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/nekomata-7b-instruction-gguf}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```

---

# License

[Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)
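
---

# Inference with llama-cpp-python

This card only documents the llama.cpp CLI. As a minimal sketch, the same GGUF file should also load through the llama-cpp-python bindings; this path is not part of the official card, and the model path and generation parameters below are placeholder assumptions.

~~~~python
# Minimal sketch, assuming llama-cpp-python is installed
# (e.g. `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(
    model_path="/path/to/nekomata-7b-instruction.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=2048,  # context window; adjust as needed
)

# Build the same instruction-following template as the bash example above.
instruction = "次の日本語を英語に翻訳してください。"  # "Translate the following Japanese into English."
input_text = "大規模言語モデル(LLM)は、..."  # shortened here; substitute your actual input
prompt = (
    "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。"
    "要求を適切に満たす応答を書きなさい。\n\n"
    f"### 指示:\n{instruction}\n\n"
    f"### 入力:\n{input_text}\n\n"
    "### 応答:\n"
)

# Generate a completion and print the response text.
output = llm(prompt, max_tokens=512)
print(output["choices"][0]["text"])
~~~~

Note that, unlike the shell example, Python string literals expand `\n` escapes directly, so the prompt reaches the model with real newlines.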