---
base_model: rinna/gemma-2-baku-2b-it
language:
  - ja
  - en
license: gemma
tags:
  - gemma2
  - conversational
  - mlx
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
base_model_relation: merge
---

# thr3a/gemma-2-baku-2b-it-mlx

The model [thr3a/gemma-2-baku-2b-it-mlx](https://huggingface.co/thr3a/gemma-2-baku-2b-it-mlx) was converted to MLX format from [rinna/gemma-2-baku-2b-it](https://huggingface.co/rinna/gemma-2-baku-2b-it) using mlx-lm version 0.19.0.
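The conversion command is not recorded in this card; the snippet below is a minimal sketch of how such a conversion could be reproduced with the `mlx_lm` Python API, assuming default (non-quantized) settings and an arbitrary local output directory name:

```python
# Sketch: reproduce the MLX conversion locally.
# "gemma-2-baku-2b-it-mlx" is an arbitrary local output directory,
# not a path taken from the original card.
from mlx_lm import convert

convert(
    "rinna/gemma-2-baku-2b-it",         # source Hugging Face repo
    mlx_path="gemma-2-baku-2b-it-mlx",  # where the MLX weights are written
)
```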

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("thr3a/gemma-2-baku-2b-it-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in the
# chat format the model was instruction-tuned on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
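Since the base model is instruction-tuned for Japanese, a more representative call uses a Japanese prompt. The sketch below also passes `max_tokens` to bound the response length; the question text is an arbitrary example, not from the original card:

```python
# Sketch: Japanese chat prompt with an explicit generation budget.
# The question text is an illustrative example chosen for this sketch.
from mlx_lm import load, generate

model, tokenizer = load("thr3a/gemma-2-baku-2b-it-mlx")

messages = [{"role": "user", "content": "西田幾多郎とはどんな人物ですか？"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens caps the length of the generated reply.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```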