---
base_model: google/gemma-2-2b-jpn-it
language:
  - ja
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
  - conversational
  - mlx
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# thr3a/gemma-2-2b-jpn-it-mlx

The model [thr3a/gemma-2-2b-jpn-it-mlx](https://huggingface.co/thr3a/gemma-2-2b-jpn-it-mlx) was converted to MLX format from [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using mlx-lm version 0.19.0.
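
For reference, a conversion like this can be reproduced with the `mlx_lm` Python API. The sketch below is illustrative only: the output path and the choice not to quantize are assumptions, since the card does not state which options were used.

```python
from mlx_lm import convert

# Download the original Hugging Face weights and write an MLX-format copy.
# mlx_path and quantize are assumptions; the exact options used for this
# repo are not stated on the card.
convert(
    "google/gemma-2-2b-jpn-it",
    mlx_path="gemma-2-2b-jpn-it-mlx",
    quantize=False,
)
```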

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (or load from cache) the converted weights and tokenizer
model, tokenizer = load("thr3a/gemma-2-2b-jpn-it-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available,
# since gemma-2-2b-jpn-it is an instruction-tuned (conversational) model
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# verbose=True streams the generated text to stdout as well as returning it
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
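
Since the base model is tuned for Japanese, a Japanese prompt exercises it more directly. The variation below is a minimal sketch: the prompt text is just an illustration, and `max_tokens` (a standard `generate` parameter) caps the length of the reply.

```python
from mlx_lm import load, generate

model, tokenizer = load("thr3a/gemma-2-2b-jpn-it-mlx")

# A Japanese prompt ("Please introduce yourself."), wrapped in the chat template
messages = [{"role": "user", "content": "自己紹介をしてください。"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens bounds the length of the generated reply
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```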