---
base_model: google/gemma-2-2b-jpn-it
language:
- ja
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- mlc-ai
- MLC-Weight-Conversion
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC
This model was converted from [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) and compiled with MLC-LLM using q4f32_1 quantization.
The conversion was done using the [MLC-Weight-Conversion](https://huggingface.co/spaces/mlc-ai/MLC-Weight-Conversion) space.
To run this model, first install [MLC-LLM](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
You can then chat with the model from your terminal:
```bash
mlc_llm chat HF://AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC
```
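You can also serve the model through MLC-LLM's OpenAI-compatible REST API. The sketch below assumes the server's default address (`http://127.0.0.1:8000`); adjust the host and port for your setup:
```bash
# Start an OpenAI-compatible server backed by this model
mlc_llm serve HF://AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC

# In another terminal, send a chat completion request
# (a Japanese prompt, since the model is tuned for Japanese)
curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "HF://AMKCode/gemma-2-2b-jpn-it-q4f32_1-MLC",
        "messages": [
          {"role": "user", "content": "こんにちは!自己紹介してください。"}
        ]
      }'
```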
For more information on how to use MLC-LLM, please visit the MLC-LLM [documentation](https://llm.mlc.ai/docs/index.html).