---
base_model: google/gemma-2-2b-jpn-it
language:
- multilingual
datasets:
- mlabonne/orpo-dpo-mix-40k
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- nlp
- code
quantized_by: ymcki
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
|
|
|
Original model: https://huggingface.co/google/gemma-2-2b-jpn-it
|
|
|
## Prompt format
|
|
|
```
<start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model
```
|
|
|
Note that this model does not support a system prompt.
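If your application relies on a system instruction, a common workaround is to fold it into the first user turn before applying the chat template. A minimal sketch (the helper below is hypothetical, not part of this model's API):

```py
def fold_system_into_user(messages):
    """Merge a leading system message into the first user message,
    since Gemma chat templates reject the system role."""
    if messages and messages[0]["role"] == "system":
        system, first, rest = messages[0], messages[1], messages[2:]
        merged = {"role": "user",
                  "content": system["content"] + "\n\n" + first["content"]}
        return [merged] + rest
    return messages

chat = fold_system_into_user([
    {"role": "system", "content": "Answer in Japanese."},
    {"role": "user", "content": "Write a hello world program"},
])
```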
|
|
|
This is an abliterated model of [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using the [method](https://medium.com/@mlabonne/uncensor-any-llm-with-abliteration-d30148b7d43e) described by mlabonne.
|
|
|
Layer 17 of the original model was chosen for abliteration.
I also created a layer-18 abliterated model for comparison.
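For reference, abliteration estimates a "refusal direction" from the difference in hidden-state activations between harmful and harmless prompts at the chosen layer, then projects that direction out of the model's weights. A minimal sketch of the idea, assuming `harmful_hidden` and `harmless_hidden` are layer-17 hidden states you have already collected (this illustrates the general technique, not the exact script used for this model):

```py
import torch

def refusal_direction(harmful_hidden: torch.Tensor,
                      harmless_hidden: torch.Tensor) -> torch.Tensor:
    # Hidden states have shape (num_prompts, hidden_dim); the refusal
    # direction is the normalized difference of the two means.
    direction = harmful_hidden.mean(dim=0) - harmless_hidden.mean(dim=0)
    return direction / direction.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Orthogonalize a weight matrix of shape (hidden_dim, in_dim) against
    # the direction: W' = (I - d d^T) W, so the layer can no longer
    # write along the refusal direction.
    return weight - torch.outer(direction, direction @ weight)
```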
|
|
|
ORPO fine-tuning was performed for four epochs.
|
|
|
| Epoch | loss | eval_loss |
| ----- | ---- | --------- |
| 1 | 10.51274610161781342 | 11.023366928100586 |
| 2 | 10.09700682163238566 | 10.434176445007324 |
| 3 | 10.35771694183349566 | 10.179500579833984 |
| 4 | 10.82988178133964582 | 10.084120750427246 |
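For reference, below is a minimal sketch of what such an ORPO run could look like with TRL's `ORPOTrainer` on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). The hyperparameters are illustrative, not the exact values used for this model, and recent TRL versions may name the tokenizer argument `processing_class`:

```py
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "ymcki/gemma-2-2b-jpn-it-abliterated-17"  # abliterated model to heal
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Preference data with prompt/chosen/rejected columns
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="gemma-2-2b-jpn-it-abliterated-17-ORPO",
    num_train_epochs=4,  # matches the four epochs reported above
    per_device_train_batch_size=1,  # illustrative
    learning_rate=8e-6,  # illustrative
)

trainer = ORPOTrainer(model=model, args=config,
                      train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```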
|
|
|
The fine-tuned model is uploaded here to be evaluated by the Open LLM Leaderboard, to see if the brain damage suffered by the non-ORPO model can be healed.
|
|
|
## Benchmark (100.0*raw scores only)
|
|
|
Click on a model name to go to the raw-score JSON generated by the Open LLM Leaderboard.
|
|
|
| Model | Average | IFEval | BBH | Math Lv5 | GPQA | MUSR | MMLU-PRO |
| ----- | ------- | ------ | --- | -------- | ---- | ---- | -------- |
| [gemma-2-2b-jpn-it](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
| gemma-2-2b-jpn-it-abliterated-17-ORPO | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
| [gemma-2-2b-jpn-it-abliterated-17](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-17/results_2024-10-17T11-26-10.721815.json) | 16.74 | 0.0 | 29.13 | 0.0 | 25.92 | 33.73 | 11.68 |
| [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-16T07-58-03.781979.json) | 16.74 | 0.0 | 29.13 | 0.0 | 25.92 | 33.73 | 11.68 |
|
|
|
Indeed, the abliterated models are quite dumbed down relative to the original. Interestingly, both abliterated models have identical Open LLM Leaderboard results.
|
|
|
## How to run this model
|
|
|
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ymcki/gemma-2-2b-jpn-it-abliterated-17-ORPO"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
# Format the conversation with Gemma's chat template and append the
# generation prompt (<start_of_turn>model) so the model replies next.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
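To actually generate a reply, you can continue from the prompt built above; this is a minimal sketch, and `max_new_tokens=256` is an arbitrary example value:

```py
# Tokenize the formatted prompt and move it onto the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                         skip_special_tokens=True)
print(reply)
```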
|
|
|
## Downloading using huggingface-cli
|
|
|
First, make sure you have huggingface-cli installed:
|
|
|
```
pip install -U "huggingface_hub[cli]"
```
|
|
|
Then, you can download all of the model files to a local directory:
|
|
|
```
huggingface-cli download ymcki/gemma-2-2b-jpn-it-abliterated-17-ORPO --include "*" --local-dir ./
```
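If you only want specific files, you can narrow the `--include` pattern instead of downloading everything; the pattern below is just an illustration:

```
huggingface-cli download ymcki/gemma-2-2b-jpn-it-abliterated-17-ORPO --include "*.safetensors" --local-dir ./
```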
|
|
|
## Credits
|
|
|
Thank you mlabonne for describing his abliteration method.
|
|