ymcki committed
Commit cf95426
1 Parent(s): 40d9c5f

Upload README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -31,7 +31,7 @@ Original model: https://huggingface.co/google/gemma-2-2b-jpn-it

Note that this model does not support a System prompt.

- This is abliterated model of [`google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using the
+ This is an abliterated model of [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using the
[method](https://medium.com/@mlabonne/uncensor-any-llm-with-abliteration-d30148b7d43e)
described by mlabonne.

@@ -45,12 +45,13 @@ ORPO fine tuning is currently underway to see if it can regain its sanity. You c

## Benchmark (100.0*raw scores only)

- Click on the average number to go to the raw score json generated by Open LLM Leaderboard.
+ Click on the model name to go to the raw score json generated by the Open LLM Leaderboard.

| Model | Average | IFEval | BBH | Math Lv5 | GPQA | MUSR | MMLU-PRO |
| ----- | ------- | ------ | --- | -------- | ---- | ---- | -------- |
- | [gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) | [30.82](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
- | [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/google/gemma-2-2b-jpn-it-abliterated-18) | [16.74](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-16T07-58-03.781979.json) | 0.0 | 29.13 | 0.0 | 25.92 | 33.73 | 11.68 |
+ | [gemma-2-2b-jpn-it](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
+ | [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-16T07-58-03.781979.json) | 16.74 | 0.0 | 29.13 | 0.0 | 25.92 | 33.73 | 11.68 |
+ | gemma-2-2b-jpn-it-abliterated-17 | TBD | TBD | TBD | TBD | TBD | TBD | TBD |

Indeed, it is quite dumbed down relative to the original.
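Since the README notes above that the model does not support a System prompt, here is a minimal sketch of one common workaround, assuming the abliterated checkpoints keep the stock Gemma-2 chat template of the base model (which defines only user/model turns and has no system role): fold system-style instructions into the first user turn. The instruction text and question below are illustrative only.

```python
from transformers import AutoTokenizer

# Assumption: the abliterated checkpoints reuse the Gemma-2 chat template of the
# base model, which defines only user/model turns and rejects a system role.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-jpn-it")

system_style_instructions = "Answer concisely in Japanese."  # hypothetical guidance
user_question = "日本の首都はどこですか？"

messages = [
    # No {"role": "system", ...} entry; system-style guidance is folded into
    # the first user message instead.
    {"role": "user", "content": f"{system_style_instructions}\n\n{user_question}"},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected shape of the rendered prompt (Gemma-2 turn markers):
# <bos><start_of_turn>user
# Answer concisely in Japanese.
#
# 日本の首都はどこですか？<end_of_turn>
# <start_of_turn>model
```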
 
 
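On the abliteration method referenced in the diff: as a rough sketch of the general technique described in mlabonne's article (not the exact code, layer choice, or prompt datasets used for this model), a "refusal direction" is estimated as the difference of mean residual-stream activations on harmful versus harmless prompts, and weights that write into the residual stream are orthogonalized against that direction. Tensor shapes and function names here are illustrative assumptions.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means estimate of the refusal direction.

    harmful_acts / harmless_acts: [n_samples, d_model] residual-stream activations
    captured at one layer while the model reads harmful vs. harmless instructions.
    """
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of a layer's output that lies along `direction`.

    weight: [d_model, d_in] matrix whose output is added to the residual stream
    (e.g. an attention or MLP output projection). After the edit, the layer can
    no longer write the refusal direction into the residual stream.
    """
    r = direction / direction.norm()
    return weight - torch.outer(r, r @ weight)
```

In the article's weight-orthogonalization variant, this edit is applied to the token embedding and to every attention and MLP output projection; the sketch omits model loading and the prompt datasets.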
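On the "100.0*raw scores only" heading and the Average column: the table numbers read as the Leaderboard's raw 0-1 scores scaled by 100.0, with Average as their plain mean over the six benchmarks. A quick check against the gemma-2-2b-jpn-it row above, assuming the column between Math Lv5 and MUSR is GPQA (inferred from the standard Open LLM Leaderboard benchmark set, not stated in the original table):

```python
# Raw (0-1) scores implied by the gemma-2-2b-jpn-it row above; the GPQA label
# is an inference, not stated in the original table.
raw_scores = {
    "IFEval": 0.5411,
    "BBH": 0.4143,
    "Math Lv5": 0.0,
    "GPQA": 0.2752,
    "MUSR": 0.3717,
    "MMLU-PRO": 0.2467,
}

scaled = {name: 100.0 * score for name, score in raw_scores.items()}
average = sum(scaled.values()) / len(scaled)
print(f"Average: {average:.2f}")  # 30.82, matching the table's Average column
```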