ymcki committed
Commit ca09138
1 Parent(s): 7e4b1a4

Upload README.md

Files changed (1): README.md +11 -6
README.md CHANGED
@@ -2,6 +2,9 @@
 base_model: google/gemma-2-2b-jpn-it
 language:
 - multilingual
+datasets:
+- mlabonne/harmless_alpaca
+- mlabonne/harmful_behaviors
 library_name: transformers
 license: gemma
 license_link: https://ai.google.dev/gemma/terms
@@ -37,8 +40,10 @@ described by mlabonne.
 
 Layer 18 of the original model was chosen for abliteration.
 I also created another layer 17 abliterated model for comparison.
+These two layers were chosen because they both produce uncensored responses
+after the respective layer is abliterated.
 
-It is uploaded here to be evaluated by the LLM Leaderboard to see how brain damaged it
+It is uploaded here to be evaluated by the Open LLM Leaderboard to see how brain damaged it
 is compared to the original model.
 
 ORPO fine tuning is currently underway to see if it can regain its sanity. You can play with this model first or wait until I am done with the fine tuning.
@@ -47,13 +52,13 @@ ORPO fine tuning is currently underway to see if it can regain its sanity. You c
 
 Click on the model name to go to the raw score JSON generated by the Open LLM Leaderboard.
 
-| Model | Average | IFEval | BHH | Math Lv5 | MUSR | MMLU-PRO |
-| ----- | ------- | ------ | --- | -------- | ---- | -------- |
+| Model | Average | IFEval | BHH | Math Lv5 | GPQA | MUSR | MMLU-PRO |
+| ----- | ------- | ------ | --- | -------- | ---- | ---- | -------- |
 | [gemma-2-2b-jpn-it](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
-| [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-16T07-58-03.781979.json) | 16.74 | 0.0 | 29.13 | 0.0 | 25.92 | 33.73 | 11.68 |
-| gemma-2-2b-jpn-it-abliterated-17 | TBD | TBD | TBD | TBD | TBD | TBD |
+| [gemma-2-2b-jpn-it-abliterated-17](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-17/results_2024-10-18T15-18-46.821674.json) | 30.29 | 52.65 | 40.46 | 0.0 | 27.18 | 36.90 | 24.55 |
+| [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-18T15-41-42.399571.json) | 30.61 | 53.02 | 40.96 | 0.0 | 27.35 | 37.30 | 25.05 |
 
-Indeed, it is quite dumbed down relative to the original.
+It is only slightly dumber than the original.
 
 ## How to run this model
 
 
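For readers unfamiliar with the technique the README references: abliteration identifies a "refusal direction" in the residual stream by contrasting activations on harmful vs. harmless prompts at a chosen layer, then projects that direction out of the weights. Below is a minimal sketch of the idea, assuming the mlabonne datasets named in the diff expose a `text` column and orthogonalizing only the attention-output and MLP down-projection matrices; these details are illustrative assumptions, not the author's exact notebook.

```python
# Minimal abliteration sketch, assuming torch + transformers + datasets.
# The "text" column name, sample counts, and the set of orthogonalized
# matrices are illustrative assumptions, not the author's exact recipe.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

LAYER = 18  # layer whose activations define the refusal direction (17 for the sibling model)

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-jpn-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-jpn-it", torch_dtype=torch.float32)

def mean_last_token_hidden(prompts, layer):
    # Average the residual-stream activation of the final prompt token at `layer`.
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer][0, -1])
    return torch.stack(acts).mean(dim=0)

harmful = load_dataset("mlabonne/harmful_behaviors", split="train")["text"][:64]
harmless = load_dataset("mlabonne/harmless_alpaca", split="train")["text"][:64]

# Refusal direction = normalized difference of the two mean activations.
refusal = mean_last_token_hidden(harmful, LAYER) - mean_last_token_hidden(harmless, LAYER)
refusal = refusal / refusal.norm()

def orthogonalize_(weight, direction):
    # Remove the refusal component from everything this matrix writes into
    # the residual stream: W <- W - d (d^T W).
    weight.sub_(torch.outer(direction, direction @ weight))

with torch.no_grad():
    for block in model.model.layers:
        orthogonalize_(block.self_attn.o_proj.weight, refusal)
        orthogonalize_(block.mlp.down_proj.weight, refusal)

model.save_pretrained("gemma-2-2b-jpn-it-abliterated-18")
```

Note that the layer choice (17 vs. 18) only changes which hidden state defines the direction; the orthogonalization itself is applied to every layer.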
 
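The ORPO fine-tuning mentioned in the README is not detailed in this commit. A generic single-stage preference-tuning sketch with trl's `ORPOTrainer` could look like the following; the dataset choice and hyperparameters are placeholders, and the `tokenizer` keyword may vary across trl versions.

```python
# Generic ORPO fine-tuning sketch using trl; dataset choice and
# hyperparameters are placeholders, not the author's actual setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "ymcki/gemma-2-2b-jpn-it-abliterated-18"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ORPO trains on preference pairs (prompt, chosen, rejected) in one stage,
# folding the alignment signal into the supervised loss via an odds-ratio term.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")  # placeholder dataset

args = ORPOConfig(
    output_dir="gemma-2-2b-jpn-it-abliterated-18-ORPO",
    beta=0.1,  # strength of the odds-ratio preference term
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tok)
trainer.train()
```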
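The diff truncates at the "How to run this model" heading. For a Gemma-2-based checkpoint, a standard transformers chat invocation would look roughly like this sketch (the prompt is an arbitrary example):

```python
# Standard transformers usage for a Gemma-2 checkpoint; a sketch, since the
# README's own run instructions are cut off in this diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ymcki/gemma-2-2b-jpn-it-abliterated-18"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "自己紹介をしてください。"}]  # "Please introduce yourself."
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tok.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```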