---
base_model: google/gemma-2-2b-jpn-it
language:
- multilingual
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- nlp
- code
quantized_by: ymcki
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

Original model: https://huggingface.co/google/gemma-2-2b-jpn-it

## Prompt format

```
<start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model

```

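If you are not using `apply_chat_template` (shown later in this card), the turn markers above can be assembled by hand. A minimal helper, hypothetical and simply mirroring the template shown:

```python
def format_gemma_prompt(user_message: str) -> str:
    # Gemma-2 turn format: a user turn followed by an opened model turn.
    # There is no system role in this format.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(format_gemma_prompt("Write a hello world program"))
```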
Note that this model does not support a system prompt.

This is an abliterated model of [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it), created using the [abliteration method](https://medium.com/@mlabonne/uncensor-any-llm-with-abliteration-d30148b7d43e) described by mlabonne.

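As a rough illustration of the idea (not the exact code used for this model), abliteration estimates a "refusal direction" as the difference between mean hidden-state activations on harmful versus harmless prompts, then removes the component along that direction. A minimal NumPy sketch on synthetic activations:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Synthetic hidden states standing in for real layer activations:
# "harmful" ones are shifted along one axis to mimic a refusal feature.
harmful = rng.normal(size=(16, d)) + 3.0 * np.eye(d)[0]
harmless = rng.normal(size=(16, d))

# Refusal direction: normalized difference of the two activation means.
refusal_dir = harmful.mean(axis=0) - harmless.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(hidden, direction):
    # Subtract each hidden state's projection onto the refusal direction.
    return hidden - np.outer(hidden @ direction, direction)

h = rng.normal(size=(4, d))
h_ablated = ablate(h, refusal_dir)
print(np.abs(h_ablated @ refusal_dir).max())  # effectively zero
```

In the real method the same projection is folded into the weights of the chosen layer, which is why a single layer (18 here, 17 in the companion model) is selected.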
Layer 18 of the original model was chosen for abliteration.
I also created another abliterated model based on layer 17 for comparison.

This model is uploaded here to be evaluated by the Open LLM Leaderboard to see how brain damaged it is compared to the original model.

ORPO fine-tuning is currently underway to see if it can regain its sanity. You can play with this model first, or wait until I am done with the fine-tuning.

## How to run this model

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "gemma-2-2b-jpn-it-abliterated-18"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can download the whole repository to the current directory:

```
huggingface-cli download ymcki/gemma-2-2b-jpn-it-abliterated-18 --include "*" --local-dir ./
```

## Credits

Thanks to mlabonne for describing his abliteration method.