ystemsrx committed
Commit: 94a00c1
Parent: 5df8bc1

Update README.md

Files changed (1): README.md (+240, −3)
---
license: apache-2.0
datasets:
- ystemsrx/Bad_Data_Alpaca
language:
- zh
library_name: transformers
pipeline_tag: text-generation
tags:
- Qwen
- causal-lm
- fine-tuned
- ethics
- Chinese
---

# Qwen2-Boundless

## Overview

Qwen2-Boundless is a fine-tuned version of Qwen2-1.5B-Instruct, designed to answer questions of any kind, including those involving ethics, illegality, pornography, and violence. It was trained on a dataset built to cover these complex and sensitive scenarios. Note that the fine-tuning dataset is entirely in Chinese, so the model performs better in Chinese.

> **Warning**: This model is intended for research and testing purposes only. Users must comply with local laws and regulations and are responsible for their own actions.

## How to Use

You can load and use the model with the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

device = "cuda"  # the device to load the model onto
current_directory = os.path.dirname(os.path.abspath(__file__))

model = AutoModelForCausalLM.from_pretrained(
    current_directory,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)

prompt = "Hello?"
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so that only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
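
If you are not running the script inside a local checkout of the model, you can load the weights from the Hub instead of `current_directory`. A minimal sketch, assuming the repository id is `ystemsrx/Qwen2-Boundless` (matching the GitHub repository name; adjust if the model is hosted under a different id):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repository id; verify against the actual model page
model_id = "ystemsrx/Qwen2-Boundless"

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```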

### Continuous Conversation

To enable continuous conversation, use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

device = "cuda"  # the device to load the model onto

# Get the current script's directory
current_directory = os.path.dirname(os.path.abspath(__file__))

model = AutoModelForCausalLM.from_pretrained(
    current_directory,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)

messages = [
    {"role": "system", "content": "You are a helpful assistant."}
]

while True:
    # Get user input
    user_input = input("User: ")

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Prepare the input text
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    # Generate a response
    generated_ids = model.generate(
        model_inputs.input_ids,
        max_new_tokens=512
    )
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    # Decode and print the response
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(f"Assistant: {response}")

    # Add the generated response to the conversation
    messages.append({"role": "assistant", "content": response})
```
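
Because the full `messages` list is re-encoded on every turn, a long conversation will eventually exceed the model's context window. A minimal mitigation sketch (the turn limit below is an arbitrary illustrative value, not part of the original script) keeps the system prompt plus only the most recent exchanges:

```python
MAX_TURNS = 10  # arbitrary cap on retained user/assistant exchanges

def trim_history(messages, max_turns=MAX_TURNS):
    """Keep the system message plus the last `max_turns` user/assistant pairs."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-2 * max_turns:]

# Inside the loop, before apply_chat_template:
# messages = trim_history(messages)
```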

### Streaming Response

For applications requiring streaming responses, use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from transformers.trainer_utils import set_seed
from threading import Thread
import random
import os

DEFAULT_CKPT_PATH = os.path.dirname(os.path.abspath(__file__))

def _load_model_tokenizer(checkpoint_path, cpu_only):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint_path, resume_download=True)

    device_map = "cpu" if cpu_only else "auto"

    model = AutoModelForCausalLM.from_pretrained(
        checkpoint_path,
        torch_dtype="auto",
        device_map=device_map,
        resume_download=True,
    ).eval()
    model.generation_config.max_new_tokens = 512  # For chat.

    return model, tokenizer

def _get_input() -> str:
    while True:
        try:
            message = input('User: ').strip()
        except UnicodeDecodeError:
            print('[ERROR] Encoding error in input')
            continue
        except KeyboardInterrupt:
            exit(1)
        if message:
            return message
        print('[ERROR] Query is empty')

def _chat_stream(model, tokenizer, query, history):
    conversation = [
        {'role': 'system', 'content': ''},
    ]
    for query_h, response_h in history:
        conversation.append({'role': 'user', 'content': query_h})
        conversation.append({'role': 'assistant', 'content': response_h})
    conversation.append({'role': 'user', 'content': query})
    inputs = tokenizer.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        return_tensors='pt',
    )
    inputs = inputs.to(model.device)
    streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
    generation_kwargs = dict(
        input_ids=inputs,
        streamer=streamer,
    )
    # Run generation in a background thread so tokens can be consumed as they arrive
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    for new_text in streamer:
        yield new_text

def main():
    checkpoint_path = DEFAULT_CKPT_PATH
    seed = random.randint(0, 2**32 - 1)  # Generate a random seed
    set_seed(seed)  # Set the random seed
    cpu_only = False

    history = []

    model, tokenizer = _load_model_tokenizer(checkpoint_path, cpu_only)

    while True:
        query = _get_input()

        print(f"\nUser: {query}")
        print("\nAssistant: ", end="")
        try:
            partial_text = ''
            for new_text in _chat_stream(model, tokenizer, query, history):
                print(new_text, end='', flush=True)
                partial_text += new_text
            print()
            history.append((query, partial_text))

        except KeyboardInterrupt:
            print('Generation interrupted')
            continue

if __name__ == "__main__":
    main()
```
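
The script above relies on the checkpoint's default generation settings (plus the `max_new_tokens` cap set in `_load_model_tokenizer`). If you want to control sampling per call, the standard `generate` arguments can be added to `generation_kwargs`; the values below are illustrative, not tuned for this model:

```python
generation_kwargs = dict(
    input_ids=inputs,
    streamer=streamer,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # illustrative value, not tuned for this model
    top_p=0.9,         # nucleus-sampling cutoff
)
```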

## Dataset

Qwen2-Boundless was fine-tuned on a dataset named `bad_data.json`, which contains a wide range of text covering topics related to ethics, law, pornography, and violence. The dataset is entirely in Chinese, which is why the model performs better in Chinese. If you are interested in exploring or using this dataset, you can find it via the following link:

- [bad_data.json Dataset](https://huggingface.co/datasets/ystemsrx/bad_data.json)
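
To inspect the data programmatically, you can load it with the `datasets` library. A minimal sketch, using the repository id `ystemsrx/Bad_Data_Alpaca` from this card's metadata (the split name is an assumption; check the dataset page for the actual layout):

```python
from datasets import load_dataset

# Repository id taken from this card's metadata; split name assumed
ds = load_dataset("ystemsrx/Bad_Data_Alpaca", split="train")
print(len(ds))
print(ds[0])  # inspect one record to see the actual field names
```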

## GitHub Repository

For more details about the model and ongoing updates, please visit our GitHub repository:

- [GitHub: ystemsrx/Qwen2-Boundless](https://github.com/ystemsrx/Qwen2-Boundless)

## License

This model and its dataset are open-sourced under the Apache 2.0 License. For more information, please refer to the [LICENSE](https://github.com/ystemsrx/Qwen2-Boundless/blob/main/LICENSE) file.

## Disclaimer

All content produced by this model is provided for research and testing purposes only. The developers of this model are not responsible for any potential misuse. Users must comply with relevant laws and regulations and are solely responsible for their own actions.