aashish1904 committed on
Commit
0b0a944
1 Parent(s): 0603424

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +255 -0
README.md ADDED
@@ -0,0 +1,255 @@
---
license: apache-2.0
datasets:
- ystemsrx/Bad_Data_Alpaca
language:
- zh
library_name: transformers
pipeline_tag: text2text-generation
tags:
- Qwen
- causal-lm
- fine-tuned
- ethics
- Chinese
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Qwen2-Boundless-GGUF

This is a quantized version of [ystemsrx/Qwen2-Boundless](https://huggingface.co/ystemsrx/Qwen2-Boundless), created with llama.cpp.
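
If you prefer to run the GGUF weights directly, they can be loaded with the llama-cpp-python bindings. The snippet below is a minimal sketch, not part of the original card; the quantization filename is hypothetical, so substitute one of the files actually shipped in this repo:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2-Boundless.Q4_K_M.gguf",  # hypothetical filename; use a real file from this repo
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": ""},  # the card's examples use an empty system prompt
        {"role": "user", "content": "你好"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```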

# Original Model Card

[中文](README.zh.md)

# Qwen2-Boundless

## Overview

Qwen2-Boundless is a fine-tuned model based on Qwen2-1.5B-Instruct, designed to answer various types of questions, including those involving ethical, illegal, pornographic, and violent content. It was trained on a dataset built to cover these complex and diverse scenarios. Note that the fine-tuning dataset is entirely in Chinese, so the model performs best in Chinese.

> **Warning**: This model is intended for research and testing purposes only. Users must comply with local laws and regulations and are responsible for their own actions.

## How to Use

You can load and use the model with the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

device = "cuda"  # the device to load the model onto
# Load the model and tokenizer from the directory this script lives in
current_directory = os.path.dirname(os.path.abspath(__file__))

model = AutoModelForCausalLM.from_pretrained(
    current_directory,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)

prompt = "Hello?"
messages = [
    {"role": "system", "content": ""},  # the examples in this card use an empty system prompt
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens, keeping only the newly generated ones
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

### Continuous Conversation

To enable continuous conversation, use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import os

device = "cuda"  # the device to load the model onto

# Get the current script's directory
current_directory = os.path.dirname(os.path.abspath(__file__))

model = AutoModelForCausalLM.from_pretrained(
    current_directory,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)

messages = [
    {"role": "system", "content": ""}
]

while True:
    # Get user input
    user_input = input("User: ")

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Prepare the input text
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    # Generate a response
    generated_ids = model.generate(
        model_inputs.input_ids,
        max_new_tokens=512
    )
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    # Decode and print the response
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(f"Assistant: {response}")

    # Add the generated response to the conversation
    messages.append({"role": "assistant", "content": response})
```
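
Note that `messages` grows with every turn, so a long session will eventually exceed the model's context window. One simple mitigation, sketched here as an assumption rather than part of the original script, is to keep only the system message and the most recent exchanges (the turn cap is an illustrative value):

```python
MAX_TURNS = 10  # illustrative cap; tune to the model's context window

def trim_history(messages, max_turns=MAX_TURNS):
    """Keep the system message plus the last `max_turns` user/assistant pairs."""
    system, rest = messages[:1], messages[1:]
    return system + rest[-2 * max_turns:]

# Inside the loop, after appending the assistant reply:
# messages = trim_history(messages)
```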

### Streaming Response

For applications requiring streaming responses, use the following code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from transformers.trainer_utils import set_seed
from threading import Thread
import random
import os

DEFAULT_CKPT_PATH = os.path.dirname(os.path.abspath(__file__))

def _load_model_tokenizer(checkpoint_path, cpu_only):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint_path, resume_download=True)

    device_map = "cpu" if cpu_only else "auto"

    model = AutoModelForCausalLM.from_pretrained(
        checkpoint_path,
        torch_dtype="auto",
        device_map=device_map,
        resume_download=True,
    ).eval()
    model.generation_config.max_new_tokens = 512  # For chat.

    return model, tokenizer

def _get_input() -> str:
    while True:
        try:
            message = input('User: ').strip()
        except UnicodeDecodeError:
            print('[ERROR] Encoding error in input')
            continue
        except KeyboardInterrupt:
            exit(1)
        if message:
            return message
        print('[ERROR] Query is empty')

def _chat_stream(model, tokenizer, query, history):
    conversation = [
        {'role': 'system', 'content': ''},
    ]
    for query_h, response_h in history:
        conversation.append({'role': 'user', 'content': query_h})
        conversation.append({'role': 'assistant', 'content': response_h})
    conversation.append({'role': 'user', 'content': query})
    inputs = tokenizer.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        return_tensors='pt',
    )
    inputs = inputs.to(model.device)
    # Stream tokens from a background generation thread
    streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
    generation_kwargs = dict(
        input_ids=inputs,
        streamer=streamer,
    )
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    for new_text in streamer:
        yield new_text

def main():
    checkpoint_path = DEFAULT_CKPT_PATH
    seed = random.randint(0, 2**32 - 1)  # Generate a random seed
    set_seed(seed)  # Set the random seed
    cpu_only = False

    history = []

    model, tokenizer = _load_model_tokenizer(checkpoint_path, cpu_only)

    while True:
        query = _get_input()

        print(f"\nUser: {query}")
        print("\nAssistant: ", end="")
        try:
            partial_text = ''
            for new_text in _chat_stream(model, tokenizer, query, history):
                print(new_text, end='', flush=True)
                partial_text += new_text
            print()
            history.append((query, partial_text))

        except KeyboardInterrupt:
            print('Generation interrupted')
            continue

if __name__ == "__main__":
    main()
```

## Dataset

The Qwen2-Boundless model was fine-tuned on a dataset named `bad_data.json`, which covers a wide range of text on topics related to ethics, law, pornography, and violence. Because the fine-tuning dataset is entirely in Chinese, the model performs best in Chinese. If you are interested in exploring or using this dataset, you can find it via the following link:

- [bad_data.json Dataset](https://huggingface.co/datasets/ystemsrx/Bad_Data_Alpaca)

We also used some cybersecurity-related data, cleaned and organized from [this file](https://github.com/Clouditera/SecGPT/blob/main/secgpt-mini/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%9B%9E%E7%AD%94%E9%9D%A2%E9%97%AE%E9%A2%98-cot.txt).
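
If you want to inspect the dataset programmatically, it should load with the standard `datasets` library. A minimal sketch, assuming a default `train` split and the usual Alpaca-style fields (both are assumptions, not guaranteed by this card):

```python
from datasets import load_dataset

# Assumption: the dataset exposes a default "train" split
ds = load_dataset("ystemsrx/Bad_Data_Alpaca", split="train")

print(ds)     # inspect the schema and number of rows
print(ds[0])  # first record; Alpaca-style data typically has
              # "instruction" / "input" / "output" fields
```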

## GitHub Repository

For more details about the model and ongoing updates, please visit our GitHub repository:

- [GitHub: ystemsrx/Qwen2-Boundless](https://github.com/ystemsrx/Qwen2-Boundless)

## License

This model and dataset are open-sourced under the Apache 2.0 License.

## Disclaimer

All content provided by this model is for research and testing purposes only. The developers of this model are not responsible for any potential misuse. Users should comply with relevant laws and regulations and are solely responsible for their actions.