Suparious committed 914bdd0 (1 parent: ec993c4)

Update model card
Files changed (1): README.md (+132, -0)
README.md CHANGED
---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- finetune
- chatml
- generated_from_trainer
model-index:
- name: Senzu-7B-v0.1-DPO
  results: []
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
datasets:
- practical-dreamer/RPGPT_PublicDomain-alpaca
- shuyuej/metamath_gsm8k
- NeuralNovel/Neural-DPO
language:
- en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: NeuralNovel
model_name: Senzu 7B 0.1 DPO
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
---

# NeuralNovel/Senzu-7B-v0.1-DPO

- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Senzu-7B-v0.1](https://huggingface.co/NeuralNovel/Senzu-7B-v0.1)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/FXt-g2q8JE-l77_gp23T3.jpeg)

## Model Details

This model is Senzu-7B-v0.1, a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), DPO-trained on the Neural-DPO dataset.

It excels at character roleplay and also responds accurately to a wide variety of complex questions.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Senzu-7B-v0.1-DPO-AWQ"
system_message = "You are Senzu, incarnated as a powerful AI."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Build the ChatML prompt and convert it to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
    "You walk one mile south, one mile west and one mile north. "\
    "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streaming tokens to stdout as they arrive
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
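
The streamer above prints tokens as they are generated; `generation_output` still holds the full sequence of ids. As a small optional addition (not part of the original card), you can decode those ids if you also want the completion as a plain string:

```python
# generation_output contains the prompt plus completion token ids;
# decode and drop special tokens to recover readable text
output_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(output_text)
```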

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the sketch after this list)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
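
As a rough sketch of the plain-Transformers route, the snippet below loads the quantized checkpoint directly with `AutoModelForCausalLM`. It assumes transformers >= 4.35.0, `autoawq`, and `accelerate` are installed, and reuses the repository name assumed in the example above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/Senzu-7B-v0.1-DPO-AWQ"  # assumed repo name, as above

# Transformers detects the AWQ quantization config stored in the checkpoint;
# device_map="auto" (via accelerate) places the weights on available GPUs
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With vLLM, the same checkpoint can typically be served by passing `--quantization awq` to its OpenAI-compatible server, though exact flags vary by version.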

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
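
If the bundled tokenizer ships a ChatML chat template (an assumption worth checking in its `tokenizer_config.json`), the same string can be built programmatically instead of formatted by hand:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/Senzu-7B-v0.1-DPO-AWQ")  # assumed repo name

messages = [
    {"role": "system", "content": "You are Senzu, incarnated as a powerful AI."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]

# tokenize=False returns the rendered ChatML string; add_generation_prompt=True
# appends the final <|im_start|>assistant header so the model continues from it
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```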