---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- finetune
- chatml
- generated_from_trainer
model-index:
  - name: Senzu-7B-v0.1-DPO
    results: []
license: apache-2.0
base_model: NeuralNovel/Senzu-7B-v0.1
datasets:
  - practical-dreamer/RPGPT_PublicDomain-alpaca
  - shuyuej/metamath_gsm8k
  - NeuralNovel/Neural-DPO
language:
  - en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: NeuralNovel
model_name: Senzu 7B 0.1 DPO
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
---

# NeuralNovel/Senzu-7B-v0.1-DPO

- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Senzu-7B-v0.1](https://huggingface.co/NeuralNovel/Senzu-7B-v0.1)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/FXt-g2q8JE-l77_gp23T3.jpeg)

## Model Details

This model is Senzu-7B-v0.1, a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), further trained with DPO on the Neural-DPO dataset.

The model excels at character roleplay and responds accurately to a wide variety of complex questions.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Senzu-7B-v0.1-DPO-AWQ"
system_message = "You are Senzu, incarnated as a powerful AI."

# Load the quantized model, its tokenizer, and a streamer for live output
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens using the ChatML template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output (streamed to stdout by the TextStreamer)
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, and require an NVIDIA GPU. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (a minimal loading sketch follows this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
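
As an illustration, here is a minimal vLLM sketch for this repository. It assumes a vLLM build with AWQ support; the prompt text and sampling settings are illustrative, not tuned.

```python
from vllm import LLM, SamplingParams

# Load the AWQ-quantized weights directly; vLLM handles the 4-bit kernels at runtime
llm = LLM(model="solidrust/Senzu-7B-v0.1-DPO-AWQ", quantization="awq", dtype="half")

# Build a ChatML-formatted prompt matching the template below
prompt = (
    "<|im_start|>system\nYou are Senzu, incarnated as a powerful AI.<|im_end|>\n"
    "<|im_start|>user\nWhat is DPO training, in one sentence?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```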

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
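
If the repository's tokenizer config ships this ChatML template as a chat template (an assumption; check `tokenizer_config.json`), Transformers can render it for you instead of formatting the string by hand:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/Senzu-7B-v0.1-DPO-AWQ")

messages = [
    {"role": "system", "content": "You are Senzu, incarnated as a powerful AI."},
    {"role": "user", "content": "Summarise AWQ quantization in one sentence."},
]

# Produces the ChatML string shown above, ending with the assistant header
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```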