## Quantizations

Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co/mradermacher/Pulsar_7B-GGUF).
## Formatting

Pulsar_7B works well with Alpaca; it is not a picky model when it comes to formatting, and the Mistral format should be compatible too. The custom chat template from [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) also performs well:
```
{% for message in messages %}{% if message['role'] == 'user' %}{{ '### Instruction:\n' + message['content'] + '\n### Response:\n' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% elif message['role'] == 'system' %}{{ '### System:\n' + message['content'] + '\n' }}{% endif %}{% endfor %}
```
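As a rough sketch of what this template produces, the same logic can be written in plain Python (the `format_chat` helper, the example messages, and the `</s>` default for `eos_token` are illustrative; in practice you would let the tokenizer apply the template via `tokenizer.apply_chat_template`):

```python
def format_chat(messages, eos_token="</s>"):
    """Concatenate role-tagged segments the way the Jinja template above does."""
    parts = []
    for message in messages:
        if message["role"] == "user":
            parts.append("### Instruction:\n" + message["content"] + "\n### Response:\n")
        elif message["role"] == "assistant":
            parts.append(message["content"] + eos_token)
        elif message["role"] == "system":
            parts.append("### System:\n" + message["content"] + "\n")
    return "".join(parts)

# Example conversation (contents are placeholders)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]
print(format_chat(messages))
```

Note that the template emits `### Response:\n` at the end of every user turn, so the prompt for a pending reply ends exactly where the model is expected to continue.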

---

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.