---
base_model: qblocks/mistral_7b_norobots
datasets:
- HuggingFaceH4/no_robots
inference: false
library_name: peft
license: apache-2.0
model_creator: MonsterAPI
model_name: Mistral 7B Norobots
model_type: mistral
prompt_template: '<|system|> <|user|> {prompt} <|assistant|> {{response}}
'
quantized_by: TheBloke
tags:
- code
- instruct
- llama2
---
# Mistral 7B Norobots - FP16
- Model creator: [MonsterAPI](https://huggingface.co/qblocks)
- Original model: [Mistral 7B Norobots](https://huggingface.co/qblocks/mistral_7b_norobots)
## Description
This repo contains PyTorch-format fp16 model files for [MonsterAPI's Mistral 7B Norobots](https://huggingface.co/qblocks/mistral_7b_norobots).
It is the result of either merging a LoRA adapter into the base model, or converting the source repository to float16.
These files were produced using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mistral_7b_norobots-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/mistral_7b_norobots-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/mistral_7b_norobots-GGUF)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/mistral_7b_norobots-fp16)
* [MonsterAPI's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/qblocks/mistral_7b_norobots)
## Prompt template: NoRobots
```
<|system|> <|user|> {prompt} <|assistant|> {{response}}
```
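As a minimal sketch, the template above can be filled in with plain string formatting before tokenization. The `build_prompt` helper and the example prompt below are illustrative, not part of this repo:

```python
def build_prompt(user_prompt: str) -> str:
    """Format a user message with the NoRobots special tokens.

    The model is expected to generate the assistant answer
    after the trailing <|assistant|> tag.
    """
    return f"<|system|> <|user|> {user_prompt} <|assistant|> "

prompt = build_prompt("Write a haiku about autumn.")
```

The resulting string is what you would pass to the tokenizer; the model then completes the text after `<|assistant|>`.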
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
# Original model card: MonsterAPI's Mistral 7B Norobots
### Finetuning Overview:
**Model Used:** mistralai/Mistral-7B-v0.1
**Dataset:** HuggingFaceH4/no_robots
#### Dataset Insights:
[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
#### Finetuning Details:
Using [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:
- Was highly cost-effective.
- Completed in 36 minutes 27 seconds for 1 epoch on an A6000 48GB GPU.
- Cost `$1.212` for the entire epoch.
#### Hyperparameters & Additional Details:
- **Epochs:** 1
- **Cost Per Epoch:** $1.212
- **Total Finetuning Cost:** $1.212
- **Model Path:** mistralai/Mistral-7B-v0.1
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 4
- **lora r:** 32
- **lora alpha:** 64
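The LoRA hyperparameters above roughly correspond to a `peft` configuration like the following. This is a hedged sketch only: `lora_dropout` and the implied target modules are assumptions, since the card does not document them.

```python
from peft import LoraConfig

# Sketch of a LoRA config matching the listed hyperparameters.
# lora_dropout is an illustrative guess; the original finetuning
# run does not document it.
lora_config = LoraConfig(
    r=32,               # "lora r" from the card
    lora_alpha=64,      # "lora alpha" from the card
    lora_dropout=0.05,  # assumption, not stated above
    bias="none",
    task_type="CAUSAL_LM",
)
```

Learning rate (0.0002) and gradient accumulation steps (4) belong to the trainer arguments rather than the LoRA config itself.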
#### Prompt Structure
```
<|system|> <|user|> [USER PROMPT] <|assistant|> [ASSISTANT ANSWER]
```
#### Train loss:
![eval loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/WDbw92-Vmuc7QttRHvJU6.png)