---
base_model: ResplendentAI/Aura_v3_7B
inference: false
language:
  - en
library_name: transformers
license: apache-2.0
merged_models:
  - ResplendentAI/Paradigm_7B
  - jeiku/selfbot_256_mistral
  - ResplendentAI/Paradigm_7B
  - jeiku/Theory_of_Mind_Mistral
  - ResplendentAI/Paradigm_7B
  - jeiku/Alpaca_NSFW_Shuffled_Mistral
  - ResplendentAI/Paradigm_7B
  - ResplendentAI/Paradigm_7B
  - jeiku/Luna_LoRA_Mistral
  - ResplendentAI/Paradigm_7B
  - jeiku/Re-Host_Limarp_Mistral
pipeline_tag: text-generation
quantized_by: Suparious
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
---

# ResplendentAI/Aura_v3_7B AWQ


## Model Summary

Aura v3 is an improvement over its predecessors, with a significantly more steerable writing style. Out of the box it prefers poetic prose, but it can adopt a more approachable style when instructed. This iteration includes erotica, RP data, and NSFW pairs to encourage a more compliant mindset.

I recommend keeping the temperature at 1.5 or lower with a Min P value of 0.05, as this model can get carried away with prose at higher temperatures. That said, its prose is distinct from the GPT 3.5/4 style and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
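The recommended settings above (temperature at or below 1.5, Min P of 0.05) can be collected into a sampling configuration. This is a minimal sketch only: the parameter names follow the Hugging Face `transformers` `generate()` keyword arguments, which is an assumption — adjust the names for your inference backend.

```python
# Sampling settings suggested in the summary above, gathered as a plain
# dict. Names assume transformers-style generate() kwargs (assumption).
sampling_params = {
    "do_sample": True,       # enable sampling rather than greedy decoding
    "temperature": 1.5,      # the card recommends 1.5 or lower
    "min_p": 0.05,           # recommended Min P cutoff
    "max_new_tokens": 512,   # illustrative value, not from the card
}

# With a loaded model and tokenized inputs, this would be applied as:
#   model.generate(**inputs, **sampling_params)
print(sampling_params)
```

Lowering the temperature further is a reasonable first step if the model's prose becomes too florid for your use case.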

If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.

This model responds best to ChatML for multiturn conversations.
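Since ChatML is the recommended multiturn format, here is a small sketch of the prompt layout that template produces. The helper function is illustrative, not part of the model's tooling; the role markers follow the standard ChatML convention (`<|im_start|>role ... <|im_end|>`), with a trailing assistant header so the model continues from there.

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts in ChatML layout (illustrative helper)."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Open the assistant turn so generation continues as the assistant.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful roleplay partner."},
    {"role": "user", "content": "Describe the tavern we just entered."},
])
print(prompt)
```

If you load the model through `transformers`, the tokenizer's built-in chat template (via `tokenizer.apply_chat_template`) should produce an equivalent layout when the model ships a ChatML template.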

Like all other Mistral-based models, this model is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.