Suparious committed on
Commit 4ea11b8
1 Parent(s): b4fa5bf

Update README.md

Files changed (1)
  1. README.md +32 -2
README.md CHANGED
@@ -1,5 +1,20 @@
  ---
+ base_model:
+ - ResplendentAI/Paradigm_7B
+ - jeiku/selfbot_256_mistral
+ - ResplendentAI/Paradigm_7B
+ - jeiku/Theory_of_Mind_Mistral
+ - ResplendentAI/Paradigm_7B
+ - jeiku/Alpaca_NSFW_Shuffled_Mistral
+ - ResplendentAI/Paradigm_7B
+ - ResplendentAI/Paradigm_7B
+ - jeiku/Luna_LoRA_Mistral
+ - ResplendentAI/Paradigm_7B
+ - jeiku/Re-Host_Limarp_Mistral
  library_name: transformers
+ license: apache-2.0
+ language:
+ - en
  tags:
  - 4-bit
  - AWQ
@@ -10,6 +25,21 @@ pipeline_tag: text-generation
  inference: false
  quantized_by: Suparious
  ---
- #
+ # ResplendentAI/Aura_v3_7B AWQ
 
- **UPLOAD IN PROGRESS**
+ - Model creator: [ResplendentAI](https://huggingface.co/ResplendentAI)
+ - Original model: [Aura_v3_7B](https://huggingface.co/ResplendentAI/Aura_v3_7B)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/V_DYIcPMJ5_ijanQW_ap2.png)
+
+ ## Model Summary
+
+ Aura v3 is an improvement with a significantly more steerable writing style. Out of the box it will prefer poetic prose, but if instructed, it can adopt a more approachable style. This iteration includes erotica, RP data, and NSFW pairs to provide a more compliant mindset.
+
+ I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05, as this model can get carried away with prose at higher temperatures. That said, the prose of this model is distinct from the GPT 3.5/4 variety and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
+
+ If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer responses.
+
+ This model responds best to ChatML for multiturn conversations.
+
+ This model, like all other Mistral-based models, is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.
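
The card recommends ChatML for multiturn conversations. As a rough sketch of what that looks like in practice — the helper function and the example messages below are illustrative, not part of this repository — a ChatML prompt can be assembled like this:

```python
# Hypothetical sketch of the ChatML conversation format the card recommends.
# <|im_start|>/<|im_end|> and the role names are the standard ChatML
# delimiters; the system and user messages here are invented examples.
# The card also suggests sampling with temperature <= 1.5 and Min P = 0.05.

def build_chatml_prompt(turns, add_generation_prompt=True):
    """Render (role, content) pairs as a single ChatML prompt string."""
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>" for role, content in turns]
    if add_generation_prompt:
        # Leave the final assistant turn open so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    ("system", "You are Aura, a poetic but approachable assistant."),
    ("user", "Describe a quiet morning by the sea."),
])
```

In real use you would pass a string like this to your inference backend (or rely on the tokenizer's built-in chat template, if one is bundled with the model).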