Triangle104 committed on
Commit 8cf9335
1 Parent(s): 1c855f3

Update README.md

Files changed (1): README.md +66 -0
README.md CHANGED
@@ -106,6 +106,72 @@ model-index:
This model was converted to GGUF format from [`Darkknight535/OpenCrystal-12B-L3`](https://huggingface.co/Darkknight535/OpenCrystal-12B-L3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Darkknight535/OpenCrystal-12B-L3) for more details on the model.
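
For reference, a rough local equivalent of what the GGUF-my-repo space automates is sketched below; the filenames and the Q4_K_M quant type are illustrative assumptions, not a list of the quants this repo actually ships.

```bash
# Hypothetical local conversion sketch (paths, filenames and quant type are illustrative)
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the original model, then convert the safetensors weights to GGUF
huggingface-cli download Darkknight535/OpenCrystal-12B-L3 --local-dir OpenCrystal-12B-L3
python llama.cpp/convert_hf_to_gguf.py OpenCrystal-12B-L3 --outtype f16 --outfile opencrystal-12b-l3-f16.gguf

# Quantize the f16 GGUF down to a smaller quant (llama-quantize is built alongside llama.cpp)
llama-quantize opencrystal-12b-l3-f16.gguf opencrystal-12b-l3-q4_k_m.gguf Q4_K_M
```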

---
Model details:

**OpenCrystal-12B-L3**

This is a finetuned language model. (I recommend using this one; v2 and v2.1 are not good enough.)

Rohma
128K??

L3.1 Variant here

**Instruct Template**

Default llama3 instruct and context preset, but here is the one I use: Instruct, Context.
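
For reference, the stock Llama 3 instruct layout that these presets follow looks roughly like this; the system and user text are placeholders, and the exact presets referenced above may differ slightly.

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```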

**Samplers**

Creative:

- Temp: 1.23
- Min P: 0.05
- Repetition Penalty: 1.05

[And everything else neutral]

Normal:

- Temp: 0.6 - 0.8
- Min P: 0.1
- Repetition Penalty: 1.1

[And everything else neutral]
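
If you run the GGUF directly with llama.cpp instead of through a frontend, these sampler values map onto llama-cli flags roughly as follows; the model filename is a placeholder.

```bash
# "Creative" preset (everything else left neutral)
llama-cli -m opencrystal-12b-l3-q4_k_m.gguf \
  --temp 1.23 --min-p 0.05 --repeat-penalty 1.05 \
  -p "Once upon a time"

# "Normal" preset (pick a temperature anywhere in 0.6 - 0.8)
llama-cli -m opencrystal-12b-l3-q4_k_m.gguf \
  --temp 0.7 --min-p 0.1 --repeat-penalty 1.1 \
  -p "Once upon a time"
```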

**Pro Tip**

You can uncheck the Include Names option in SillyTavern to force the model to speak as other characters dynamically. (Not recommended.)

**Features**

- Can speak as other NPCs automatically.
- Creative (swipes are crazy).
- Coherent (sometimes gets horny).
- Output feels like you're using Character.ai.
- Follows prompts better.
- Likes higher context lengths (12K easily tested).
- Can summarize and generate image prompts well [the above image's prompt was generated in a roleplay by this model] (possible due to the llama-3-instruct base).

**Instruct Prompt**

> You're {{char}}, follow {{char}} personality and plot of the story, Don't impersonate as {{user}}, Speak as others NPC except {{user}} when needed. Be Creative, Create various interesting events and situations during the story.
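
Outside SillyTavern, one way to apply this prompt is as the system message of llama-server's OpenAI-compatible chat endpoint. A sketch, with the model file, port, and character/user names treated as placeholders; recent llama-server builds also accept llama.cpp's native sampler fields (such as min_p) in the request body.

```bash
# Start an OpenAI-compatible server (model path and port are placeholders)
llama-server -m opencrystal-12b-l3-q4_k_m.gguf --port 8080

# Send the instruct prompt above (with {{char}} and {{user}} replaced by real names) as the system message
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "<instruct prompt from above, with {{char}} and {{user}} filled in>"},
          {"role": "user", "content": "Hi."}
        ],
        "temperature": 1.23,
        "min_p": 0.05
      }'
```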

**Feedback**

Feedback here

**Open LLM Leaderboard Evaluation Results**

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 20.51 |
| IFEval (0-Shot) | 40.71 |
| BBH (3-Shot) | 31.84 |
| MATH Lvl 5 (4-Shot) | 7.93 |
| GPQA (0-shot) | 7.49 |
| MuSR (0-shot) | 5.74 |
| MMLU-PRO (5-shot) | 29.34 |
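
These scores come from the Open LLM Leaderboard, which runs lm-evaluation-harness; a rough reproduction sketch is below, assuming a recent harness release that ships the leaderboard task group.

```bash
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=Darkknight535/OpenCrystal-12B-L3,dtype=bfloat16 \
  --tasks leaderboard \
  --batch_size auto
```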

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
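
For example (the repo and file names in the second command are placeholders for whichever quant of this conversion you want to run):

```bash
# Install the llama.cpp CLI/server binaries
brew install llama.cpp

# Run a quant straight from the Hub (placeholder repo/file names)
llama-cli --hf-repo <this-GGUF-repo> --hf-file <quant-file>.gguf -p "The meaning to life and the universe is"
```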