TheBloke committed on
Commit
62cbb30
1 Parent(s): 6053f6b

Upload README.md

Files changed (1)
  1. README.md +78 -40
README.md CHANGED
@@ -47,20 +47,17 @@ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is
47
 
48
  The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
49
 
50
- As of August 24th 2023, llama.cpp and KoboldCpp support GGUF. Other third-party clients and libraries are expected to add support very soon.
51
-
52
- Here is a list of clients and libraries that are known to support GGUF:
53
- * [llama.cpp](https://github.com/ggerganov/llama.cpp)
54
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41!
55
-
56
- Here is a list of clients and libraries, along with their expected timeline for GGUF support. Where possible a link to the relevant issue or PR is provided:
57
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), awaiting llama-cpp-python support.
58
- * [LM Studio](https://lmstudio.ai/), in active development - hoped to be ready by August 25th-26th.
59
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), will work as soon as ctransformers or llama-cpp-python is updated.
60
- * [ctransformers](https://github.com/marella/ctransformers), [development will start soon](https://github.com/marella/ctransformers/issues/102).
61
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [in active development](https://github.com/abetlen/llama-cpp-python/issues/628).
62
- <!-- README_GGUF.md-about-gguf end -->
63
 
 
64
  <!-- repositories-available start -->
65
  ## Repositories available
66
 
@@ -78,6 +75,7 @@ Here is a list of clients and libraries, along with their expected timeline for
78
  {prompt}
79
 
80
  ### RESPONSE:
 
81
  ```
82
 
83
  <!-- prompt-template end -->
@@ -86,9 +84,7 @@ Here is a list of clients and libraries, along with their expected timeline for
86
 
87
  These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
88
 
89
- As of August 24th 2023 they are now compatible with KoboldCpp, release 1.41 and later.
90
-
91
- They are not yet compatible with any other third-party UIs, libraries or utilities, but this is expected to change very soon.
92
 
93
  ## Explanation of quantisation methods
94
  <details>
@@ -110,16 +106,22 @@ Refer to the Provided Files table below to see what files use which methods, and
110
 
111
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
112
  | ---- | ---- | ---- | ---- | ---- | ----- |
113
- | [nous-puffin-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q2_K.gguf) | Q2_K | 2 | 29.11 GB| 31.61 GB | smallest, significant quality loss - not recommended for most purposes |
114
- | [nous-puffin-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.75 GB| 32.25 GB | very small, high quality loss |
115
- | [nous-puffin-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.10 GB| 35.60 GB | very small, high quality loss |
 
116
  | [nous-puffin-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
117
- | [nous-puffin-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 38.99 GB| 41.49 GB | small, greater quality loss |
118
- | [nous-puffin-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.38 GB| 43.88 GB | medium, balanced quality - recommended |
119
  | [nous-puffin-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
120
  | [nous-puffin-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
121
- | nous-puffin-70b.Q6_K.gguf | q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
122
- | nous-puffin-70b.Q8_0.gguf | q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |
123
 
124
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
125
 
@@ -157,25 +159,23 @@ del nous-puffin-70b.Q8_0.gguf-split-a nous-puffin-70b.Q8_0.gguf-split-b
157
  ```
158
 
159
  </details>
160
-
161
-
162
  <!-- README_GGUF.md-provided-files end -->
163
 
164
  <!-- README_GGUF.md-how-to-run start -->
165
- ## How to run in `llama.cpp`
166
 
167
  Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
168
 
169
- For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
170
 
171
  ```
172
- ./main -t 10 -ngl 32 -m nous-puffin-70b.q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
173
  ```
174
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
175
 
176
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
177
 
178
- Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically. If they are not, or if you need to change them manually, you can use `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
179
 
180
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
181
 
@@ -184,6 +184,44 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
184
  ## How to run in `text-generation-webui`
185
 
186
  Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
187
  <!-- README_GGUF.md-how-to-run end -->
188
 
189
  <!-- footer start -->
@@ -209,7 +247,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
209
 
210
  **Special thanks to**: Aemon Algiz.
211
 
212
- **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
213
 
214
 
215
  Thank you to all my generous patrons and donaters!
@@ -236,7 +274,7 @@ Special thank you to Emozilla for assisting with training experimentations and b
236
 
237
  ## Model Training
238
 
239
- Redmond-Puffin 70B is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
240
 
241
  Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
242
 
@@ -262,7 +300,7 @@ Optional reccomended pre-prompt / system prompt:
262
  Although full benchmarks have not yet been completed for Puffin,
263
  Original Puffin 13B and Hermes-2 13B both beat previous SOTA for GPT4ALL benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.
264
 
265
- Overall, for general-purpose zero-shot and/or single-turn instructions, Hermes will likely be the way to go. Puffin may be preferred for creative long-conversation interactions, like having Puffin play a character or help brainstorm creative ideas or concepts that make contextual sense within an already deep conversation.
266
 
267
  Thank you to reddit user WolframRavenwolf for the comprehensive analysis and comparison of Puffin and Hermes here: https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/
268
 
@@ -270,13 +308,13 @@ Thank you to the comprehensive analysis and comparison of Puffin and Hermes by r
270
 
271
  ![puffin](https://i.imgur.com/P0MsN8B.png)
272
 
273
- ![puffin](https://i.imgur.com/8EO3ThV.png)
274
 
275
- ![puffin](https://i.imgur.com/5IWolFw.png)
276
 
277
- ![puffin](https://i.imgur.com/TQui8m7.png)
278
 
279
- ![puffin](https://i.imgur.com/tderIfl.png)
280
 
281
  ## Notable Features:
282
 
@@ -292,13 +330,13 @@ Thank you to the comprehensive analysis and comparison of Puffin and Hermes by r
292
 
293
  ## Future Plans
294
 
295
- This is a relatively early build amongst the grand plans for the future of Puffin!
296
 
297
  Current limitations: some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2, along with other improvements.
298
 
299
  ## How you can help!
300
 
301
- In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
302
 
303
  If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
304
 
@@ -309,7 +347,7 @@ As of Puffins release, it achieves a new SOTA for the GPT4All benchmarks! Suppla
309
 
310
  Previous SOTA: Hermes - 68.8
311
  New SOTA: Puffin - 69.9 (+1.1)
312
-
313
  Puffin 13B supplants Hermes-2 for the #1 spot in Arc-E, HellaSwag and Winogrande!
314
 
315
  Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of Big Bench and AGIEval, so it's highly recommended you give it a try as well!
 
47
 
48
  The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
49
 
50
+ Here is a list of clients and libraries that are known to support GGUF:
51
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp).
52
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
53
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
54
+ * [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on Windows (NVidia and AMD) and macOS.
55
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
56
+ * [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
57
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
58
+ * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
59
 
60
+ <!-- README_GGUF.md-about-gguf end -->
61
  <!-- repositories-available start -->
62
  ## Repositories available
63
 
 
75
  {prompt}
76
 
77
  ### RESPONSE:
78
+
79
  ```
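
For example, a fully filled-in prompt for this model would look like the following (the request text is only an illustration):

```
### HUMAN:
Write a story about llamas.

### RESPONSE:
```

The model's completion is then generated after the final `### RESPONSE:` line.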
80
 
81
  <!-- prompt-template end -->
 
84
 
85
  These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
86
 
87
+ They are now also compatible with many third-party UIs and libraries - please see the list at the top of the README.
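
If you build llama.cpp yourself and want to confirm that your checkout already contains the required commit, one quick check (run inside a full, non-shallow clone of the llama.cpp repository; this is a convenience suggestion, not part of the original instructions) is:

```
git merge-base --is-ancestor 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9 HEAD && echo "GGUF-capable llama.cpp"
```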
 
 
88
 
89
  ## Explanation of quantisation methods
90
  <details>
 
106
 
107
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
108
  | ---- | ---- | ---- | ---- | ---- | ----- |
109
+ | [nous-puffin-70b.Q6_K.gguf-split-b](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q6_K.gguf-split-b) | Q6_K | 6 | 19.89 GB| 22.39 GB | very large, extremely low quality loss |
110
+ | [nous-puffin-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
111
+ | [nous-puffin-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
112
+ | [nous-puffin-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
113
  | [nous-puffin-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
114
+ | [nous-puffin-70b.Q8_0.gguf-split-b](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q8_0.gguf-split-b) | Q8_0 | 8 | 36.59 GB| 39.09 GB | very large, extremely low quality loss - not recommended |
115
+ | [nous-puffin-70b.Q6_K.gguf-split-a](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q6_K.gguf-split-a) | Q6_K | 6 | 36.70 GB| 39.20 GB | very large, extremely low quality loss |
116
+ | [nous-puffin-70b.Q8_0.gguf-split-a](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q8_0.gguf-split-a) | Q8_0 | 8 | 36.70 GB| 39.20 GB | very large, extremely low quality loss - not recommended |
117
+ | [nous-puffin-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
118
+ | [nous-puffin-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
119
+ | [nous-puffin-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
120
+ | [nous-puffin-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
121
  | [nous-puffin-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
122
  | [nous-puffin-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
123
+ | nous-puffin-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
124
+ | nous-puffin-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
125
 
126
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
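
As a rough illustration of the trade-off (assuming the 80 transformer layers of a Llama 2 70B model, and that layers are roughly equal in size): running the 41.42 GB Q4_K_M file with `-ngl 40` offloads about half of the layers, so very roughly 20 GB of weights move into VRAM while the remaining ~20 GB, plus context overhead, stay in system RAM.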
127
 
 
159
  ```
160
 
161
  </details>
 
 
162
  <!-- README_GGUF.md-provided-files end -->
163
 
164
  <!-- README_GGUF.md-how-to-run start -->
165
+ ## Example `llama.cpp` command
166
 
167
  Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
168
 
169
+ For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
170
 
171
  ```
172
+ ./main -t 10 -ngl 32 -m nous-puffin-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### HUMAN:\n{prompt}\n\n### RESPONSE:"
173
  ```
174
+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
175
 
176
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
177
 
178
+ Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
179
 
180
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
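
For example, a chat-mode invocation (the same command as above with `-p` swapped for `-i -ins`; adjust `-t` and `-ngl` to your hardware) would be:

```
./main -t 10 -ngl 32 -m nous-puffin-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```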
181
 
 
184
  ## How to run in `text-generation-webui`
185
 
186
  Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
187
+
188
+ ## How to run from Python code
189
+
190
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
191
+
192
+ ### How to load this model from Python using ctransformers
193
+
194
+ #### First install the package
195
+
196
+ ```bash
197
+ # Base ctransformers with no GPU acceleration
198
+ pip install "ctransformers>=0.2.24"
199
+ # Or with CUDA GPU acceleration
200
+ pip install "ctransformers[cuda]>=0.2.24"
201
+ # Or with ROCm GPU acceleration
202
+ CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
203
+ # Or with Metal GPU acceleration for macOS systems
204
+ CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
205
+ ```
206
+
207
+ #### Simple example code to load one of these GGUF models
208
+
209
+ ```python
210
+ from ctransformers import AutoModelForCausalLM
211
+
212
+ # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
213
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/Nous-Puffin-70B-GGUF", model_file="nous-puffin-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
214
+
215
+ print(llm("AI is going to"))
216
+ ```
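
### How to load this model from Python using llama-cpp-python

The README shows a ctransformers example above; here is an equivalent minimal sketch for llama-cpp-python (version 0.1.79 or later, per the compatibility list at the top). The parameter values are illustrative, not prescriptive:

```python
from llama_cpp import Llama

# n_gpu_layers is the number of layers to offload to GPU; set it to 0 for CPU-only use.
# The model path assumes the GGUF file has been downloaded to the current directory.
llm = Llama(
    model_path="nous-puffin-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,
)

# Use the model's prompt template (### HUMAN: / ### RESPONSE:) for best results.
output = llm(
    "### HUMAN:\nWrite a story about llamas\n\n### RESPONSE:",
    max_tokens=512,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```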
217
+
218
+ ## How to use with LangChain
219
+
220
+ Here are guides on using llama-cpp-python or ctransformers with LangChain (a minimal ctransformers sketch follows the links below):
221
+
222
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
223
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
224
+
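
For instance, a minimal LangChain + ctransformers sketch (class and parameter names follow the LangChain CTransformers integration linked above; the values shown are illustrative) might look like:

```python
from langchain.llms import CTransformers

# config keys are passed through to ctransformers; set gpu_layers to 0 for CPU-only use.
llm = CTransformers(
    model="TheBloke/Nous-Puffin-70B-GGUF",
    model_file="nous-puffin-70b.Q4_K_M.gguf",
    model_type="llama",
    config={"gpu_layers": 50, "temperature": 0.7, "max_new_tokens": 256},
)

print(llm("### HUMAN:\nExplain what GGUF is in one paragraph.\n\n### RESPONSE:"))
```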
225
  <!-- README_GGUF.md-how-to-run end -->
226
 
227
  <!-- footer start -->
 
247
 
248
  **Special thanks to**: Aemon Algiz.
249
 
250
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
251
 
252
 
253
  Thank you to all my generous patrons and donaters!
 
274
 
275
  ## Model Training
276
 
277
+ Redmond-Puffin 70B is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
278
 
279
  Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
280
 
 
300
  Although full benchmarks have not yet been completed for Puffin,
301
  Original Puffin 13B and Hermes-2 13B both beat previous SOTA for GPT4ALL benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.
302
 
303
+ Overall, for general-purpose zero-shot and/or single-turn instructions, Hermes will likely be the way to go. Puffin may be preferred for creative long-conversation interactions, like having Puffin play a character or help brainstorm creative ideas or concepts that make contextual sense within an already deep conversation.
304
 
305
  Thank you to reddit user WolframRavenwolf for the comprehensive analysis and comparison of Puffin and Hermes here: https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/
306
 
 
308
 
309
  ![puffin](https://i.imgur.com/P0MsN8B.png)
310
 
311
+ ![puffin](https://i.imgur.com/8EO3ThV.png)
312
 
313
+ ![puffin](https://i.imgur.com/5IWolFw.png)
314
 
315
+ ![puffin](https://i.imgur.com/TQui8m7.png)
316
 
317
+ ![puffin](https://i.imgur.com/tderIfl.png)
318
 
319
  ## Notable Features:
320
 
 
330
 
331
  ## Future Plans
332
 
333
+ This is a relatively early build amongst the grand plans for the future of Puffin!
334
 
335
  Current limitations: some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2, along with other improvements.
336
 
337
  ## How you can help!
338
 
339
+ In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
340
 
341
  If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
342
 
 
347
 
348
  Previous SOTA: Hermes - 68.8
349
  New SOTA: Puffin - 69.9 (+1.1)
350
+
351
  Puffin 13B supplants Hermes-2 for the #1 spot in Arc-E, HellaSwag and Winogrande!
352
 
353
  Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of Big Bench and AGIEval, so it's highly recommended you give it a try as well!