TheBloke committed on
Commit e6f0848
1 Parent(s): 8e4e09c

Update README.md

Files changed (1)
  1. README.md +10 -100
README.md CHANGED
@@ -44,30 +44,21 @@ quantized_by: TheBloke
 
 This repo contains GGUF format model files for [Mistral AI_'s Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
 
- <!-- description end -->
- <!-- README_GGUF.md-about-gguf start -->
- ### About GGUF
-
- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
-
- Here is an incomplete list of clients and libraries that are known to support GGUF:
-
- * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
- * [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI supporting Windows, Linux and macOS, with full GPU acceleration.
- * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Apple Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
- * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
- * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
-
- <!-- README_GGUF.md-about-gguf end -->
+ ## EXPERIMENTAL - REQUIRES LLAMA.CPP PR
+
+ These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
+
+ THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR WITH ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
+
+ To test these GGUFs, please build llama.cpp from the above PR; a sketch of one way to do that follows below.
+
+ I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
+ <!-- description end -->
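For reference, fetching and building the PR locally might look like the following minimal sketch. The `mixtral-pr` branch name is arbitrary, `pull/4406/head` is simply GitHub's standard ref for the PR linked above, and `LLAMA_CUBLAS=1` is llama.cpp's usual CUDA build switch rather than anything specific to this PR:

```shell
# Clone llama.cpp and check out the PR branch (PR #4406, linked above)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git fetch origin pull/4406/head:mixtral-pr
git checkout mixtral-pr

# Build; LLAMA_CUBLAS=1 enables NVidia CUDA acceleration - omit it for a CPU-only build
make LLAMA_CUBLAS=1

# Quick smoke test with a GGUF file from this repo (standard llama.cpp flags)
./main -m mixtral-8x7b-v0.1.Q4_K_M.gguf -p "Once upon a time" -n 128
```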
 <!-- repositories-available start -->
 ## Repositories available
 
- * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mixtral-8x7b-v0.1-AWQ)
+ * AWQ coming soon
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF)
 * [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
@@ -78,18 +69,12 @@ Here is an incomplete list of clients and libraries that are known to support GG
 
 ```
 {prompt}
-
 ```
 
 <!-- prompt-template end -->
 
 
 <!-- compatibility_gguf start -->
- ## Compatibility
-
- These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
-
- They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.
 
 ## Explanation of quantisation methods
 
@@ -140,11 +125,6 @@ The following clients/libraries will automatically download models for you, prov
 * LoLLMS Web UI
 * Faraday.dev
 
- ### In `text-generation-webui`
-
- Under Download Model, you can enter the model repo: TheBloke/Mixtral-8x7B-v0.1-GGUF and below it, a specific filename to download, such as: mixtral-8x7b-v0.1.Q4_K_M.gguf.
-
- Then click Download.
 
 ### On the command line, including multiple files at once
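The body of this section is elided by the diff above. In the GGUF READMEs this series follows, command-line downloads go through the `huggingface-hub` CLI; a hypothetical sketch, reusing the Q4_K_M filename mentioned elsewhere in this README (the exact flags are an assumption, not taken from the diff):

```shell
pip3 install huggingface-hub

# Download a single GGUF file into the current directory
huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GGUF mixtral-8x7b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```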
 
@@ -206,82 +186,12 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
- Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
+ Not supported yet
 
 ## How to run from Python code
 
- You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. I therefore recommend you use llama-cpp-python.
-
- ### How to load this model in Python code, using llama-cpp-python
-
- For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
-
- #### First install the package
-
- Run one of the following commands, according to your system:
-
- ```shell
- # Base llama-cpp-python with no GPU acceleration
- pip install llama-cpp-python
- # With NVidia CUDA acceleration
- CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
- # Or with OpenBLAS acceleration
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
- # Or with CLBlast acceleration
- CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
- # Or with AMD ROCm GPU acceleration (Linux only)
- CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
- # Or with Metal GPU acceleration (macOS only)
- CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
-
- # On Windows, set the CMAKE_ARGS variable in PowerShell before installing; e.g. for NVidia CUDA:
- $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
- pip install llama-cpp-python
- ```
-
- #### Simple llama-cpp-python example code
-
- ```python
- from llama_cpp import Llama
-
- # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = Llama(
-     model_path="./mixtral-8x7b-v0.1.Q4_K_M.gguf",  # Download the model file first
-     n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
-     n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
-     n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
- )
-
- # Simple inference example
- output = llm(
-     "{prompt}",  # Prompt
-     max_tokens=512,  # Generate up to 512 tokens
-     stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
-     echo=True  # Whether to echo the prompt
- )
-
- # Chat Completion API
- llm = Llama(model_path="./mixtral-8x7b-v0.1.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
- llm.create_chat_completion(
-     messages = [
-         {"role": "system", "content": "You are a story writing assistant."},
-         {"role": "user", "content": "Write a story about llamas."}
-     ]
- )
- ```
-
- ## How to use with LangChain
-
- Here are guides on using llama-cpp-python and ctransformers with LangChain:
-
- * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
- * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
-
- <!-- README_GGUF.md-how-to-run end -->
+ Not supported yet
 
 <!-- footer start -->
 <!-- 200823 -->