Can't run the latest GGMLs? Check branch previous_llama for GGMLs compatible with older llama.cpp and UIs.
#3 opened by TheBloke
I just added GGMLs compatible with the old llama.cpp quantisation method. You can find them in the previous_llama branch.
So if you're unable to use the latest files, e.g. because you're using text-generation-webui or some other UI that hasn't updated yet, you can now use the files in that branch.
Then when your UI updates, choose a GGML from the main branch instead.
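If you're downloading programmatically, here's a minimal sketch using the huggingface_hub library's `hf_hub_download`, which takes a `revision` argument for the branch. The filename below is a placeholder, not a confirmed file in the repo; substitute an actual `.bin` from the branch's file list:

```python
from huggingface_hub import hf_hub_download

# For older llama.cpp / UIs that haven't updated yet:
# fetch from the previous_llama branch.
path = hf_hub_download(
    repo_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGML",
    filename="Wizard-Vicuna-13B-Uncensored.ggml.q4_0.bin",  # placeholder name
    revision="previous_llama",
)
print(path)

# Once your UI supports the new quantisation format, drop `revision`
# (or set revision="main") to get the current files instead.
```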
Or follow the instructions here to run the models in the new format: https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML/discussions/1#64624c5a904bbc4cf2df8122
Support for the new format has now been added to mainline text-generation-webui, so I will close this discussion.
TheBloke changed discussion status to closed