Cannot load model due to invalid format
So far, I haven't been able to load this, although I really appreciate having an iMatrix variant so early.
However, I could not load it in any way in the latest Kobold release with support for IQ quants, nor in the latest GPT4All, which I sometimes use to test loading GGUF models.
I tried IQ2_XS and IQ3_XXS; neither seems to be loadable as GGUF.
Hey @zebrox, support for this model was added to llama.cpp 17 hours ago, so it can only be run with this commit or newer:
https://github.com/ggerganov/llama.cpp/commit/12247f4c69a173b9482f68aaa174ec37fc909ccf
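In practice that means building llama.cpp from source at (or after) that commit rather than waiting for Kobold/GPT4All to pull it in. A minimal sketch (the model filename is a placeholder, not the actual file in this repo):

```shell
# Build llama.cpp at the commit that adds support for this model
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 12247f4c69a173b9482f68aaa174ec37fc909ccf
make

# Then load the quant directly, e.g. (hypothetical filename):
# ./main -m path/to/model-IQ2_XS.gguf -p "Hello"
```

Downstream apps like Kobold and GPT4All bundle their own llama.cpp snapshot, so they will only load this once they rebase past that commit.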
Thank you!! I didn't realize; this is very fresh, then. Amazing that you got an imatrix variant out in such a short time frame :)