Any plans/ideas to convert this to GGUF?
As the title says, is there any plan to convert this to GGUF?
Right, sorry I didn't notice this issue earlier, but as mentioned by @Felladrin there is indeed already a quantized version available in candle-transformers. You can try it out through our phi example in the candle repo by using the --quantized flag. An example can be seen at the bottom of this readme.
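For reference, the readme invocation boils down to running the phi example with that extra flag. The sketch below is from memory rather than copied from the readme, so the exact flags may have changed between candle versions, and the prompt is just an arbitrary placeholder:

```bash
# Run the phi example from the candle repo with the quantized weights.
# Flag names follow the example's readme at the time and may have changed;
# the prompt string is arbitrary.
cargo run --example phi --release -- \
  --prompt "def print_prime(n): " \
  --quantized
```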
@lmz I'm getting this error when trying to load model-v1-q4k.gguf into llama.cpp:

```
llama_model_loader: - type f32: 171 tensors
llama_model_loader: - type q4_K: 98 tensors
error loading model: unknown model architecture: ''
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/app/embeddings.py", line 37, in <module>
    llm = Llama(model_path=path.join("models", model_path, model_fname, ),
  File "/app/llama_cpp/llama.py", line 323, in __init__
    assert self.model is not None
AssertionError
```
@loretoparisi this is actually not designed to work with llama.cpp but with candle; you can see the documentation for this example here. My guess is that getting this to work with llama.cpp is likely not trivial, whereas one of the design goals of candle is to make it easier to try quantization on architectures that are potentially very different from llama.
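For context, loading that file with candle itself looks roughly like the sketch below. This is an assumption-laden sketch rather than the example's exact code: it targets the candle-core / candle-transformers API as it was around the time of this thread (quantized_var_builder, quantized_mixformer), and module paths or signatures may differ in other releases.

```rust
// Rough sketch of loading model-v1-q4k.gguf with candle instead of llama.cpp.
// Assumes candle-core / candle-transformers around the 0.3 series plus anyhow;
// exact paths and signatures may differ in newer versions.
use candle_core::{Device, Tensor};
use candle_transformers::models::mixformer::Config;
use candle_transformers::models::quantized_mixformer::MixFormerSequentialForCausalLM as QMixFormer;
use candle_transformers::quantized_var_builder::VarBuilder;

fn main() -> anyhow::Result<()> {
    // The file stores the quantized tensors; the architecture comes from the
    // Config below rather than from gguf metadata, which is consistent with
    // llama.cpp reporting an empty architecture string.
    let vb = VarBuilder::from_gguf("model-v1-q4k.gguf")?;
    let config = Config::v1(); // phi-1 weights; Config::v1_5() for phi-1.5
    let mut model = QMixFormer::new(&config, vb)?;

    // Run a few arbitrary token ids through the model as a smoke test.
    let input = Tensor::new(&[1u32, 2, 3], &Device::Cpu)?.unsqueeze(0)?;
    let logits = model.forward(&input)?;
    println!("logits shape: {:?}", logits.dims());
    Ok(())
}
```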
@lmz so basically this is not a standard GGUF format, but a candle-specific quantized format.