gpt4-x-alpaca-13b-native-4bit-128g / gpt4-x-alpaca-13b-ggml-q4_1-from-gptq-4bit-128g
added GGML quantization for CUDA model
4ef20dd