Request model
Hello mradermacher,
Request GGUF for EpistemeAI2/Fireball-12B-v1.2
queued!
Request model EpistemeAI2/Fireball-12B-v1.13a-philosophers
done!
request model: EpistemeAI2/Athena-codegemma-2-9b-it
queued
request model: EpistemeAI2/Fireball-Alpaca-Llama3.1-8B-Philos
Thank you so much
queued
Thanks
request models please:
EpistemeAI/Fireball-Mistral-Nemo-Instruct-24B-merge-v1
EpistemeAI2/Fireball-Mistral-Nemo-evol-Instruct-24B
queued!
Moved from: EpistemeAI2/Fireball-Mistral-Nemo-evol-Instruct-24B
To:
EpistemeAI2/Fireball-Mistral-Nemo-evol-Instruct-14B
Moved from: EpistemeAI/Fireball-Mistral-Nemo-Instruct-24B-merge-v1
To:
EpistemeAI/Fireball-Mistral-Nemo-Instruct-14B-merge-v1
Request for gguf -
EpistemeAI/Athena-codegemma-2-2b-it
EpistemeAI/Athena-gemma-2-2b-it-Philos
EpistemeAI2/Fireball-Mistral-Nemo-12B-Philos
queued!
thanks
I changed the name from EpistemeAI/Athena-codegemma-2-2b-it to EpistemeAI/Athena-gemma-2-2b-it.
Please update your GGUF to EpistemeAI/Athena-gemma-2-2b-it.
Request model: EpistemeAI2/Mathball-Alpaca-Llama3.1-8B-Philos
Queued!
Thank you so much
Request all GGUF for EpistemeAI2/Fireball-Alpaca-Llama3.1.07-8B-Philos-Math
queued :)
Thanks. :)
Request all GGUF for EpistemeAI2/Fireball-Alpaca-Llama3.1.08-8B-Philos-C-R1. Thanks in advance.
queued!
Thank you so much!
you are always welcome :)
Request GGUF for nvidia/NV-Embed-v2. Thanks in advance
Request for gguf -
Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4
@ayishfeng I need the source model, not another quant. I assume you meant https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct
@ayishfeng It's queued, hope it works.
You can watch the progress of the models at http://hf.tst.eu/status.html
@ayishfeng @nitishraj neither of these models is supported by llama.cpp at this time