GPTQ-Models in this Leaderboard
Regarding: @clefourrier "Re: the GPTQ models, would you be so kind as to open a new issue so I can check this tomorrow"
It appears that there are several GPTQ-quantized models in this leaderboard's queue.
I don't know whether this leaderboard is capable of running GPTQ models. If it isn't, I would suggest adding AutoGPTQ https://github.com/PanQiWei/AutoGPTQ to it, as it builds on top of transformers and works with all models transformers supports.
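For illustration, this is roughly what loading a GPTQ checkpoint with AutoGPTQ looks like; the model ID below is a placeholder, not an actual model from the queue:

```python
# Sketch: loading a GPTQ-quantized checkpoint with AutoGPTQ.
# "some-org/some-model-GPTQ" is a placeholder model ID.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "some-org/some-model-GPTQ"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)

# from_quantized handles the dequantization setup that a plain
# AutoModel.from_pretrained() call cannot do for GPTQ weights.
model = AutoGPTQForCausalLM.from_quantized(
    model_id, device="cuda:0", use_safetensors=True
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```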
If that is not an option or not wanted, then I would suggest removing the GPTQ models from the queue and adding a mechanism that prevents submitting such models, as they clog up the queue and make it a mess.
I would also recommend, if you decide to evaluate GPTQ models, adding a new column or some other indication of whether a model is fp16 or GPTQ (or anything in between). This needs to be made clear because results can vary by a significant margin between 4-bit and fp16.
Thank you very much
Hi! Thank you for opening this issue.
Just checked, and you are right, they won't work with the leaderboard out of the box: these models cannot be launched with AutoModel.from_pretrained() (which is a prerequisite for submission, as indicated in the About tab).
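For anyone who wants to verify this prerequisite before submitting, a check along these lines reproduces the failure (the model ID is a placeholder):

```python
# Sketch: the submission prerequisite is that the model loads via AutoModel.
# GPTQ checkpoints fail here, which is why they error out in the queue.
from transformers import AutoModel

try:
    model = AutoModel.from_pretrained("some-org/some-model-GPTQ")  # placeholder
except Exception as err:
    print(f"Not submittable to the leaderboard: {err}")
```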
We don't plan on extending the leaderboard to other libraries at the moment, as we have other priorities, but thank you for the reference.
These models will have almost no impact on the queue as they will fail in less than 2 min, so we won't add a mechanism to reject them (as filtering on the name could reject other models accidentally, and we won't preload models in the submission box).
We will add a column about precision type anyway, because it's important information that we need.
Thanks for the time and info you shared! 🤗
Hi @Wubbbi !
We now have GPTQ support! :)
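For later readers: with a recent transformers release (and optimum and auto-gptq installed), a GPTQ checkpoint should load through the standard API, roughly like this (placeholder model ID):

```python
# Sketch, assuming `pip install optimum auto-gptq` alongside a recent
# transformers: GPTQ checkpoints load through the standard from_pretrained path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-model-GPTQ"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```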