Demo inference speed

#3
by maor121 - opened

Hi,

First I want to thank you for this work, the model performance is quite impressive.
Secondly, I am trying to set it up on my machine: an i7 CPU, 32GB RAM, and an NVIDIA GTX 1080 Ti (11GB VRAM).

Using the GPU, one inference takes around 20-30 seconds.
However, I noticed that in your demo here: https://huggingface.co/spaces/dicta-il/dictalm2.0-instruct-demo
inference takes < 1 second, and at the top it says the demo runs on CPU & RAM only (no GPU).
I have tried running on CPU as well, but it is significantly slower than on the GPU.

How is the demo able to achieve such speed? What am I missing?
I thought maybe the demo uses a quantized version, but its link leads here, to the full-precision model, not a quantized one.
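For reference, this is roughly how I load and time it (a minimal sketch, assuming the standard transformers flow; the prompt is just a placeholder):

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dicta-il/dictalm2.0-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~14 GB of weights for a 7B model in fp16,
    device_map="auto",          # more than 11 GB, so some layers get offloaded
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)

start = time.time()
outputs = model.generate(**inputs, max_new_tokens=100)
print(f"generation took {time.time() - start:.1f}s")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```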

Thanks

DICTA: The Israel Center for Text Analysis org

The model is loaded on a separate server; the demo just sends a request to that server, which is why it only requires a CPU.
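In other words, the Space is just a thin client, something like this sketch (the endpoint URL and response schema here are made up for illustration; the real server is not public):

```python
import requests

# Hypothetical endpoint -- the demo forwards the prompt to a remote
# inference server and only renders the response locally.
INFERENCE_URL = "https://example.com/generate"

def generate(prompt: str) -> str:
    resp = requests.post(INFERENCE_URL, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json()["generated_text"]
```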
For running on a GTX 1080 Ti, I recommend using the AWQ quantized model here, which will fit in the 11GB of VRAM.
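Something along these lines should load it (a sketch assuming the transformers + autoawq route; the repo id is my guess at the AWQ build, so check the model card for the exact name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the AWQ build -- verify on the model card.
model_id = "dicta-il/dictalm2.0-instruct-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Requires the autoawq package; the 4-bit weights are only a few GB,
# so the whole model fits on the GPU without CPU offloading.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="cuda:0",
)

inputs = tokenizer("שלום, מה שלומך?", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```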
