Apply for community grant: Academic project (gpu)
Hello, I'm Xueqing Wu from UCLA, and this Space is for our arXiv paper https://arxiv.org/abs/2406.13444. I received an L4 GPU a few weeks ago. (Thank you again!) Sorry to make another request, but is it possible to set the sleep time to never? It seems I can no longer edit the settings.
@xqwu
Well, we don't set the sleep time of granted Spaces to longer than 1 hour when the granted hardware is a normal GPU, so we can keep our infra costs down. We don't own the hardware for Spaces ourselves; we pay other service providers like AWS for it, so running Spaces without ever sleeping is too expensive for us as well. We hope you understand.
BTW, would it be possible to migrate this Space to ZeroGPU? On ZeroGPU, the sleep time is dynamic, meaning it depends on several factors, but it's usually longer than 1 hour, so the UX should be better on Zero.
Ah, I just noticed that you left a comment here as well. (Although I have special privileges to see private Spaces on the Hub, mentions from private Spaces don't appear in my Hub inbox, so there was no way for me to notice them.)
Hi, sorry to make another request, but is it possible to set the sleep time to never? It's taking too long to rebuild the Space every time, and I cannot edit the settings any more. @hysts
I think one of the reasons your Space takes so long to start is that some models are baked into the Docker image of your Space. https://huggingface.co/spaces/VDebugger/VDebugger-generalist-for-VQA/blob/e20ef71e9912b6200399c13f7321f5dfdfd9b41c/Dockerfile#L36-L43
It's usually a lot faster to download models at startup using the `huggingface_hub` library (docs), because `hf_transfer` parallelizes the download process, while `docker pull` is not parallelized. So changing that would speed up the startup process.
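For reference, here's a rough sketch of what that startup download could look like, assuming a Python entry point. The repo id is just a placeholder, not one of the models your Space actually uses:

```python
# Minimal sketch (not taken from the Space): fetch model weights at app startup
# with huggingface_hub instead of baking them into the Docker image.
import os

# Enable hf_transfer for parallelized downloads.
# Must be set before huggingface_hub is imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Download (or reuse from the local cache) the full model repo.
# "your-org/your-model" is a placeholder repo id.
model_dir = snapshot_download(repo_id="your-org/your-model")
print(f"Model weights available at {model_dir}")
```

You'd also need to add `hf_transfer` to your `requirements.txt` (or `pip install hf_transfer` in the Dockerfile) and remove the model-download steps from the Docker image itself.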