Cannot Load Model - Gemma 27B
Hi there,
I am trying to load a model but ran into an issue. Could you help me figure out how to solve it? Thanks!
```json
{
  "title": "Failed to load model",
  "cause": "",
  "errorData": {
    "n_ctx": 2048,
    "n_batch": 512,
    "n_gpu_layers": 20
  },
  "data": {
    "memory": {
      "ram_capacity": "127.77 GB",
      "ram_unused": "114.12 GB"
    },
    "gpu": {
      "gpu_names": [
        "NVIDIA GeForce RTX 4070"
      ],
      "vram_recommended_capacity": "11.99 GB",
      "vram_unused": "10.85 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631"
    },
    "app": {
      "version": "0.2.31",
      "downloadsDir": "C:\\Users\\Kar\\.cache\\lm-studio\\models\\"
    },
    "model": {}
  }
}
```
Which size are you attempting to load?
Hi there, I am trying to load the Q_8 and F32 models. They are 30 GB and 108 GB.
Hmm, 20 layers of Q8 probably can't fit on your 4070... can you try offloading fewer layers or using a smaller size?
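For a rough sense of why: assuming Gemma 27B has on the order of 46 transformer layers (the exact count depends on the variant, so treat that as an assumption and check the model card), a 30 GB Q8 file works out to roughly 0.65 GB per layer. 20 offloaded layers is then ~13 GB, more than the 10.85 GB of free VRAM in your report. A quick back-of-envelope sketch:

```python
# Back-of-envelope VRAM estimate; all numbers are rough assumptions
# and ignore KV cache and scratch buffers, which add more on top.
model_size_gb = 30.0   # Q8 file size mentioned in the thread
n_layers = 46          # assumed layer count for Gemma 27B; verify on the model card
vram_free_gb = 10.85   # "vram_unused" from the error report

per_layer_gb = model_size_gb / n_layers
for offload in (20, 16, 12, 8):
    needed = offload * per_layer_gb
    verdict = "fits" if needed < vram_free_gb else "does NOT fit"
    print(f"{offload} layers ≈ {needed:.1f} GB -> {verdict}")
```

By that estimate, something around 12-14 layers should load on the 4070, with headroom left for the KV cache.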
Hi, yes I am happy to! I am a bit new to this. May I know how to offload fewer layers? Also, I plan to replace the 4070 with a 3090 24GB. Do you think that might be better?
yes that'll be better for running larger models
there's a slider in LM Studio on the right hand side that you should be able to drag lower
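if you're curious what the slider actually controls: it maps to the `n_gpu_layers` value in the error payload above, which is a llama.cpp loader setting. Just as a hedged sketch of the same knobs in llama-cpp-python (the model filename here is hypothetical, substitute your actual GGUF file):

```python
from llama_cpp import Llama

# Same settings that appear in the error payload; the path is hypothetical.
llm = Llama(
    model_path="C:/Users/Kar/.cache/lm-studio/models/gemma-27b.Q8_0.gguf",
    n_ctx=2048,       # context length from the error report
    n_batch=512,      # batch size from the error report
    n_gpu_layers=12,  # fewer than the failing 20; tune down until it loads
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```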
Thanks for the suggestion!
I am now using GPT4All, without GPU acceleration. I want to try LM Studio, but for unknown reasons I cannot run either model in LM Studio, which is super weird. I had a fresh Windows 11 Home edition installed yesterday, and there is barely any software installed (so I assume no conflicts from the software side). Do you know what could be the possible reasons that I cannot run them in LM Studio?