How long does this approval process take?

#10
by mitkox - opened

How long does this approval process take?

Do you also see "Your request to access this repo has been successfully submitted, and is pending a review from the repo's authors." in the Files and versions tab, but get a "We can't find that page." error when you try to acknowledge the license on the model card?

Yes, exactly the same loop, and I can't access the files

I was staring at exactly that lol

Okay cool, nothing we can do then; let's wait and see :)

Kaggle approvals take between one and two days. With Llama it was the same: you accepted the consent on Kaggle and within hours you were granted access. Anyway, as a plan B, you can download from Kaggle after accepting the license! (Select the version, then next to "New Notebook" download the model; about 25GB for Gemma 7B.)
[screenshot: image.png]
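
For anyone going the Kaggle route, here is a minimal sketch of the download using the kagglehub package; the exact model handle is an assumption, so copy it from the Kaggle model page:

```python
# Minimal sketch of the Kaggle "plan B" download. Assumes the kagglehub package
# (pip install kagglehub) and a Kaggle account that has already accepted the
# Gemma license. The model handle below is an assumption; copy the exact one
# from the Kaggle model page.
import kagglehub

# Credentials come from ~/.kaggle/kaggle.json (or kagglehub.login() interactively).
path = kagglehub.model_download("google/gemma/pyTorch/7b-it")
print("Model files downloaded to:", path)
```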

I just figured it out: you need to click the consent on the model card page, jumping over the license page on Kaggle; then you can access the files

Just got accepted!

I did a quick Q4_K_M quant of Gemma-2B myself: https://huggingface.co/nopainkiller/Gemma-2B-GGUF/tree/main, but somehow it is not working with llama.cpp; it fails with the error "llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found" (I ran with the latest pull). Hopefully your 7B-IT quant will work.

Google org

Hey all, we had a bug during the launch! You should now get immediate access after going through the accept flow.

osanseviero changed discussion status to closed
Google org

This depends on how your conversion was done. Two things to make sure of: 1) the GGUF arch must be gemma, and 2) there is no output weight in this arch, because it shares the same embedding weights as the input layer. The error you see suggests the arch is likely not set / copied correctly by the converter.
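
If it helps with debugging, here is a rough sketch of how to check both points in a converted file using the gguf Python package that ships with llama.cpp; field-access details may vary across versions, and the filename is a placeholder:

```python
# Rough sketch: check the two things above in a converted file. Assumes the
# `gguf` Python package from llama.cpp (pip install gguf); field-access details
# may vary across versions. The filename is a placeholder.
from gguf import GGUFReader

reader = GGUFReader("gemma-2b.Q4_K_M.gguf")

# 1) The architecture must be "gemma".
arch_field = reader.fields["general.architecture"]
print("arch:", bytes(arch_field.parts[-1]).decode("utf-8"))

# 2) Gemma ties input and output embeddings: a correct conversion has
#    token_embd.weight but no separate output.weight tensor, and llama.cpp
#    reuses the embedding matrix for the output projection.
names = {t.name for t in reader.tensors}
print("token_embd.weight present:", "token_embd.weight" in names)
print("output.weight present:", "output.weight" in names)  # expected: False
```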

Thx!
