[Cache Request] quilr-ai/semantic-dlp
#63 · opened by ksquarekumar
Please add the following model to the neuron cache
What's the id of the model on the hub? I could not find it.
It's private. Will that affect the caching of the model?
Apologies for the confusion. What is the process for private models, if I may ask? Can you point me to it if it exists, or let me know if this is not a supported use case? It's an auto-tuned Llama 2 model.
@ksquarekumar you cannot cache a private model in the public cache. If you are using a public model architecture (with the exact same configuration, but different weights), then you can use the public cached version.
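For reference, a minimal sketch of what this could look like, assuming the `optimum-neuron` `NeuronModelForCausalLM` API; the compilation settings below are placeholder values, not a recommendation:

```python
# Sketch only: a private fine-tune that shares its architecture and config with a
# public Llama 2 checkpoint can reuse the publicly cached compiled graphs, so only
# the weights differ at load time. Compile settings here are illustrative.
from optimum.neuron import NeuronModelForCausalLM

model = NeuronModelForCausalLM.from_pretrained(
    "quilr-ai/semantic-dlp",  # private repo; requires being logged in with access
    export=True,              # compile for Neuron; cached artifacts are fetched when the config matches
    batch_size=1,             # example values, adjust to your workload
    sequence_length=2048,
    num_cores=2,
    auto_cast_type="bf16",
)
```

If the configuration matches a cached public compilation, the export step should complete without recompiling from scratch; otherwise the model is compiled locally.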