[Cache Request] meta-llama/Meta-Llama-3-8B

#141
by aravindrnair - opened

Please add the following model to the Neuron cache.

AWS Inferentia and Trainium org

This model is already cached. You can try it out using this link: https://huggingface.co/meta-llama/Meta-Llama-3-8B?sagemaker_deploy=true (select Inferentia as the deployment target).

dacorvo changed discussion status to closed
