Llama 3.1 models continuously unavailable

#26
by HugoMartin - opened

I pay €9/month for access to models (particularly Llama 3.1 8B, 70B and 405B) through the Inference API.

NONE of these models have been available ("Service Unavailable") for several weeks now, despite being fully accessible at the start of my Pro subscription.

Even more concerning, my application's JSON formatting using Llama3.1-8B-Instruct was initially functioning correctly, but now the completions are subpar. They fail to produce valid JSON strings, hallucinate keys/values, or corrupt Unicode characters.

I haven't made any changes to my application, so it feels as though HuggingFace has replaced the original models with lower-precision, quantized versions.

I understand Hugging Face will do anything to force users to switch to dedicated endpoint instances ($$$), but this is UNACCEPTABLE.

Hugging Face employee here. Sorry for the difficulties. Hugging Face occasionally updates the set of models available through the PRO subscription based on usage.

Can you share how you call the model for JSON formatting?

You can also search for "warm" models that can be called via the Inference API. Note that even models that are currently warm may not be available 24/7.

https://huggingface.co/models?inference=warm&pipeline_tag=text-generation
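If it helps, here's a minimal sketch of checking a model's status programmatically before sending requests. It assumes huggingface_hub exposes InferenceClient.get_model_status(); the exact field names and state strings may differ depending on your version:

from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

# Ask the Inference API whether the model is currently loaded ("warm").
# Field names and state values here are approximate and may vary by version.
status = client.get_model_status("meta-llama/Meta-Llama-3.1-70B-Instruct")
print(status.state, status.loaded)

if status.loaded:
    print("Model is warm; requests should return quickly.")
else:
    print("Model is cold; the first request may 503 or take a while to load.")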

Is there a way to request that 405B be turned back on? I bought the subscription almost exclusively for API access to 405B.

I don't think 405B is coming back. I'd recommend using fireworks.ai or together.ai for that.

They have per-token pricing though :/ I liked the flat rate of Hugging Face.

405B is extremely expensive to host, and it wasn't getting much usage.

Keep in mind that there are rate limits on the HF flat-rate plan. There are always trade-offs between flat-rate and per-token pricing.

Yeah :/ it was just nice to have a predictable price every month. I'm afraid of accidentally looping something and charging my account a bajillion dollars. Is there somewhere I can submit a request to get it re-added? I know it's a long shot, but might as well try, right? 70B is just not contextual enough for me. Like, I ask for the JSON format {"request_format": "response"} and it sends "request type", which messes up the whole thing.

Is there somewhere I can submit a request to get it re-added?

Not at the moment.

70B is just not contextual enough for me. Like, I ask for the JSON format {"request_format": "response"} and it sends "request type", which messes up the whole thing.

Can you explain more about what you are trying to do? There might be a way to prompt it differently to get the desired result.
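In the meantime, here's a rough sketch of the kind of prompting plus validation loop I'd try for strict JSON output. The system prompt, retry logic, and target schema are just illustrative, not an official recommendation:

import json

from huggingface_hub import InferenceClient

client = InferenceClient(
    "meta-llama/Meta-Llama-3.1-70B-Instruct",
    token="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
)

# Illustrative schema, based on the example above.
SYSTEM = (
    "You are a JSON generator. Reply with a single JSON object of the form "
    '{"request_format": "<response text>"} and nothing else: no prose, no code fences.'
)

def ask_json(user_message: str, retries: int = 3) -> dict:
    """Call the model and retry until the reply parses as valid JSON."""
    for _ in range(retries):
        out = client.chat_completion(
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_message},
            ],
            max_tokens=300,
            temperature=0.1,  # low temperature keeps keys/structure more stable
        )
        text = out.choices[0].message.content
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            continue  # malformed JSON; ask again
    raise ValueError("Model did not return valid JSON after retries")

print(ask_json("Summarize: the meeting is moved to Friday."))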

Personally I'd love to have access to a model with a longer context window, like llama 3.1 70B (128K context!!)

I'm using my pro account for an open source telegram-chat-summarizer bot (very light usage, more of a tech demo for friendos) and none of the models available on the pro account inference API work quite as well as I'd like. Llama3-70B produces great results but it can't handle more than ~300 chat messages due to its tiny 8K context, while the Mixtral/Hermes models give me a more generous 32K context but produce lackluster results despite twiddling with the sampler dials.

If we can't get llama3.1, could you at least add like a RoPE config to run llama3-70b with 16K context? 8K is so small.
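To spell out what I mean by a RoPE config: for anyone self-hosting instead of waiting on the API, this is roughly the kind of rope_scaling override transformers accepts. The dict format shown is the older style and the exact keys depend on your transformers version, so treat it as a sketch:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Llama 3 was trained with an 8K context; dynamic NTK RoPE scaling with
# factor=2.0 is a common way to stretch it to roughly 16K at inference time.
# Newer transformers versions use "rope_type" instead of "type"; check yours.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={"type": "dynamic", "factor": 2.0},
    device_map="auto",     # requires accelerate
    torch_dtype="auto",
)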

@cjmoran, did you try Llama 3.1 70B on the Inference API? It has a 32k context length.

from huggingface_hub import InferenceClient

# Point the client at Llama 3.1 70B Instruct; use your own PRO token here.
client = InferenceClient(
    "meta-llama/Meta-Llama-3.1-70B-Instruct",
    token="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
)

# Stream the chat completion and print tokens as they arrive.
for message in client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    stream=True,
):
    print(message.choices[0].delta.content, end="")

@nbroad the 405B Instruct is back over the API.

Well, it showed as warm, and then I came back 5 minutes later and now it shows as cold again.

I wouldn't trust it to be available on the Serverless Inference API

@cjmoran, did you try Llama 3.1 70B on the Inference API? It has a 32k context length.

Edit: removed old message where I misunderstood everything

Okay yeah that seems to be working! I had no idea it was available since it's not listed on the Inference for PROs doc :(

We recently added this page to show which models are supported: https://huggingface.co/docs/api-inference/supported-models
