It seems like the upstream server is overloaded or down?
#75 opened by prithivMLmods
I am running into the same issue.
Same here. Maybe the model is being updated.
Back to Live 🚀
So you have all learned an important lesson this week: download and run your models locally, or pay for a service.
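For anyone who wants to try that, here is a minimal sketch of running a model locally with the `transformers` pipeline. The model id `gpt2` is only illustrative; substitute whatever model this discussion concerns.

```python
# Minimal local-inference sketch: downloads the weights once, then runs
# entirely on your machine, with no dependency on a hosted endpoint.
# "gpt2" is an illustrative model id, not necessarily the model in question.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The upstream server is down, so we run locally:", max_new_tokens=20)
print(result[0]["generated_text"])
```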
@Nurb4000
So, APIs and endpoints are for testing and easier deployment, right? InferenceClient might be a better fit in this context than downloading a model.
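For comparison, a minimal sketch of the hosted route with `huggingface_hub`'s `InferenceClient`, again with an illustrative model id. Note that an overloaded or down upstream surfaces here as an HTTP error, which is exactly the failure mode this thread is about.

```python
# Minimal hosted-inference sketch: no local weights, but availability
# depends entirely on the upstream server being up.
# "gpt2" is an illustrative model id, not necessarily the model in question.
from huggingface_hub import InferenceClient

client = InferenceClient(model="gpt2")
print(client.text_generation("Hello from the Inference API:", max_new_tokens=20))
```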
Public ones are OK for testing, but not to be relied on for production.
That's what I mean: inference as a service.
prithivMLmods changed discussion status to closed