Problem with inference endpoint.

#64 · opened by goporo

The model behaves poorly and returns different output quality on a dedicated Inference Endpoint compared with the free serverless Inference API. Also, what is the quickest way to deploy it? Please help!
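One thing worth ruling out first is generation defaults: the serverless API applies its own defaults when parameters are omitted, so pinning the same parameters on both sides makes the comparison fair. Below is a minimal sketch using huggingface_hub's InferenceClient, assuming a text-generation model; the model id and endpoint URL are placeholders.

```python
from huggingface_hub import InferenceClient

PROMPT = "Explain the difference between a list and a tuple in Python."

# Pin generation parameters explicitly; omitted parameters fall back to
# server-side defaults, which can differ between deployments.
GEN_KWARGS = dict(max_new_tokens=256, temperature=0.7, top_p=0.95, seed=42)

# Free serverless Inference API, routed by model id (placeholder id).
serverless = InferenceClient(model="your-org/your-model")

# Dedicated Inference Endpoint (placeholder URL -- replace with your own).
dedicated = InferenceClient(
    model="https://xyz123.us-east-1.aws.endpoints.huggingface.cloud"
)

# Run the same prompt with the same parameters against both backends
# and compare the outputs side by side.
for name, client in [("serverless", serverless), ("endpoint", dedicated)]:
    output = client.text_generation(PROMPT, **GEN_KWARGS)
    print(f"--- {name} ---\n{output}\n")
```

If the outputs still diverge with identical parameters, the difference is more likely in the deployed revision or hardware configuration than in the client call.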
