Space: onnx / EfficientNet-Lite4
Likes: 8
Status: Runtime error
Community (2)
#1: Possible to show how the model was integer-8 quantized? — opened over 2 years ago by carted-ml
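
The discussion above asks how the int8 model was produced. For context, below is a minimal sketch of one common way to int8-quantize an ONNX model with onnxruntime's quantization tools; this is an illustration under assumptions, not the procedure actually used for this Space's model, and the file names are hypothetical.

```python
# Sketch: dynamic int8 quantization of an ONNX model with onnxruntime.
# Assumption: file names are hypothetical; the Space's model may have been
# quantized with a different tool or with static (calibration-based) quantization.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="efficientnet-lite4.onnx",        # hypothetical float32 source model
    model_output="efficientnet-lite4-int8.onnx",  # hypothetical quantized output
    weight_type=QuantType.QInt8,                  # store weights as signed 8-bit integers
)
```

For convolutional networks such as EfficientNet-Lite4, static quantization with a calibration dataset (quantize_static together with a CalibrationDataReader) is the more typical route, since it also fixes activation scales ahead of time rather than computing them at runtime.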