Preface
Code vs. natural-language classification, fine-tuned from prajjwal1's bert-small. The metrics achieved are below.
Training Metrics
Epoch | Training Loss | Validation Loss | Accuracy
---|---|---|---
1 | 0.022500 | 0.012705 | 0.997203
2 | 0.008700 | 0.013107 | 0.996880
3 | 0.002700 | 0.014081 | 0.997633
4 | 0.001800 | 0.010666 | 0.997526
5 | 0.000900 | 0.010800 | 0.998063
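The accuracy above comes from comparing argmax predictions against the true labels. A minimal sketch of that decision step, using hypothetical logits and label names (the actual label names are not stated in this card):

```python
import math

# Hypothetical label order for the binary code-vs-natural-language task
LABELS = ["natural_language", "code"]

def softmax(logits):
    """Convert raw classifier logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the predicted label and its probability for one example."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

# Example: logits strongly favoring the second ("code") class
label, score = classify([-2.1, 3.4])
print(label, round(score, 4))
```

In practice these logits would come from the fine-tuned bert-small head; the sketch only illustrates how the argmax decision behind the accuracy metric works.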
More
- GitHub repo for the installable Python package: https://github.com/Vishnunkumar
- Space for extracting code blocks from screenshots: https://huggingface.co/spaces/vishnun/SnapCode