This model is part of the GrammarCorrector tool.
"FlanT5 from scratch for the grammar correction tool" article about how this models was trained:
FlanT5 was trained using JFLEG dataset. The primary objective of the experiment was to develop a highly effective tool using relatively small models, minimal datasets, and constrained computational resources.
To accomplish this goal, we implemented two key strategies:
- Perplexity-Based Data Pruning With Small Reference Models (see the first sketch below).
- A simple sampling-and-voting method for multiple LLM agents (see the second sketch below).
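
Perplexity-based pruning keeps only the training examples that a small reference language model finds plausible and discards high-perplexity (likely noisy) ones. Below is a minimal sketch, assuming `distilgpt2` as a stand-in reference model and a hypothetical cutoff; the actual reference model and threshold used for this tool are not specified in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    """Perplexity of `text` under the reference language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# distilgpt2 is a stand-in for the small reference model (assumption).
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()

corpus = [
    "She goes to school every day.",
    "asdf qwerty zxcv lorem ipsum",  # noisy example we expect to prune
]
THRESHOLD = 500.0  # hypothetical cutoff
pruned = [s for s in corpus if perplexity(model, tokenizer, s) < THRESHOLD]
print(pruned)
```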
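Sampling and voting draws several stochastic candidate outputs and keeps the majority answer. A minimal sketch, with a toy stand-in for the sampling-enabled LLM call (hypothetical):

```python
import random
from collections import Counter

def sample_and_vote(generate, prompt, n_samples=5):
    """Draw several stochastic samples and return the most frequent answer."""
    candidates = [generate(prompt) for _ in range(n_samples)]
    return Counter(candidates).most_common(1)[0][0]

# Toy stand-in for a sampling-enabled LLM call (hypothetical).
def noisy_corrector(prompt):
    return random.choice([
        "She goes to school every day.",
        "She goes to school every day.",
        "She go to school every day.",
    ])

print(sample_and_vote(noisy_corrector, "She go to school every day."))
```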
Model: akhmat-s/t5-base-grammar-corrector
Base model: google-t5/t5-base
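
To try the model locally, here is a minimal sketch using `transformers`; whether the model expects a task prefix on its input is an assumption here (the plain sentence is passed as-is):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("akhmat-s/t5-base-grammar-corrector")
model = AutoModelForSeq2SeqLM.from_pretrained("akhmat-s/t5-base-grammar-corrector")

# Plain input, no task prefix (assumption about the expected format).
inputs = tokenizer("She go to school every day.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```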