This data is used in the project https://github.com/Alexander92-cpu/LanguageModel_Fusion.
Data description:
'asr/stt_en_conformer_transducer_small.nemo' - NeMo ASR pre-trained RNN-T model (https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_conformer_transducer_small);
'gpt2' - a GPT-2 LM fine-tuned for rescoring (https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2LMHeadModel);
'kenlm/4_ngram_output.bin' - 4-gram language model;
'lstm' - a word-level LSTM LM trained from scratch, together with its tokenizer;
'text' - text data used for training, validation, and testing;
'optimize' - data and results of the optimization experiments (a loading sketch for the assets above is shown below).
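The snippet below is a minimal sketch of how these assets could be loaded and queried. It assumes the paths listed above are relative to the repository root and that nemo_toolkit, transformers, torch, and kenlm are installed; the actual fusion and rescoring logic lives in the linked project.

```python
# Minimal loading sketch (paths and setup are assumptions based on the file list above).
import kenlm
import torch
import nemo.collections.asr as nemo_asr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# NeMo pre-trained RNN-T acoustic model
asr_model = nemo_asr.models.ASRModel.restore_from(
    "asr/stt_en_conformer_transducer_small.nemo"
)

# Fine-tuned GPT-2 LM for rescoring (loaded here from the local 'gpt2' folder)
gpt2_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2_lm = GPT2LMHeadModel.from_pretrained("gpt2")

# 4-gram KenLM language model
ngram_lm = kenlm.Model("kenlm/4_ngram_output.bin")

# Example: score an ASR hypothesis with both language models
hypothesis = "this is a test"
print(ngram_lm.score(hypothesis))  # KenLM log10 probability of the sentence

enc = gpt2_tokenizer(hypothesis, return_tensors="pt")
with torch.no_grad():
    loss = gpt2_lm(**enc, labels=enc["input_ids"]).loss
print(loss.item())  # average per-token negative log-likelihood under GPT-2
```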