
Wav2Vec2-Large-VoxPopuli

Facebook's Wav2Vec2 large model pretrained on the Italian ("it") unlabeled subset of the VoxPopuli corpus.

Paper: VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation

Authors: Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux from Facebook AI

See the official VoxPopuli website for more information.

Fine-Tuning

Please refer to this blog post on how to fine-tune this model on a specific language. Note that for fine-tuning you should replace "facebook/wav2vec2-large-xlsr-53" with this checkpoint, facebook/wav2vec2-large-it-voxpopuli.
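Following that fine-tuning workflow, the pretrained checkpoint is typically loaded with a randomly initialised CTC head sized to a language-specific tokenizer. A minimal sketch, assuming the `transformers` library; the `vocab_size` value and `ctc_loss_reduction` setting are illustrative assumptions that depend on the tokenizer you build, not part of this model card:

```python
# Checkpoint from this model card.
CHECKPOINT = "facebook/wav2vec2-large-it-voxpopuli"

def load_for_finetuning(vocab_size: int):
    # Import lazily so the sketch can be read without transformers installed.
    from transformers import Wav2Vec2ForCTC

    # Wav2Vec2ForCTC puts a randomly initialised CTC head on top of the
    # pretrained encoder; its output size must match your tokenizer's
    # vocabulary for the target language.
    return Wav2Vec2ForCTC.from_pretrained(
        CHECKPOINT,
        ctc_loss_reduction="mean",  # assumed setting, as in the blog post
        vocab_size=vocab_size,
    )
```

The model returned here still needs a language-specific processor (feature extractor plus tokenizer) and a labeled dataset before training.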

Model size: 317M parameters (F32, Safetensors)
