---
language: en
thumbnail: null
tags:
  - speechbrain
  - embeddings
  - Speaker
  - Verification
  - Identification
  - pytorch
  - ECAPA
  - TDNN
  - Discrete_SSL
license: apache-2.0
datasets:
  - voxceleb
metrics:
  - EER
widget:
  - example_title: VoxCeleb Speaker id10003
    src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
  - example_title: VoxCeleb Speaker id10004
    src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---


# Standalone ECAPA-TDNN embeddings with discrete SSL input on VoxCeleb

This repository provides all the necessary tools to obtain speaker embeddings from discrete self-supervised (SSL) audio tokens with a pretrained ECAPA-TDNN model using SpeechBrain. The model is trained on the VoxCeleb1 training data.

Adapted from poonehmousavi/discrete_wavlm_spk_rec_ecapatdn.

For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model's performance on the VoxCeleb1 test set (cleaned) is:

## Pipeline description

This system combines a discrete SSL token model with an ECAPA-TDNN encoder. The ECAPA-TDNN is a combination of convolutional and residual blocks, and the embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax loss. Speaker verification is performed by computing the cosine distance between speaker embeddings.
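
For illustration, here is a minimal sketch of the verification step: it scores two embeddings (e.g., produced by `encode_batch` below) with cosine similarity and accepts the pair as the same speaker above a threshold. The `same_speaker` helper and the 0.25 threshold are hypothetical, not a calibrated operating point.

```python
import torch
import torch.nn.functional as F

def same_speaker(emb_a: torch.Tensor, emb_b: torch.Tensor, threshold: float = 0.25) -> bool:
    """Score two speaker embeddings with cosine similarity; the threshold is illustrative."""
    score = F.cosine_similarity(emb_a.flatten(), emb_b.flatten(), dim=0)
    return score.item() > threshold
```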

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```bash
pip install git+https://github.com/speechbrain/speechbrain.git@develop
```

Please note that we encourage you to read our tutorials and learn more about SpeechBrain.

## Compute your speaker embeddings

```python
import torch
from speechbrain.inference.interfaces import foreign_class

classifier = foreign_class(
    source="flexthink/discrete_wavlm_spk_rec_ecapatdn",
    pymodule_file="custom_interface.py",
    classname="DiscreteSpkEmb",
)

# Random token IDs standing in for real discrete SSL input; the
# (batch, time, num_codebooks) shape and vocabulary size are illustrative.
tokens = torch.randint(0, 1000, (1, 100, 4))
embeddings = classifier.encode_batch(tokens)
print(embeddings.shape)
```
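
The exact shape of the returned `embeddings` tensor depends on the custom interface; in typical SpeechBrain encoders it is `(batch, 1, embedding_dim)`, so check the printed shape against `custom_interface.py`.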

The system is trained on recordings sampled at 16 kHz (single channel). When calling `classify_file`, the code automatically normalizes your audio if needed (i.e., resampling and mono-channel selection). If you use `encode_batch` or `classify_batch`, make sure your input tensors comply with the expected sampling rate.
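
If you start from a raw recording, a minimal preprocessing sketch along these lines (assuming torchaudio and a placeholder `audio.wav` path) brings the signal to the expected format before tokenization:

```python
import torchaudio

# Load an arbitrary audio file ("audio.wav" is a placeholder path).
waveform, sample_rate = torchaudio.load("audio.wav")
# Downmix multi-channel recordings to mono.
if waveform.size(0) > 1:
    waveform = waveform.mean(dim=0, keepdim=True)
# Resample to the 16 kHz rate the system was trained on.
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
```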

## Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

## Referencing ECAPA-TDNN

```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author    = {Brecht Desplanques and
               Jenthe Thienpondt and
               Kris Demuynck},
  editor    = {Helen Meng and
               Bo Xu and
               Thomas Fang Zheng},
  title     = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
               in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages     = {3830--3834},
  publisher = {{ISCA}},
  year      = {2020},
}
```

## Citing SpeechBrain

Please cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

## About SpeechBrain