This tokenizer was trained on a small corpus of concatenated ARPAbet pronunciation tokens plus punctuation, produced with the Python g2p_en library over the entire synthbot/pony-speech dataset and 240k lines from the generics_kb_best subset of community-datasets/generics_kb.
For example:
But one on one, let's clean it.
-> BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .
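As a rough sketch of this preprocessing step, the snippet below uses g2p_en to convert a sentence into space-separated, per-word concatenated ARPAbet tokens (punctuation kept as separate tokens). The helper name and the exact punctuation handling are assumptions, not necessarily the script used to build this corpus.

```python
from g2p_en import G2p  # pip install g2p_en

g2p = G2p()

def to_arpabet(text: str) -> str:
    """Concatenate the per-word ARPAbet phonemes from g2p_en into one token per word."""
    words, current = [], []
    for tok in g2p(text):      # e.g. ['B', 'AH1', 'T', ' ', 'W', 'AH1', 'N', ...]
        if tok == " ":         # g2p_en marks word boundaries with a space token
            if current:
                words.append("".join(current))
                current = []
        else:
            current.append(tok)
    if current:
        words.append("".join(current))
    return " ".join(words)

print(to_arpabet("But one on one, let's clean it."))
# BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .
```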
The tokenizer uses BPE with a vocabulary size of 384.
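A minimal sketch of training such a BPE tokenizer with the Hugging Face tokenizers library is shown below, assuming a whitespace pre-tokenizer and a plain-text corpus file of the converted ARPAbet lines; the file name and special tokens are illustrative, not the exact configuration used here.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# BPE model over whitespace-separated ARPAbet word tokens and punctuation
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(
    vocab_size=384,                       # vocabulary size used by this tokenizer
    special_tokens=["[UNK]", "[PAD]"],    # assumed special tokens
)

# "arpabet_corpus.txt" is a hypothetical file of one converted sentence per line
tokenizer.train(files=["arpabet_corpus.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")

# Example: encode an ARPAbet sentence into BPE tokens
print(tokenizer.encode("BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .").tokens)
```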