I tried using this tokenizer on diacritized text, but it fails to recognize the SHADDA (ّ, U+0651) and replaces it with an unknown token. It also fails to tokenize MultiWordTokens correctly when the input text is diacritized (tried with an input beginning "فَإِنَّ ا…").
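For reference, a minimal reproduction sketch of what I mean — this assumes the tokenizer is run through Stanza's Arabic pipeline (`tokenize` + `mwt` processors) and uses a hypothetical diacritized sentence of my own rather than the exact input above:

```python
import stanza

# Hypothetical diacritized sentence: it contains a shadda (U+0651) and the
# clitic فَ attached to إِنَّ, which should be split as a multi-word token.
TEXT = "فَإِنَّ الْكِتَابَ جَدِيدٌ"

SHADDA = "\u0651"  # ARABIC SHADDA

# Tokenization + multi-word token expansion only; assumes the Arabic models
# were already fetched with stanza.download("ar").
nlp = stanza.Pipeline(lang="ar", processors="tokenize,mwt")

doc = nlp(TEXT)
for sentence in doc.sentences:
    for token in sentence.tokens:
        words = [word.text for word in token.words]
        print(f"token={token.text!r} words={words}")

# Check whether the shadda survives the round trip through the tokenizer.
flat = "".join(w.text for s in doc.sentences for t in s.tokens for w in t.words)
print("shadda preserved:", SHADDA in flat)
```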
ยท Sign up or log in to comment