""" | |
TODO: | |
- 统计 tokenizer_impl | |
- 统计 OOV | |
- 统计 reversal | |
- 增加 math,code | |
## balance | |
- 高压缩率 VS vocab_size: | |
- 高压缩率,就意味着,编码后的token数少,那么 token长度 就会长,--> vocab_size 就会太大 | |
- 高压缩率 VS 无损 | |
- s | |
- OOV | |
- OOV 多,那么生成的 UNK 可能多(一个char 一个UNK) --> token 数目多 -> 压缩率低 | |
- OOV 多,那么生成的 UNK 可能少() --> token 数目多 -> 压缩率低 | |
""" | |
import gradio as gr
from compression_util import get_compression_leaderboard, common_corpuses
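# `common_corpuses` lists the available corpus subsets (e.g. "cc100/en").
# `get_compression_leaderboard(...)` builds the leaderboard table from the selected
# corpora, the chosen measure, and an optional search query (usage inferred from
# the event handlers below).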
# From the perspective of compression, a good tokenizer lets the input text be
# exactly reconstructed from its compressed tokens.
docs = """## 📖 What is a good tokenizer? | |
From a compression perspective, a good tokenizer should be lossless, and keep high compression rate (less tokens). | |
The encoding and decoding process can be formulated as
```python
token_ids = tokenizer.encode(input_text)    # compressed tokens
decoded_text = tokenizer.decode(token_ids)  # reconstructed text
```
- **Lossless** <br>
  Lossless tokenization preserves the exact original text, i.e. `decoded_text == input_text`.
  - Most lossy tokenizers produce many out-of-vocabulary (OOV) tokens. 👉 Check the [oov of bert-base-cased](https://huggingface.co/spaces/eson/tokenizer-arena/raw/main/stats/google-bert.bert-base-casedcc100.zh-Hans.diff.json).
  - Some other tokenizers have no oov but are still lossy due to text normalization. For example, Qwen performs [unicode normalization](https://github.com/huggingface/transformers/blob/v4.42.3/src/transformers/models/qwen2/tokenization_qwen2.py#L338),
    which may introduce [slight differences](https://huggingface.co/spaces/eson/tokenizer-arena/raw/main/stats/compression_rate/Qwen.Qwen1.5-1.8B%20@%20cc100.ja.diff.jsonn) in the reconstructed text.
- **Compression Rate** <br>
  There are mainly two ways to measure the size of the `input_text`:
  - `byte-level`: the number of bytes in the given text
  - `char-level`: the number of characters in the given text.

  To evaluate compression rate, simple metrics can be "how many bytes per token" or "how many chars per token". <br>
  In this leaderboard, we adopt the more frequently used metrics "how many billion tokens per gigabyte of corpus" and "how many chars per token", i.e. `b_tokens/g_bytes` and `char/token`. A minimal sketch of how to compute them is shown below.
💬 [Discussions are welcome](https://huggingface.co/spaces/eson/tokenizer-arena/discussions)
"""
# theme = gr.themes.Monochrome()
theme = gr.themes.Default()
# theme.set(accordion_text_weight=600)  # not supported yet
with gr.Blocks(theme=theme) as demo:
    # gr.Markdown("## Convertor")
    # with gr.Accordion("Convertor", open=False):
    #     gr.Markdown("Tokenize {} corpus")
    #     with gr.Row(elem_classes="no-border"):
    #         gr.Button("File Size", min_width=50)
    #         file_size = gr.Textbox(
    #             show_label=False,
    #             min_width=50,
    #             # elem_classes="textbox-as-text"
    #         )
    #         gr.Dropdown(
    #             choices=['MB', 'GB', 'TB'],
    #             show_label=False,
    #             min_width=15,
    #             # elem_classes="textbox-as-text"
    #         )
    #         # gr.Markdown('<h2 align="center">≈</h2>')
    #         # gr.HTML('<h2 style="margin: auto;">≈</h2>')
    #         gr.Button(
    #             "≈",
    #             min_width=10,
    #             elem_classes="button-white h2-font"
    #
    #         )
    #
    #         gr.Button(
    #             "Tokens",
    #             min_width=50
    #         )
    #         gr.Textbox(
    #             show_label=False,
    #             min_width=50
    #         )
    #         gr.Dropdown(
    #             ['million', 'billion', 'trillion'],
    #             show_label=False,
    #             min_width=15,
    #             elem_classes="button-white"
    #         )
    gr.Markdown(docs)

    gr.Markdown("## 🛠️ Settings")  # ⚙
    gr.Markdown("We perform tokenization on different corpora and calculate the compression rate.")
    with gr.Accordion("Please select the corpus and measure of compression rate.", open=True):
        # file size 💽 🖴, tokens 🧮
        # Total amount of disk used
        with gr.Row():
            with gr.Column():
                compress_rate_corpus = gr.Dropdown(
                    common_corpuses,  # , "code"
                    value=["cc100/en", "cc100/zh-Hans", "cc100/fr", "cc100/es"],
                    label="corpus",
                    multiselect=True
                    # info=""
                )
                # unit of file_size: gigabyte, terabyte
                # unit of token_num: million, billion, trillion
                # The most common units of measurement include length (meter, inch, foot), weight (gram, kilogram, pound), volume (liter, gallon, milliliter), time (second, minute, hour)
                compress_rate_unit = gr.Radio(
                    ["b_tokens/g_bytes", "t_tokens/t_bytes"],
                    value="b_tokens/g_bytes",
                    label="measure",  # evaluation metric
                )
        gr.Markdown(
            # "Note:\n\n explanation"
            # "Supported languages are (20): arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)."
            # " arabic (ar), english (en), spanish (es), french (fr), italian (it), japanese (ja), portuguese (pt), russian (ru), and chinese (zh)."
            "- `corpus`: tokenization is performed on the selected subsets of the [cc100](https://huggingface.co/datasets/statmt/cc100) corpus.\n"
            "- `measure`\n"
            "  - `b_tokens/g_bytes` measures how many billion tokens per gigabyte of corpus.\n"
            "  - `t_tokens/t_bytes` measures how many trillion tokens per terabyte of corpus.\n"
            # "  - `g_bytes/b_tokens` measures how many gigabytes of corpus per billion tokens.\n"
            # "  - `t_bytes/t_tokens` measures how many terabytes of corpus per trillion tokens.\n"
            "  - `char/token` measures how many chars per token on the tokenized corpus.\n"
            "  - `oov_ratio`: the out-of-vocabulary ratio on the selected corpus. 👉 See the [oov charset](https://huggingface.co/spaces/eson/tokenizer-arena/raw/main/stats/compression_rate.json).\n\n"
"You can reproduce this procedure with [compression_util.py](https://huggingface.co/spaces/eson/tokenizer-arena/blob/main/compression_util.py)." | |
) | |
    gr.Markdown("## 🏆 Compression Rate Leaderboard")
    search_bar = gr.Textbox(
        placeholder="🔍 Search by tokenizer or organization (e.g., 'llama', 'openai') and press ENTER...",
        show_label=False,
        elem_id="search-bar",
    )
    compress_rate_table = gr.Dataframe(datatype="html")
    # Recompute the leaderboard whenever the corpus, the measure, or the search query changes.
    compress_rate_corpus.change(
        get_compression_leaderboard,
        inputs=[compress_rate_corpus, compress_rate_unit, search_bar],
        outputs=compress_rate_table
    )
    compress_rate_unit.change(
        get_compression_leaderboard,
        inputs=[compress_rate_corpus, compress_rate_unit, search_bar],
        outputs=compress_rate_table
    )
    # file_size.change(
    #     get_all_compress_rate,
    #     outputs=compress_rate_table
    # )
    search_bar.submit(
        get_compression_leaderboard,
        inputs=[
            compress_rate_corpus,
            compress_rate_unit,
            search_bar,
        ],
        outputs=compress_rate_table
    )

    # Populate the leaderboard with the default selections when the page loads.
    demo.load(
        get_compression_leaderboard,
        inputs=[compress_rate_corpus, compress_rate_unit],
        outputs=compress_rate_table
    )

if __name__ == "__main__":
    demo.launch()