Commit a0b557b
Parent(s): 1df8383
Update src/assets/text_content.py
src/assets/text_content.py
CHANGED
@@ -59,7 +59,7 @@ TITLE = """<h1 align="center" id="space-title">🤗 Open LLM Leaderboard</h1>"""
 INTRODUCTION_TEXT = f"""
 📐 The 🤗 Open LLM Leaderboard aims to track, rank and evaluate LLMs and chatbots as they are released.
 
-🤗 Anyone from the community can submit a model for automated evaluation on the 🤗 GPU cluster, as long as it is a 🤗 Transformers model with weights on the Hub. We also support evaluation of models with delta-weights for non-commercial licensed models, such as LLaMa.
+🤗 Anyone from the community can submit a model for automated evaluation on the 🤗 GPU cluster, as long as it is a 🤗 Transformers model with weights on the Hub. We also support evaluation of models with delta-weights for non-commercial licensed models, such as the original LLaMa release.
 
 Other cool benchmarks for LLMs are developed at HuggingFace, go check them out: 🙋🤖 [human and GPT4 evals](https://huggingface.co/spaces/HuggingFaceH4/human_eval_llm_leaderboard), 🖥️ [performance benchmarks](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)
 """
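The "delta-weights" support mentioned in the introduction text can be sketched as follows: a model whose license forbids redistributing full weights (like the original LLaMa release) can instead be published as per-parameter deltas, which anyone holding the base weights can add back to recover the full model. This is a minimal illustration of the idea only, not the leaderboard's actual pipeline; `apply_deltas` and the checkpoint names are hypothetical.

```python
# Minimal sketch of reconstructing a model from delta weights:
# full[k] = base[k] + delta[k] for every parameter name k.
# Plain floats stand in for weight tensors; the function name is illustrative.

def apply_deltas(base_state, delta_state):
    """Add each delta to the matching base parameter to recover full weights."""
    if base_state.keys() != delta_state.keys():
        raise ValueError("base and delta checkpoints must share parameter names")
    return {k: base_state[k] + delta_state[k] for k in base_state}

# Tiny example: two parameters, scalar "tensors".
base = {"layer.weight": 1.0, "layer.bias": -0.5}
delta = {"layer.weight": 0.25, "layer.bias": 0.5}
full = apply_deltas(base, delta)
# full == {"layer.weight": 1.25, "layer.bias": 0.0}
```

In practice the same element-wise addition is applied tensor-by-tensor over a model's state dict before evaluation.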