clefourrier (HF staff) committed
Commit a0b557b • 1 Parent(s): 1df8383

Update src/assets/text_content.py

Files changed (1)
  1. src/assets/text_content.py +1 -1
src/assets/text_content.py CHANGED
@@ -59,7 +59,7 @@ TITLE = """<h1 align="center" id="space-title">🤗 Open LLM Leaderboard</h1>"""
 INTRODUCTION_TEXT = f"""
 📝 The 🤗 Open LLM Leaderboard aims to track, rank and evaluate LLMs and chatbots as they are released.
 
-🤗 Anyone from the community can submit a model for automated evaluation on the 🤗 GPU cluster, as long as it is a 🤗 Transformers model with weights on the Hub. We also support evaluation of models with delta-weights for non-commercial licensed models, such as LLaMa.
+🤗 Anyone from the community can submit a model for automated evaluation on the 🤗 GPU cluster, as long as it is a 🤗 Transformers model with weights on the Hub. We also support evaluation of models with delta-weights for non-commercial licensed models, such as the original LLaMa release.
 
 Other cool benchmarks for LLMs are developped at HuggingFace, go check them out: 🙋🤖 [human and GPT4 evals](https://huggingface.co/spaces/HuggingFaceH4/human_eval_llm_leaderboard), 🖥️ [performance benchmarks](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)
 """