โ‰๏ธ FAQ โ‰๏ธ - Start here before opening an issue

#1
by BearSean - opened

Hi! Thank you for your interest in the 🚀 Open Ko-LLM Leaderboard!
Below are some common questions - if this FAQ does not answer what you need, feel free to create a new issue, and we'll take care of it as soon as we can!

  • โ“The leaderboard has crashed with a connection error, help!
    This happens from time to time, and is normal, don't worry. The leaderboard will be automatically restarted in less than an hour (or earlier if one of the maintainers notices it). Please only open an issue if the leaderboard is down for longer than an hour.

  • โ“Why do models appear several times in the leaderboard?
    We run evaluations with a user-selected precision and model commit. Sometimes, users submit the same model at different commits and at different precisions (for example, in float16 and 4bit to see how quantization affects performance). You can verify this by enabling the precision and model sha columns in the display. If, however, you see models appearing several times with the same precision and commit hash, this is not normal.

  • โ“Why don't you display closed source model scores?
    This is a leaderboard for open models, both for philosophical reasons (openness is cool) and for practical reasons: we want to ensure that the results we display are accurate and reproducible, but 1) commercial closed models can change their API, rendering any score obtained at a given time incorrect, and 2) we re-run everything on our cluster to ensure all models are evaluated on the same setup, which is not possible for these models.

  • โ“What about models of type X?
    We only support models that have been integrated in a stable version of the transformers library for automatic submission. We are doing our best to extend the leaderboard to new models and evaluations.

  • โ“My model disappeared from all the queues, what happened?
    A model disappearing from all the queues usually means that there has been a failure. You can check if that is the case by looking at your model here.

  • โ“What causes an evaluation failure?
    Most of the failures we get come from problems in the submissions (corrupted files, config problems, wrong parameters selected for eval, ...), so we would be grateful if you first made sure you have followed the steps in About. However, from time to time, we have failures on our side (hardware/node failures, problems with an update of our backend, connectivity problems that end up with the results not being saved, ...). As we store the logs for all models, feel free to create an issue and link to the requests file of your model (look for it here) so we can investigate! If the model failed due to a problem on our side, we'll relaunch it right away!
    Note: please do not re-upload your model under a different name; it will not help.

  • โ“I upgraded my model and want to re-submit, how can I do that?
    Please open an issue with the precise name of your model, and we'll remove your model from the leaderboard so you can resubmit.

  • โ“What is this concept of "flagging"?
    This mechanism allows users to report models whose performance on the leaderboard is unfair. It covers several categories: exceedingly good results because the model was (maybe accidentally) trained on the evaluation data, models that are copies of other models without proper attribution, etc.

  • โ“My model has been flagged improperly, what can I do?
    Every flagged model has a discussion associated with it - feel free to plead your case there, and we'll see what to do.


Is there any way to download the table as a CSV/JSON? I've tried the following script, but it did not work:

import json
from pathlib import Path

import pandas as pd
from gradio_client import Client

path_data = Path(".")  # output directory

# The Space's /predict endpoint is expected to return a path to a temporary JSON file.
client = Client("https://upstage-open-ko-llm-leaderboard.hf.space/")
json_data = client.predict("", "", api_name='/predict')

with open(json_data, 'r') as file:
    data = json.loads(file.read())
    df = pd.DataFrame(data['data'], columns=data['headers'])
    df.drop(columns=['Model'], inplace=True)  # drop the 'Model' column before saving
    df.to_json(path_data / 'HF-open-ko-llm-leaderboard_20231114.json', orient='records', indent=4)
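
If the call above still fails, the Space may simply not expose a /predict endpoint under that name. As a quick check (a minimal sketch; there is no guarantee the Space exposes an endpoint suitable for bulk download), gradio_client can list the named endpoints the Space does provide:

from gradio_client import Client

client = Client("https://upstage-open-ko-llm-leaderboard.hf.space/")
client.view_api()  # prints the Space's callable endpoints and their parameters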
upstage org

@zhiminy Hello, do you want to download the leaderboard table?

Hi, could you share how many shots were used for the evaluation? I can find the number of shots used in the Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), but I can't find it here. Should I assume every test was run zero-shot? Thank you so much!

upstage org

@gangkongkong
Hello, it is the same as the English leaderboard:
Ko-commongenv2 = 2
truthfulQA = 0
arc_challenge = 25
hellaswag = 10
MMLU = 5
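
For reference, here is a minimal sketch of running one benchmark locally at the matching shot count with EleutherAI's lm-evaluation-harness. This assumes the harness's v0.4 Python API; the leaderboard's backend actually runs Korean variants of these tasks, and the task and model names below are hypothetical placeholders, not the leaderboard's exact configuration.

# Minimal sketch, assuming lm-evaluation-harness v0.4.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=your-org/your-model,dtype=float16",  # hypothetical model id
    tasks=["hellaswag"],   # placeholder; the leaderboard uses Korean task variants
    num_fewshot=10,        # matches the 10-shot HellaSwag setting listed above
)
print(results["results"])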

Hello. I fine-tuned a model based on OrionStarAI/Orion-14B-Base, but transformers does not officially support it yet, so loading it requires additional custom code.
I'd like to ask whether you could enable trust_remote_code for it.

> @zhiminy Hello, do you want to download the leaderboard table?

Yeah, is that possible? @choco9966


I'd like to submit a model trained with SFT to the leaderboard.
My question is about how to structure the SFT dataset.

Human : ~~~~~

Assistant : ~~~~

I trained with data structured as above, but the evaluation dataset could just as well be formatted like

[Human] :
[Bot] :

so I'm wondering whether I need to match that format when building my dataset, or whether the evaluation format is not disclosed and I should build the dataset and train the model so that it works with any format.

Could you tell me exactly which metric is used for each benchmark?

For example, acc_norm, mc2, or acc.
