HumanEvalPack Paper

#1 by davide221 - opened

Here I only see the code synthesis sub-task from the OctoPack paper; where can I find the Explain and Fix benchmarks?

BigCode org

They are all in here; e.g., buggy_solution is used for HumanEvalFix.

You can check https://github.com/bigcode-project/bigcode-evaluation-harness/blob/main/bigcode_eval/tasks/humanevalpack.py for how the task prompts are constructed, and https://github.com/bigcode-project/octopack for other details.
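For a quick look at how the same rows can feed the three sub-tasks, here is a minimal sketch using the datasets library. Apart from buggy_solution (mentioned above), the column names (prompt, declaration, canonical_solution, test), the "python" config, and the "test" split are assumptions taken from the dataset card, so please verify them there:

```python
# Sketch: how one HumanEvalPack row can feed the three OctoPack sub-tasks.
# Only buggy_solution is confirmed above; the other column names, the "python"
# config, and the "test" split are assumptions -- check the dataset card.
from datasets import load_dataset

ds = load_dataset("bigcode/humanevalpack", "python", split="test")
row = ds[0]

# HumanEvalSynthesize: generate an implementation from the docstring prompt.
synthesize_prompt = row["prompt"]

# HumanEvalFix: show the buggy implementation plus its tests and ask for a repair.
fix_prompt = row["declaration"] + row["buggy_solution"] + "\n" + row["test"]

# HumanEvalExplain: ask the model to describe the canonical solution in prose.
explain_prompt = row["declaration"] + row["canonical_solution"]

print(fix_prompt[:300])
```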

I didn't notice that, thank you!

davide221 changed discussion status to closed
