arxiv:2407.07565

On Leakage of Code Generation Evaluation Datasets

Published on Jul 10, 2024 · Submitted by davanstrien on Jul 11, 2024
Abstract

In this paper we consider contamination by code generation test sets, in particular their use in modern large language models. We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. Key to our findings is a new dataset of 161 prompts with their associated Python solutions, which is released at https://huggingface.co/datasets/CohereForAI/lbpp
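The released dataset can be pulled with the Hugging Face `datasets` library. The sketch below is illustrative rather than taken from the paper; the split name and column names are assumptions, so check the dataset card for the actual schema (access may also require accepting the dataset's terms).

```python
# Minimal sketch: loading the LBPP dataset released with this paper.
# Assumptions: the "test" split name is a guess; inspect the dataset card
# and column_names for the real schema before relying on any field.
from datasets import load_dataset

lbpp = load_dataset("CohereForAI/lbpp", split="test")

print(len(lbpp))           # should be on the order of the 161 prompts
print(lbpp.column_names)   # reveals the actual prompt/solution field names

# Iterate over a few examples once the field names are confirmed.
for example in lbpp.select(range(3)):
    print(example)
```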

Community


Providing supporting evidence: https://github.com/ise-uiuc/magicoder/issues/40
I believe that, due to weak decontamination of training sets, any publicly available test set is likely to be memorized by models. Therefore, we should abandon benchmarks like HumanEval and MBPP and instead move toward newer test sets like LiveCodeBench.
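To make the decontamination point concrete, here is a rough, hypothetical sketch (not from the paper or the linked issue) of the kind of n-gram overlap check commonly used to filter training documents against a benchmark; the 13-gram window and whitespace tokenization are simplifying assumptions, not prescribed values.

```python
# Hypothetical illustration of n-gram-based decontamination: drop any training
# document that shares an n-gram with a benchmark prompt. The n=13 window and
# whitespace tokenization are simplifying assumptions, not the paper's method.
from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int = 13) -> Set[Tuple[str, ...]]:
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(max(0, len(tokens) - n + 1))}


def is_contaminated(train_doc: str, benchmark_prompts: Iterable[str], n: int = 13) -> bool:
    """Flag a training document if it shares any n-gram with any benchmark prompt."""
    doc_grams = ngrams(train_doc, n)
    return any(doc_grams & ngrams(prompt, n) for prompt in benchmark_prompts)


# A scraped page that embeds a benchmark prompt verbatim would be flagged here
# and could be removed before training.
prompts = ["Write a function that returns the sum of the squares of a list of integers."]
doc = ("Found on a forum: Write a function that returns the sum of the squares "
       "of a list of integers. Accepted answer below ...")
print(is_contaminated(doc, prompts))  # True
```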


Models citing this paper: 0


Datasets citing this paper: 1

Spaces citing this paper: 0


Collections including this paper: 4