Datasets:
Size: 1M<n<10M
ArXiv: 2303.03004
Tags: programming-language, code, program-synthesis, automatic-code-repair, code-retrieval, code-translation
License:
Update README.md
README.md
CHANGED
@@ -57,7 +57,7 @@ configs:
 # xCodeEval
 [xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval](https://arxiv.org/abs/2303.03004)
 
-We introduce **xCodeEval**, the largest executable multilingual multitask benchmark to date consisting of
+We introduce **xCodeEval**, the largest executable multilingual multitask benchmark to date, consisting of 25M document-level coding examples from about 7.5K unique problems covering up to 17 programming languages with execution-level parallelism. It features a total of seven tasks involving code understanding, generation, translation and retrieval, and it employs an execution-based evaluation. We develop a test-case based multilingual code execution engine, [**ExecEval**](https://github.com/ntunlp/ExecEval), that supports all the programming languages in **xCodeEval**. We also propose a novel data splitting and data selection schema for balancing data distributions over multiple attributes based on geometric mean and graph-theoretic principles.
 
 This repository contains the sample code and data link for the xCodeEval [paper](https://arxiv.org/abs/2303.03004).
 
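The geometric-mean balancing idea mentioned in the description can be illustrated with a small sketch. This is a toy scoring function, not the paper's actual splitting algorithm; the attribute names and counts are hypothetical:

```python
import math

def geometric_mean_score(split_counts, total_counts):
    """Score a candidate split by the geometric mean of its per-attribute
    coverage ratios. A near-zero ratio on any single attribute drags the
    whole score down, which pushes selection toward balanced splits."""
    ratios = [split_counts[attr] / total_counts[attr] for attr in total_counts]
    return math.prod(ratios) ** (1.0 / len(ratios))

# Hypothetical per-language problem counts for the full pool and two splits.
total    = {"C++": 400, "Python": 300, "Java": 300}
balanced = {"C++": 40,  "Python": 30,  "Java": 30}   # uniform 10% sample
skewed   = {"C++": 90,  "Python": 5,   "Java": 5}    # same size, imbalanced

print(geometric_mean_score(balanced, total))  # 0.1
print(geometric_mean_score(skewed, total))    # well below 0.1: imbalance penalized
```

The geometric mean (rather than the arithmetic mean) is what penalizes a split that nearly omits one attribute, even when its overall size matches.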