princeton-nlp committed
Commit 3f22758
1 Parent(s): 29ba267
Update README.md
README.md
CHANGED
@@ -60,7 +60,7 @@ SWE-bench is a dataset that tests systems’ ability to solve GitHub issues auto
 
 The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770)
 
-This dataset `SWE-bench_bm25_13K` includes a formatting of each instance using Pyserini's BM25 retrieval as described in the paper. The context limit
+This dataset `SWE-bench_bm25_13K` includes a formatting of each instance using Pyserini's BM25 retrieval as described in the paper. The code context size limit is 13,000 `cl100k_base` tokens from the [`tiktoken`](https://github.com/openai/tiktoken) tokenization package used for OpenAI models.
 
 The `text` column can be used directly with LMs to generate patch files.
 Models are instructed to generate a [`patch`](https://en.wikipedia.org/wiki/Patch_(Unix))-formatted file using the following template:
 ```diff
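
To make the 13,000-token budget in the updated line concrete, here is a minimal sketch that measures an instance's `text` field with `tiktoken`'s `cl100k_base` encoding. The `test` split name and the `instance_id` column are assumptions based on the standard SWE-bench layout, not guarantees of this repository.

```python
# Minimal sketch: check that an instance's formatted context fits the 13K
# cl100k_base budget described above. Assumes `pip install datasets tiktoken`;
# the "test" split and `instance_id` column follow the usual SWE-bench layout.
from datasets import load_dataset
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used for OpenAI models
ds = load_dataset("princeton-nlp/SWE-bench_bm25_13K", split="test")

example = ds[0]
# disallowed_special=() keeps encode() from raising if the repository text
# happens to contain literal special-token strings such as "<|endoftext|>".
n_tokens = len(enc.encode(example["text"], disallowed_special=()))
print(f'{example["instance_id"]}: {n_tokens} cl100k_base tokens')
```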
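
And as a sketch of the intended use of the `text` column: prompt an LM, treat its output as a unified diff, and apply it in a checkout of the instance's repository. The `generate` function below is a hypothetical stand-in for whatever model call you use; it is not part of this dataset.

```python
# Sketch: turn the `text` prompt into a candidate patch and apply it.
import subprocess
from datasets import load_dataset

def generate(prompt: str) -> str:
    """Hypothetical stand-in for your LM call (API or local model)."""
    raise NotImplementedError

ds = load_dataset("princeton-nlp/SWE-bench_bm25_13K", split="test")  # split assumed
patch_text = generate(ds[0]["text"])  # models are instructed to emit a patch file

with open("model.patch", "w") as f:
    f.write(patch_text)

# Run inside a checkout of the instance's repository at its base commit.
subprocess.run(["git", "apply", "model.patch"], check=True)
```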