Text Generation
Transformers
PyTorch
olmo
Inference Endpoints
upiter committed on
Commit 6ce6271
1 Parent(s): 351df30

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -27,7 +27,6 @@ Despite being trained on only 72 billion tokens of text, the models outperform m
 # Benchmarks
 
 **Pretrained (Temperature 0)**
-
 |**Benchmark**|**TinyCodeLM 150M** |**TinyCodeLM 400M** |
 | :--------------------- | -----------------: | -----------------: |
 | HumanEval, pass@1 | 6.1 | 6.7 |
@@ -35,7 +34,6 @@ Despite being trained on only 72 billion tokens of text, the models outperform m
 
 
 **Edit Sequence / Instruction Tuned (Temperature-Tuned)**
-
 |**Benchmark** |**TinyCodeLM 150M** |**TinyCodeLM 400M** |
 | :----------- | -----------------: | -----------------: |
 | HumanEval, pass@1 | 12.8 | 13.4 |
@@ -47,7 +45,7 @@ Despite being trained on only 72 billion tokens of text, the models outperform m
 # Citation
 
 ```
-@misc{piterbarg2024training,
+@misc{piterbarg2024editseq,
 title={Training Language Models on Synthetic Edit Sequences Improves Code Synthesis},
 author={Ulyana Piterbarg and Lerrel Pinto and Rob Fergus},
 year={2024},
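The benchmark tables in this diff report HumanEval pass@1. As background, here is a minimal sketch of the standard unbiased pass@k estimator commonly used for HumanEval scoring; the function name and example counts below are illustrative, not part of this repository:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generations of which c pass
    the unit tests, is correct."""
    if n - c < k:
        # Fewer failing samples than draws: at least one draw must pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With greedy decoding (temperature 0) there is one sample per problem,
# so pass@1 reduces to the fraction of problems solved outright.
print(pass_at_k(1, 1, 1))   # solved problem under greedy decoding
print(pass_at_k(10, 2, 1))  # 2 of 10 samples pass
```

The reported scores (e.g. 6.1 for TinyCodeLM 150M) are this quantity averaged over all HumanEval problems and expressed as a percentage.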