Text Generation
Transformers
PyTorch
olmo
Inference Endpoints
upiter committed
Commit 6e33bdb
1 Parent(s): 6ce6271

Update README.md

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
````diff
@@ -3,6 +3,8 @@ license: apache-2.0
 datasets:
 - bigcode/the-stack
 - HuggingFaceFW/fineweb
+base_model:
+- upiter/TinyCodeLM-400M
 ---
 
 
@@ -24,6 +26,9 @@ Despite being trained on only 72 billion tokens of text, the models outperform m
 
 **Instruction Tuning Data** TinyCodeLMs are instruction tuned on paired instruction and Python edit sequence data. These edit sequences are generated with the LintSeq algorithm over a source dataset of paired instruction and Python programs drawn from the Magicoder and StarCoder2 OSS-Instruct datasets (Wei et al., 2024).
 
+# Training Details
+TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs) for two epochs. Pretraining took about two days and six days, respectively. Instruction tuning was conducted on a single H100 GPU using DeepSpeed and took no more than several hours.
+
 # Benchmarks
 
 **Pretrained (Temperature 0)**
@@ -54,3 +59,6 @@ Despite being trained on only 72 billion tokens of text, the models outperform m
 primaryClass={cs.LG}
 }
 ```
+
+# Safety
+This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
````
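The instruction-tuning note in this commit describes training data in which a program is represented as a sequence of edits rather than as a single final file. The LintSeq algorithm itself is not reproduced here; purely as a hypothetical illustration of what edit-sequence data can look like, the sketch below uses Python's standard-library `difflib` to render successive snapshots of a small program as unified diffs. The `as_edit_sequence` helper and the example snapshots are this sketch's own inventions, not artifacts from the paper.

```python
import difflib


def as_edit_sequence(states):
    """Render successive program snapshots as unified diffs.

    A toy stand-in for edit-sequence data, NOT the LintSeq algorithm
    (which generates edit sequences using a linter over real programs).
    """
    edits = []
    for before, after in zip(states, states[1:]):
        diff = difflib.unified_diff(
            before.splitlines(), after.splitlines(), lineterm=""
        )
        edits.append("\n".join(diff))
    return edits


# Three snapshots of a small function, written incrementally.
states = [
    "",
    "def add(a, b):",
    "def add(a, b):\n    return a + b",
]

for edit in as_edit_sequence(states):
    print(edit, end="\n\n")
```

Each printed diff is one "edit" in the sequence; a model trained on such data learns to emit the next diff given the instruction and the program so far, rather than regenerating the whole file.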