ammarnasr committed on
Commit
190cc9e
1 Parent(s): 9edbbcc

Update README.md

Files changed (1): README.md +30 -1
README.md CHANGED
@@ -25,5 +25,34 @@ dataset_info:
  num_bytes: 3982797.09401595
  num_examples: 897
  download_size: 1323156008
- dataset_size: 3980279540.0
+ dataset_size: 3980279540
+ task_categories:
+ - text-generation
+ language:
+ - code
+ tags:
+ - code
+ pretty_name: TheStack-Java
+ size_categories:
+ - 1M<n<10M
  ---
+
+ ## Dataset 1: TheStack - Java - Cleaned
+
+ **Description**: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3TB of GitHub data covering 48 programming languages. We selected a small portion of this dataset to optimize smaller language models for Java, a popular statically typed language.
+
+ **Target Language**: Java
+
+ **Dataset Size**:
+ - Training: 900,000 files
+ - Validation: 50,000 files
+ - Test: 50,000 files
+
+ **Preprocessing**:
+ 1. Selected Java as the target language due to its popularity on GitHub.
+ 2. Filtered out files with an average line length > 100 characters, a maximum line length > 1000 characters, or an alphabet ratio < 25%.
+ 3. Split the remaining files into 90% training, 5% validation, and 5% test sets.
+
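The three heuristics in step 2 can be expressed as a single per-file predicate. A minimal sketch (the `passes_filters` helper is illustrative, not the actual preprocessing script):

```python
def passes_filters(source: str) -> bool:
    """Heuristic quality filters for one raw source file (sketch)."""
    lines = source.splitlines()
    if not lines:
        return False
    line_lengths = [len(line) for line in lines]
    avg_len = sum(line_lengths) / len(line_lengths)
    max_len = max(line_lengths)
    # Fraction of characters that are alphabetic letters.
    alpha_ratio = sum(c.isalpha() for c in source) / max(len(source), 1)
    # A file is dropped if it trips any of the three heuristics.
    return avg_len <= 100 and max_len <= 1000 and alpha_ratio >= 0.25
```

A file failing any one check (e.g. a minified one-liner, or a file that is mostly digits and punctuation) is excluded from the corpus.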
+ **Tokenizer**: Byte Pair Encoding (BPE) tokenizer with tab and whitespace tokens. GPT-2 vocabulary extended with special tokens.
+
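Extending the vocabulary means appending runs of spaces and tabs (common indentation units in code) as single tokens after the existing GPT-2 entries. A hedged sketch of the idea, in which the token strings, `max_run` cutoff, and `extend_vocab` helper are all assumptions rather than the card's actual tokenizer code:

```python
def whitespace_tokens(max_run: int = 8) -> list[str]:
    """Runs of 2..max_run spaces and 1..max_run tabs (illustrative choice)."""
    tokens = [" " * n for n in range(2, max_run + 1)]
    tokens += ["\t" * n for n in range(1, max_run + 1)]
    return tokens

def extend_vocab(vocab: dict[str, int], new_tokens: list[str]) -> dict[str, int]:
    """Append new tokens after the existing vocabulary entries (sketch)."""
    vocab = dict(vocab)
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

# Stand-in for the ~50K-entry GPT-2 vocabulary.
base = {"<|endoftext|>": 0}
vocab = extend_vocab(base, whitespace_tokens())
```

With Hugging Face `transformers`, the same effect is achieved by calling `tokenizer.add_tokens(...)` on a GPT-2 tokenizer and resizing the model's embedding matrix to match.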
+ **Training Sequences**: Sequences constructed by joining training data text to reach a context length of 2048 tokens (1024 tokens for full fine-tuning).
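The sequence construction above is standard example packing: tokenized files are concatenated, typically with a separator token between documents, and sliced into fixed-length windows. A minimal sketch assuming token-ID lists and a hypothetical separator ID:

```python
def pack_sequences(docs: list[list[int]], context_length: int = 2048,
                   sep_id: int = 0) -> list[list[int]]:
    """Join tokenized documents and slice into fixed-length training sequences."""
    stream: list[int] = []
    for doc in docs:
        stream.extend(doc)
        stream.append(sep_id)  # separator between documents
    # Drop the trailing partial window so every sequence is full length.
    n_full = len(stream) // context_length
    return [stream[i * context_length:(i + 1) * context_length]
            for i in range(n_full)]
```

For full fine-tuning, the same routine would be called with `context_length=1024`, matching the shorter context noted above.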