afrideva committed on
Commit
084ea81
1 Parent(s): 879836c

Upload README.md with huggingface_hub

README.md ADDED
@@ -0,0 +1,65 @@
---
base_model: habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1
datasets:
- databricks/databricks-dolly-15k
inference: false
language:
- en
license: apache-2.0
model_creator: habanoz
model_name: TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF

Quantized GGUF model files for [TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1](https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1) from [habanoz](https://huggingface.co/habanoz).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q2_k.gguf) | q2_k | 483.12 MB |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q3_k_m.gguf) | q3_k_m | 550.82 MB |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q4_k_m.gguf) | q4_k_m | 668.79 MB |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q5_k_m.gguf) | q5_k_m | 783.02 MB |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q6_k.gguf) | q6_k | 904.39 MB |
| [tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF/resolve/main/tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q8_0.gguf) | q8_0 | 1.17 GB |
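The files above work with any GGUF-compatible runtime. As a minimal sketch (not part of the original card), the snippet below downloads the q4_k_m quant with `huggingface_hub` and runs a completion with `llama-cpp-python`; the context length and the plain-text prompt format are assumptions:

```python
# Minimal usage sketch (assumes `pip install llama-cpp-python huggingface_hub`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo; any filename from the table above works.
model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF",
    filename="tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # n_ctx is an assumption
output = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```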

## Original Model Card:
TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T fine-tuned on the databricks/databricks-dolly-15k dataset.

Training took 1 hour on an `ml.g5.xlarge` instance.

```python
hyperparameters = {
    'num_train_epochs': 3,             # number of training epochs
    'per_device_train_batch_size': 6,  # batch size for training
    'gradient_accumulation_steps': 2,  # number of update steps to accumulate
    'gradient_checkpointing': True,    # saves memory but slows the backward pass
    'bf16': True,                      # use bfloat16 precision
    'tf32': True,                      # use tf32 precision
    'learning_rate': 2e-4,             # learning rate
    'max_grad_norm': 0.3,              # maximum norm for gradient clipping
    'warmup_ratio': 0.03,              # warmup ratio
    'lr_scheduler_type': 'constant',   # learning rate scheduler
    'save_strategy': 'epoch',          # save strategy for checkpoints
    'logging_steps': 10,               # log every x steps
    'merge_adapters': True,            # whether to merge LoRA adapters into the model (needs more memory)
    'use_flash_attn': True,            # whether to use Flash Attention
}
```
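
Keys like `merge_adapters` and `use_flash_attn` suggest a custom training script; a dict in this shape is typically handed to a SageMaker `HuggingFace` estimator, which forwards each key to the script as a command-line argument. A hedged sketch of that wiring follows (the entry point, IAM role, and container versions are assumptions, not from the card):

```python
# Hypothetical launch sketch, assuming a training script `train.py` that
# parses the hyperparameters above as command-line arguments.
from sagemaker.huggingface import HuggingFace

estimator = HuggingFace(
    entry_point="train.py",          # assumed script name
    instance_type="ml.g5.xlarge",    # the instance named above
    instance_count=1,
    role="<your-sagemaker-execution-role-arn>",  # placeholder
    transformers_version="4.28",     # assumed container versions
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters=hyperparameters,
)
estimator.fit()  # optionally pass data channels, e.g. {"training": "s3://..."}
```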