Commit 21541f1 by iqbalamo93 (parent: a97d8aa): Update README.md
These are quantized adapters trained on the Ultrachat 200k dataset for the TinyLlama-1.1B Intermediate Step 1431k 3T model.

```python
adapter_name = 'iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat'
```

## Model Details

### Model Description
These are quantized adapters trained on the Ultrachat 200k dataset for the TinyLlama-1.1B Intermediate Step 1431k 3T model.

- Finetuned from model: [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)

### How to use

#### Method 1: Direct loading
```python
from peft import AutoPeftModelForCausalLM
from transformers import pipeline, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
adapter_name = 'iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat'

# Load the adapters together with their base model in a single call
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_name,
    device_map="auto"
)
# Merge the adapter weights into the base model for faster inference
model = model.merge_and_unload()

prompt = """<|user|>
Tell me something about Large Language Models.</s>
<|assistant|>
"""

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```

#### Method 2: Merging with the base model explicitly
todo