iqbalamo93 committed
Commit c2eecdb
1 Parent(s): b2b1191

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -38,7 +38,7 @@ This is quantized adapters trained on the Ultrachat 200k dataset for the TinyLla
 
 ### How to use
 
-#### Method 1: Direct loading
+#### Method 1: Direct loading via AutoPeftModel
 ```python
 from peft import PeftModel, AutoPeftModelForCausalLM
 from transformers import pipeline, AutoTokenizer
@@ -60,11 +60,8 @@ pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
 print(pipe(prompt)[0]["generated_text"])
 
 ```
-#### Method 2: Merging with base mode explicitly
-todo
 
-
-### Method 3: direct loading
+### Method 2: direct loading AutoModel
 
 ```python
 model = AutoModelForCausalLM.from_pretrained(adapter_name,
@@ -78,4 +75,8 @@ Tell me something about Large Language Models.</s>
 
 pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
 print(pipe(prompt)[0]["generated_text"])
-```
+```
+
+#### Method 2: Merging with base mode explicitly
+todo
+