---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-100K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
---

<img src="https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B/resolve/main/ghp.png">
# Grafted-Hermetic-Platypus-B-2x7B

A mixture-of-experts (MoE) merge of the following models (a quick way to inspect the merged architecture is sketched below):
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-100K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-100K)
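
If, as the 2x7B naming suggests, the merge produced a standard Mixtral-style sparse-MoE checkpoint, you can confirm the architecture from the config alone without downloading the full weights. This is a hedged sketch; the exact config keys assume a Mixtral-style layout:

```python
from transformers import AutoConfig

# Hedged sketch: assumes the 2x7B graft yields a Mixtral-style config.
cfg = AutoConfig.from_pretrained("lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B")
print(cfg.model_type)                           # expected: "mixtral"
print(getattr(cfg, "num_local_experts", None))  # expected: 2, one expert per source model
```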

<br />
<br />

# Prompt Format

Both the default Mistral-Instruct tags and Alpaca formatting are fine, so use either:

```
<s>[INST] {sys_prompt} {instruction} [/INST]
```

or

```
{sys_prompt}

### Instruction:
{instruction}

### Response:

```

The tokenizer's default chat template is Alpaca this time around.
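
Since the Alpaca template is baked into the tokenizer, `apply_chat_template` should render the `### Instruction:` / `### Response:` blocks for you. A minimal sketch to check which template the tokenizer ships with (assuming the chat template is defined in the repo's tokenizer config):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B")
messages = [{"role": "user", "content": "Summarize the plot of Hamlet."}]
# With the Alpaca default, the rendered prompt should contain
# "### Instruction:" and end with "### Response:".
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```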

<br />
<br />

# Usage

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-B-2x7B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

messages = [{"role": "user", "content": "Give me a cooking recipe for an orange pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
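
Note that by default the pipeline echoes the prompt back as part of `generated_text`. If you only want the model's completion, the text-generation pipeline accepts the standard `return_full_text` flag:

```python
# Same call as above, but return only the newly generated tokens.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95, return_full_text=False)
```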