peterpeter8585 committed on
Commit
311e4b2
1 Parent(s): 5edbc04

Upload README.md with huggingface_hub

---
base_model: HuggingFaceH4/zephyr-7b-beta
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
widget:
- example_title: Pirate!
  messages:
  - role: system
    content: You are a pirate chatbot who always responds with Arr!
  - role: user
    content: There's a llama on my lawn, how can I get rid of him?
  output:
    text: Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare sight,
      but I've got a plan that might help ye get rid of 'im. Ye'll need to gather
      some carrots and hay, and then lure the llama away with the promise of a tasty
      treat. Once he's gone, ye can clean up yer lawn and enjoy the peace and quiet
      once again. But beware, me hearty, for there may be more llamas where that one
      came from! Arr!
model-index:
- name: zephyr-7b-beta
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.03071672354948
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.35570603465445
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Drop (3-Shot)
      type: drop
      split: validation
      args:
        num_few_shot: 3
    metrics:
    - type: f1
      value: 9.66243708053691
      name: f1 score
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.44916942762855
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 12.736921910538287
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.7426992896606
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AlpacaEval
      type: tatsu-lab/alpaca_eval
    metrics:
    - type: unknown
      value: 0.906
      name: win rate
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MT-Bench
      type: unknown
    metrics:
    - type: unknown
      value: 7.34
      name: score
    source:
      url: https://huggingface.co/spaces/lmsys/mt-bench
---

# peterpeter8585/zephyr-7b-beta-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) with llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo peterpeter8585/zephyr-7b-beta-Q4_K_M-GGUF --hf-file zephyr-7b-beta-q4_k_m.gguf -p "The meaning to life and the universe is"
```
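
The `-p` flag passes raw text, so for chat-style use you can apply Zephyr's chat format yourself: each turn is wrapped in a `<|system|>`, `<|user|>`, or `<|assistant|>` header and terminated with the `</s>` end-of-sequence token (see the original model card for the exact template). A minimal Python sketch, assuming that template:

```python
def zephyr_prompt(system: str, user: str) -> str:
    """Build a prompt in Zephyr's chat format.

    Each turn ends with the </s> end-of-sequence token; the trailing
    <|assistant|> header cues the model to generate its reply.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = zephyr_prompt(
    "You are a pirate chatbot who always responds with Arr!",
    "There's a llama on my lawn, how can I get rid of him?",
)
print(prompt)
```

Passing the resulting string to `-p` gives chat-style responses from the plain completion interface.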

### Server:
```bash
llama-server --hf-repo peterpeter8585/zephyr-7b-beta-Q4_K_M-GGUF --hf-file zephyr-7b-beta-q4_k_m.gguf -c 2048
```
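
Once running, `llama-server` exposes an OpenAI-compatible HTTP API (port 8080 by default). A sketch of a chat request using only the Python standard library; the host, port, and `max_tokens` value here are illustrative assumptions:

```python
import json
from urllib import request

# Assumed local endpoint: llama-server listens on port 8080 by default.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the GGUF file format?"},
    ],
    "max_tokens": 128,  # illustrative cap on reply length
}
req = request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server from the command above is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```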

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo peterpeter8585/zephyr-7b-beta-Q4_K_M-GGUF --hf-file zephyr-7b-beta-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo peterpeter8585/zephyr-7b-beta-Q4_K_M-GGUF --hf-file zephyr-7b-beta-q4_k_m.gguf -c 2048
```
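
To sanity-check a downloaded file, note that every GGUF file begins with the 4-byte magic `GGUF`, followed by a little-endian `uint32` format version. A small Python check; the filename below is a placeholder for wherever the file was cached:

```python
import struct

def check_gguf_header(path: str) -> int:
    """Verify the GGUF magic bytes and return the format version.

    A GGUF file starts with the magic b"GGUF", then a little-endian
    uint32 version number.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic is {magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Point this at the downloaded file:
# print(check_gguf_header("zephyr-7b-beta-q4_k_m.gguf"))
```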