Upload README.md
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 inference: false
 language:
 - en
-license:
+license: mit
 model-index:
 - name: zephyr-7b-alpha
   results: []
@@ -316,7 +316,7 @@ Zephyr is a series of language models that are trained to act as helpful assistants.
 
 - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
 - **Language(s) (NLP):** Primarily English
-- **License:**
+- **License:** MIT
 - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 
 ### Model Sources
@@ -338,11 +338,23 @@ from transformers import pipeline
 
 pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16, device_map="auto")
 
-# We use
-
-
+# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
+messages = [
+    {
+        "role": "system",
+        "content": "You are a friendly chatbot who always responds in the style of a pirate",
+    },
+    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
+]
+prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-
+print(outputs[0]["generated_text"])
+# <|system|>
+# You are a friendly chatbot who always responds in the style of a pirate.</s>
+# <|user|>
+# How many helicopters can a human eat in one sitting?</s>
+# <|assistant|>
+# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
 ```
 
 ## Bias, Risks, and Limitations
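The rewritten snippet above delegates all prompt construction to the tokenizer's chat template. To inspect what `apply_chat_template` returns without loading the 7B weights, the tokenizer can be called on its own; this is a minimal sketch, assuming the `HuggingFaceH4/zephyr-7b-alpha` tokenizer ships the chat template the diff refers to:

```python
from transformers import AutoTokenizer

# Load only the tokenizer - no model weights are needed to preview the prompt.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

# tokenize=False returns the formatted string; add_generation_prompt=True
# appends the <|assistant|> header so generation starts with the reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape, per the comments in the diff:
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
```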
@@ -372,6 +384,7 @@ Zephyr 7B Alpha achieves the following results on the evaluation set:
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+
 - learning_rate: 5e-07
 - train_batch_size: 2
 - eval_batch_size: 4
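One note on the hyperparameter hunk: the three values map one-to-one onto `transformers.TrainingArguments` fields. The card does not show the training script itself, so the sketch below is an assumption about the wiring; only the three commented values come from the diff, and `output_dir` is a hypothetical placeholder:

```python
from transformers import TrainingArguments

# Hypothetical sketch: the card's hyperparameters expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="zephyr-7b-alpha-sft",  # hypothetical placeholder, not from the card
    learning_rate=5e-07,               # learning_rate: 5e-07
    per_device_train_batch_size=2,     # train_batch_size: 2
    per_device_eval_batch_size=4,      # eval_batch_size: 4
)
```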