
BLOSSOM-v5-llama3-8b

💻Github • 🚀Blossom Chat Demo

What's new?

The Blossom V5 series of models is fully trained on high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements.

Introduction

Blossom is a conversational large language model, fine-tuned from the Meta-Llama-3-8B pre-trained model on the Blossom Orca/Wizard/Chat/Math mixed dataset. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been open-sourced.

Training was conducted in two stages. The first stage used the 40K Wizard, 40K Orca, and 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used the 10K Blossom Chat multi-turn dialogue dataset, plus 10% of the first-stage data randomly sampled, training for 3 epochs.

Inference

Inference is performed in the form of dialogue continuation.

Single-turn dialogue

A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: 

Multi-turn dialogue

A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?<|end_of_text|>
|Human|: Generate a random number using python
|Bot|: 

Note: Append <|end_of_text|> to the end of each Bot turn in the conversation history.
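The prompt construction above can be sketched as a small helper. This is a minimal illustration of the dialogue-continuation format shown in the examples; the function name and structure are assumptions, not part of the model's API.

```python
# Build a Blossom-style dialogue-continuation prompt.
# Format (from the card): a system line, |Human|:/|Bot|: markers,
# and <|end_of_text|> after each completed Bot turn.

SYSTEM = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions."
)

def build_prompt(history, user_message):
    """history: list of (human, bot) turns already completed."""
    lines = [SYSTEM]
    for human, bot in history:
        lines.append(f"|Human|: {human}")
        lines.append(f"|Bot|: {bot}<|end_of_text|>")
    # The model continues generation after the final |Bot|: marker.
    lines.append(f"|Human|: {user_message}")
    lines.append("|Bot|: ")
    return "\n".join(lines)

prompt = build_prompt(
    [("hello", "Hello! How can I assist you today?")],
    "Generate a random number using python",
)
print(prompt)
```

The resulting string can then be passed to any standard causal-LM generation call (e.g. tokenize it and call the model's generate method), stopping when the model emits <|end_of_text|>.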

Model size: 8.03B params · Tensor type: BF16 (Safetensors)
