Qwen1.5-MoE-A2.7B

Introduction

Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.

For more details, please refer to our blog post and GitHub repo.

Model Details

Qwen1.5-MoE employs a Mixture-of-Experts (MoE) architecture in which the models are upcycled from dense language models. For instance, Qwen1.5-MoE-A2.7B is upcycled from Qwen-1.8B. It has 14.3B parameters in total, of which 2.7B are activated at runtime. While achieving performance comparable to Qwen1.5-7B, it requires only 25% of the training resources. We also observed that its inference speed is 1.74 times that of Qwen1.5-7B.
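
The total parameter count can be sanity-checked without downloading the weights. The sketch below instantiates the model on PyTorch's meta device, which allocates shape-only tensors; it assumes PyTorch >= 2.0 and a transformers build that supports this architecture (see Requirements below).

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
with torch.device("meta"):  # shape-only tensors, no real memory allocated
    model = AutoModelForCausalLM.from_config(config)

total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e9:.1f}B total parameters")  # expect ~14.3B
```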

Requirements

The code for Qwen1.5-MoE is included in the latest Hugging Face transformers. We advise you to build from source with pip install git+https://github.com/huggingface/transformers; otherwise, you might encounter the following error:

KeyError: 'qwen2_moe'.
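
If you are unsure whether your installed transformers build supports this architecture, a quick check (a sketch using the standard AutoConfig entry point) is to load the model config, which raises the KeyError above on older versions:

```python
from transformers import AutoConfig

# Resolving the config looks up the "qwen2_moe" model type; on an older
# transformers build this line raises KeyError: 'qwen2_moe'.
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
print(config.model_type)  # -> "qwen2_moe"
```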

Usage

We do not advise using the base language model directly for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model, as sketched below.
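
A minimal sketch of loading the base model and tokenizer as a starting point for post-training; the dtype and device-placement choices here are illustrative assumptions, not requirements:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-MoE-A2.7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # the released weights are BF16
    device_map="auto",   # requires accelerate; spreads layers across devices
)
# From here, pass `model` and `tokenizer` to your SFT / RLHF /
# continued-pretraining loop of choice.
```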
