
Mobius RWKV 12B v4 version

This is an experimental model for long context; check out our new model here: Timemobius

Good at writing and role play; it can also do some RAG and tries to mitigate repetition.

Mobius Chat 12B 128K

Introduction

Mobius is an RWKV v5.2 architecture model, a state-based RNN+CNN+Transformer mixed language model pretrained on a large corpus. Compared with the previously released Mobius, the improvements include:

  • Only 24 GB of VRAM needed to run this model locally in fp16;
  • Significant performance improvement;
  • Multilingual support;
  • Stable support for 128K context length;
  • Base model: Mobius-mega-12B-128k-base.

Usage

We encourage you to use few-shot prompting with this model. Although directly using the `User: xxxx\n\nAssistant: xxx\n\n` format also works well, few-shot prompting can unlock the model's full potential.

Recommended temperature/top-p pairs: 0.7/0.6, 1/0.3, 1.5/0.3, 0.2/0.8
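The prompt layout and sampling presets above can be sketched as plain Python. Only the `User: ...\n\nAssistant: ...\n\n` layout and the temperature/top-p values come from this card; the function and variable names are illustrative, not part of any Mobius API.

```python
# Build a few-shot prompt in the User:/Assistant: format recommended above.
def build_prompt(examples, question):
    """Concatenate few-shot (user, assistant) pairs, then the new question."""
    parts = []
    for user_turn, assistant_turn in examples:
        parts.append(f"User: {user_turn}\n\nAssistant: {assistant_turn}\n\n")
    parts.append(f"User: {question}\n\nAssistant:")
    return "".join(parts)

# Recommended temperature/top-p pairs from this card.
SAMPLING_PRESETS = [
    {"temperature": 0.7, "top_p": 0.6},
    {"temperature": 1.0, "top_p": 0.3},
    {"temperature": 1.5, "top_p": 0.3},
    {"temperature": 0.2, "top_p": 0.8},
]

prompt = build_prompt(
    [("What is RWKV?", "RWKV is a state-based RNN language model architecture.")],
    "Summarize its main advantage.",
)
```

The resulting string can be fed to whatever inference backend you run the model with (e.g. the Ai00 server), together with one of the sampling presets.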

More details

Mobius 12B 128k is based on the RWKV v5.2 architecture, a leading state-based RNN+CNN+Transformer mixed large language model focused on the open-source community:

  • 10~100x training/inference cost reduction;
  • state-based, with selective memory, which makes it good at grokking context;
  • community support.

Requirements

24 GB of VRAM to run fp16, 12 GB for int8, and 6 GB for nf4 with the Ai00 server.
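The precision-to-VRAM trade-off above can be captured in a small helper that picks the highest precision fitting the available VRAM. The helper name is illustrative and not part of any Mobius or Ai00 API; the GB figures are the ones stated above.

```python
# Approximate VRAM needed (in GB) per precision, per the requirements above.
VRAM_GB = {"fp16": 24, "int8": 12, "nf4": 6}

def best_precision(available_gb):
    """Pick the highest precision that fits in the available VRAM."""
    for mode in ("fp16", "int8", "nf4"):  # ordered highest to lowest precision
        if available_gb >= VRAM_GB[mode]:
            return mode
    return None  # not enough VRAM even for nf4
```

For example, a 16 GB GPU would run the int8 quantization, while an 8 GB GPU would need nf4.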

Future plans

If you need an HF (Hugging Face Transformers) version, let us know.

Mobius-Chat-12B-128k
