
# mpt-7b-instruct: sharded

This is a version of the mpt-7b-instruct model, sharded into 2 GB chunks for low-RAM loading (e.g. on Colab). The weights are stored in bfloat16, so in principle the model can run on CPU, though generation will be very slow. Original code and credits go to mpt-7b-storywriter-sharded; see the community discussion on how to replicate this.
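
For reference, a sharded copy like this one can be produced with `save_pretrained`'s `max_shard_size` argument. This is a minimal sketch, not necessarily the exact procedure used for this repo (that is covered in the linked community discussion):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the original checkpoint (needs enough RAM/VRAM for the full model),
# then re-save it split into 2 GB shards.
model = AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.save_pretrained('mpt-7b-instruct-sharded', max_shard_size='2GB')
```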

Please refer to the previously linked repo for details on usage and implementation. This model was downloaded from the original repo under the Apache-2.0 license and is redistributed under the same license.

## Basic Usage

Note when using: mpt-7b-instruct is an instruction-tuned model, so it responds best when the prompt is phrased as an explicit instruction rather than as text to be continued; see the prompt-format sketch below.
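
As a sketch of what that looks like in practice: the original mosaicml/mpt-7b-instruct card documents an Alpaca-style template (the dolly_hhrlhf format). Wrapping your instruction like this is an assumption carried over from that card, not something verified against this sharded copy:

```python
# Alpaca-style instruction template used for MPT-7B-Instruct fine-tuning
# (per the original mosaicml/mpt-7b-instruct model card).
INSTRUCTION_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = INSTRUCTION_TEMPLATE.format(
    instruction="Explain what model sharding is in one paragraph."
)
```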

Install/upgrade packages:

```bash
pip install -U torch transformers accelerate einops
```

Load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'jprafael/mpt-7b-instruct-sharded'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    revision='8d8911ad980f48f8a791e5f5876dea891dcbc064',  # optional, but pinning a revision is a good idea
    device_map='auto',
    load_in_8bit=False,  # install bitsandbytes, then set to True for 8-bit loading
)
model = torch.compile(model)  # optional: PyTorch 2.x compilation for faster inference
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
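
If GPU memory is tight, the same checkpoint can instead be loaded in 8-bit. This is a sketch assuming `bitsandbytes` is installed; 8-bit weights take roughly half the memory of bfloat16:

```python
# Alternative: 8-bit quantized loading via bitsandbytes.
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    device_map='auto',   # required for 8-bit loading
    load_in_8bit=True,   # quantize weights to int8 on load
)
```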

Then you can use `model.generate()` as you would normally; see the notebook for details.
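
A minimal generation sketch, reusing the `prompt` built above (the sampling parameters are illustrative, not tuned):

```python
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # MPT's tokenizer has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```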

