# Mistral 7B v0.1

Implementation of the Mistral 7B model by the phospho team. You can test it directly in the HuggingFace Space.
## Use in transformers
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline

# Load the tokenizer and model weights (bfloat16 halves memory vs. float32)
tokenizer = LlamaTokenizer.from_pretrained("phospho-app/mistral_7b_V0.1")
model = LlamaForCausalLM.from_pretrained(
    "phospho-app/mistral_7b_V0.1", torch_dtype=torch.bfloat16
)

# Build a text-generation pipeline from the loaded model and tokenizer
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```
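Passing `torch_dtype=torch.bfloat16` matters for a model of this size: the weights alone take roughly half the memory they would in float32. A rough back-of-the-envelope estimate, assuming ~7 billion parameters (activations and KV cache add more on top):

```python
# Rough memory estimate for the model weights at different precisions.
# NUM_PARAMS is an approximation -- "7B" models are rarely exactly 7e9.
NUM_PARAMS = 7_000_000_000

def weight_memory_gib(num_params: int, bytes_per_param: int) -> float:
    """Return the memory needed for the weights alone, in GiB."""
    return num_params * bytes_per_param / 1024**3

bf16_gib = weight_memory_gib(NUM_PARAMS, 2)  # bfloat16: 2 bytes per parameter
fp32_gib = weight_memory_gib(NUM_PARAMS, 4)  # float32: 4 bytes per parameter
print(f"bfloat16: ~{bf16_gib:.1f} GiB, float32: ~{fp32_gib:.1f} GiB")
```

In practice this means the bfloat16 checkpoint fits on a single 24 GB GPU, while a float32 copy would not.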