
Model Card for Meta-Llama-3.1-70B-Instruct-NF4

This is a 4-bit quantized version of Llama 3.1 70B Instruct, quantized to NF4 using bitsandbytes and accelerate.

  • Developed by: Farid Saud @ DSRS
  • License: llama3.1
  • Base Model: meta-llama/Meta-Llama-3.1-70B-Instruct
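
The quantization was presumably produced with a bitsandbytes 4-bit config along these lines; a minimal sketch assuming standard BitsAndBytesConfig settings (the exact options used for this checkpoint are not documented here):

# Sketch of a 4-bit NF4 quantization pass; the exact settings are an assumption
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4, matching the repo name
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-70B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",  # accelerate shards the layers across available devices
)
model.save_pretrained("Meta-Llama-3.1-70B-Instruct-NF4")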

Use this model

Use a pipeline as a high-level helper:

# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="fsaudm/Meta-Llama-3.1-70B-Instruct-NF4")
pipe(messages)
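
Note that even at 4-bit precision a 70B model needs roughly 35-40 GB of GPU memory. A minimal sketch that lets accelerate place the weights automatically, assuming sufficient VRAM (device_map and max_new_tokens here are illustrative choices, not settings from this repo):

# Sketch: shard the quantized weights across available GPUs (requires accelerate)
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="fsaudm/Meta-Llama-3.1-70B-Instruct-NF4",
    device_map="auto",
)

messages = [{"role": "user", "content": "Who are you?"}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])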

Load model directly

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("fsaudm/Meta-Llama-3.1-70B-Instruct-NF4")
model = AutoModelForCausalLM.from_pretrained("fsaudm/Meta-Llama-3.1-70B-Instruct-NF4")
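
Loaded this way, generation goes through the tokenizer's chat template. A minimal end-to-end sketch, assuming device_map="auto" and enough GPU memory (max_new_tokens is an illustrative choice):

# Sketch: load the quantized checkpoint and generate a chat completion
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "fsaudm/Meta-Llama-3.1-70B-Instruct-NF4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))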

Information about the base model can be found in the original meta-llama/Meta-Llama-3.1-70B-Instruct model card.

Model size: 37.4B params (Safetensors; tensor types F32, FP16, U8)
