
Llama-3-2B-Base

Llama-3-2B-Base is a trimmed version of Meta's official Llama-3 8B base model, reduced to roughly 2 billion parameters. The smaller footprint makes it considerably more computationally efficient while retaining a significant portion of the original model's capabilities. It is a base model and has not been fine-tuned for any specific task. It is designed to bring large language models (LLMs) to environments with limited computational resources, offering a practical balance between performance and resource usage for users who cannot run Meta's larger, resource-intensive releases.

Important: This project is not affiliated with Meta.

Uses

This model can be fine-tuned for a variety of natural language processing tasks, including:

  • Text generation
  • Question answering
  • Sentiment analysis
  • Translation
  • Summarization

Bias, Risks, and Limitations

While Llama-3-2B-Base is a capable model, it is important to be aware of its limitations and potential biases. As with any language model, it may generate outputs that are factually incorrect or biased, and it may produce offensive or inappropriate content. Users and developers should be aware of these risks and take appropriate measures to mitigate them.

How to Use

To use Llama3-2b, you can load the model using the Hugging Face Transformers library in Python:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Note: the repository ID must not include a trailing slash.
tokenizer = AutoTokenizer.from_pretrained("andrijdavid/Llama-3-2B-Base")
model = AutoModelForCausalLM.from_pretrained("andrijdavid/Llama-3-2B-Base")
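Once the model and tokenizer are loaded, text generation follows the standard Transformers pattern. A minimal sketch; the prompt and the `max_new_tokens` value are illustrative, and since this is a base (not instruction-tuned) model, it continues text rather than following instructions:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "andrijdavid/Llama-3-2B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A base model completes text, so phrase the input as a continuation prompt.
prompt = "The advantages of smaller language models include"
inputs = tokenizer(prompt, return_tensors="pt")

# Keep the completion short; tune max_new_tokens and sampling as needed.
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling parameters (`do_sample`, `temperature`, `top_p`) can be passed to `generate` to trade determinism for diversity.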
Model size: 2.36B parameters (Safetensors, BF16)