LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct

Repository Overview

This repository hosts LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM, combining the strengths of both models for stronger vision-language understanding.

Training Strategy

  • Pretraining: Only the vision-to-language projector is trained; the rest of the model is frozen.
  • Fine-tuning: All model parameters, including the LLM, are fine-tuned; only the vision backbone (CLIP) is kept frozen.
  • Note: During both pretraining and fine-tuning, the vision backbone (CLIP) is augmented with multi-scale features following S2-Wrapper. A minimal sketch of this freezing scheme is shown after this list.
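
The two-stage freezing scheme above is easy to express in PyTorch. This is a minimal sketch, assuming LLaVA's usual module naming (mm_projector for the projector, vision_tower for the CLIP backbone); the exact attribute names in this checkpoint are an assumption:

```python
import torch.nn as nn

def set_trainable(model: nn.Module, stage: str) -> None:
    """Apply the two-stage freezing scheme described above."""
    for name, param in model.named_parameters():
        if stage == "pretrain":
            # Stage 1: update only the vision-to-language projector.
            param.requires_grad = "mm_projector" in name
        else:  # "finetune"
            # Stage 2: update everything except the CLIP vision backbone.
            param.requires_grad = "vision_tower" not in name
```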

Key Components

  • Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct
  • Base Large Multimodal Model (LMM): LLaVA-v1.5

Training Data

  • Pretraining: LCS-558K (the standard LLaVA v1.5 image-caption pretraining set)
  • Fine-tuning: LLaVA-Instruct-665K visual instruction-tuning data

Download It As

```shell
git lfs install
git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2
```
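
Once cloned, the checkpoint is intended to be used through the LLaVA/LLaVA++ codebase. A minimal loading sketch, assuming the LLaVA repository is installed; the model_name hint passed to load_pretrained_model is an assumption used by the loader to select the LLaVA architecture:

```python
from llava.model.builder import load_pretrained_model

# Local path produced by the `git clone` above.
model_path = "LLaVA-Meta-Llama-3-8B-Instruct-FT-S2"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,                # full fine-tune; no separate base model needed
    model_name="llava-llama-3-8b",  # loader hint (assumption)
)
```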

Contributions

Contributions are welcome! Please 🌟 our LLaVA++ repository if you find this model useful.

