
## Model Description

- **Developed by:** Writer
- **Model type:** Llama
- **Language(s) (NLP):** English
- **License:** Writer

## Uses

### Direct Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model
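A minimal loading sketch (not part of the original card): it assumes this checkpoint follows the standard Llama architecture supported by `transformers`, and uses the repo id that appears in the merge configuration below. A 225B-parameter model in FP16 needs several hundred GB of accelerator memory, so `device_map="auto"` sharding is essential.

```python
# Hypothetical usage sketch; the repo id below is taken from the merge
# configuration in this card and is an assumption about where the weights live.
MODEL_ID = "mlabonne/Meta-Llama-3-225B-Instruct"


def load(model_id: str = MODEL_ID):
    """Load tokenizer and model, sharding the weights across available devices."""
    # Imported lazily so the sketch can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the checkpoint's FP16 weights
        device_map="auto",    # shard across all visible GPUs
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load()
    inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```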

## Merge Details

This model was merged using the passthrough merge method.

### Models Merged

The following model was included in the merge:

- mlabonne/Meta-Llama-3-225B-Instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [10, 30]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [20, 40]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [30, 50]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [40, 60]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [50, 70]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [60, 80]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [70, 90]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [80, 100]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [90, 110]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [100, 120]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [110, 130]
    model: mlabonne/Meta-Llama-3-225B-Instruct
- sources:
  - layer_range: [120, 140]
    model: mlabonne/Meta-Llama-3-225B-Instruct
merge_method: passthrough
dtype: float16
```
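As a sanity check (not part of the original card), the slice ranges above can be tallied to see how deep the resulting passthrough stack is:

```python
# Reconstruct the 13 layer ranges from the YAML config above:
# starts 0, 10, ..., 120, each slice taking 20 consecutive layers.
ranges = [(start, start + 20) for start in range(0, 130, 10)]

total_layers = sum(end - start for start, end in ranges)
print(len(ranges), total_layers)  # 13 slices, 260 stacked layers
```

Each consecutive pair of slices overlaps by 10 layers, so mid-network layers are duplicated in the merged model; this duplication is how a passthrough self-merge grows the parameter count well beyond the source model's.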
**Model size:** 225B parameters (FP16, Safetensors)