
This model is a layer-folded merge: layers from the Llama 3 8B Instruct base are duplicated with mergekit to grow the model to 21B parameters. Rather than a simple passthrough merge, the task arithmetic merge method was used. Further fine-tuning was then performed to rebaseline the model's weights and restore coherent inference.
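The actual mergekit config for this model is not published in the card, but a layer-duplicating merge of the kind described is normally expressed with a `slices` config. The sketch below is illustrative only: the layer ranges are assumptions (not the real ones used to reach 21B parameters), and it shows `passthrough` for the folding step, whereas the card states task arithmetic was used for the actual merge.

```yaml
# Hypothetical mergekit config sketch -- NOT the published recipe.
# Duplicating layer ranges of the base model increases depth (and
# parameter count) without training new weights.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 16]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 24]   # overlapping range folds layers back in
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./output-dir`; the duplicated layers are why follow-up fine-tuning is needed to rebaseline the weights.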

  • Q3_K_S GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q3_K_S-GGUF
  • Q4_K_M GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q4_K_M-GGUF
  • Q6_K GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q6_K-GGUF
  • Q8_0 GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q8_0-GGUF

Only minor follow-up inference testing was performed after training.

Uploaded model

  • Developed by: sydonayrex
  • License: Llama 3
  • Finetuned from model: sydonayrex/AI-Llama3-21B

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Llama image generated by Meta AI.

Model size: 21.3B params (Safetensors, BF16)
