Mistral-ized TinyLlama: the TinyLlama weights re-labeled as a Mistral model, since Flash Attention training on the Llama architecture with flash-attn is buggy.

It's based on the 3T-token base model (not the chat-tuned version).
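
For readers unfamiliar with the trick: Llama and Mistral share module names and tensor shapes at this scale, so the conversion amounts to copying the weights one-to-one into a Mistral config whose sliding window is set wide enough to never truncate attention. Below is a minimal sketch of that idea, assuming the 3T TinyLlama checkpoint (TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) as the source; it is an illustration, not the exact script used for this repo:

```python
import torch
from transformers import LlamaForCausalLM, MistralConfig, MistralForCausalLM

# Source checkpoint: assumed to be the 3T-token TinyLlama base model.
src = LlamaForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    torch_dtype=torch.float32,
)
cfg = src.config

# Mistral's config mirrors Llama's for every field TinyLlama uses; setting
# sliding_window to the full context length means attention is never truncated.
dst = MistralForCausalLM(MistralConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rms_norm_eps=cfg.rms_norm_eps,
    rope_theta=cfg.rope_theta,
    sliding_window=cfg.max_position_embeddings,
    tie_word_embeddings=cfg.tie_word_embeddings,
))

# Parameter names and shapes line up one-to-one between the two architectures.
dst.load_state_dict(src.state_dict())
dst.save_pretrained("mistral-1.1b-testing")
```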

Not extensively tested.

Enjoy!
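
To try it, here is a minimal loading sketch, assuming a recent transformers release with Flash Attention 2 support, the flash-attn package installed, and a CUDA GPU (flash-attn requires half precision):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "optimum/mistral-1.1b-testing"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # flash-attn needs fp16/bf16
    attn_implementation="flash_attention_2",  # the point of the mistral-ization
    device_map="auto",                        # needs the accelerate package
)

inputs = tokenizer("TinyLlama is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```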
