Custom GGUF quants of Meta's Llama-3.2-Instruct finetunes, where the Output Tensors are quantized to Q8_0 or F32 and the Embeddings are kept at F32.
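The repo naming scheme (OQ8_0-F32.EF32.IQ4_K-Q8_0) encodes that recipe: output tensor at Q8_0 (or F32), token embeddings at F32, and the remaining weights at a base quant type from IQ4_K up to Q8_0. Below is a minimal sketch of how such a quant can be produced with llama.cpp's llama-quantize tensor-type overrides; the binary path, file names, imatrix file, and the IQ4_K base type are assumptions (IQ4_K in particular may require the ik_llama.cpp fork rather than mainline llama.cpp), and this is not necessarily the exact pipeline behind these repos.

```python
# Minimal sketch (not necessarily the author's exact pipeline): build a GGUF where
# the output tensor is Q8_0, token embeddings stay at F32, and all other tensors
# use a base quant type, via llama.cpp's llama-quantize.
# Paths and file names below are hypothetical; IQ4_K may require the ik_llama.cpp
# fork (mainline llama.cpp offers types such as IQ4_XS instead).
import subprocess
from pathlib import Path

LLAMA_QUANTIZE = Path("llama.cpp/build/bin/llama-quantize")   # assumed build location
SRC = Path("Llama-3.2-3B-Instruct-F16.gguf")                  # full-precision GGUF (e.g. from convert_hf_to_gguf.py)
DST = Path("Llama-3.2-3B-Instruct-OQ8_0-F32.EF32.IQ4_K.gguf") # hypothetical output name
IMATRIX = Path("imatrix.dat")                                 # optional importance matrix

cmd = [
    str(LLAMA_QUANTIZE),
    "--imatrix", str(IMATRIX),        # optional: importance matrix for low-bit quants like IQ4_K
    "--token-embedding-type", "f32",  # keep token embeddings at F32
    "--output-tensor-type", "q8_0",   # output (lm_head) tensor at Q8_0; the F32 variants would pass "f32" here
    str(SRC),
    str(DST),
    "IQ4_K",                          # base quant type for the remaining tensors
]
subprocess.run(cmd, check=True)
```

The idea behind the recipe is that the token-embedding and output tensors tend to be more sensitive to quantization error than the transformer blocks, so holding them at F32/Q8_0 preserves quality for a modest size cost; the optional --imatrix flag points llama-quantize at an importance matrix such as those in the Joseph717171/Imatrices repo listed below.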
Joseph
Joseph717171
AI & ML interests
None yet
Collections (3)
Custom GGUF quants of Llama-3.1-8B-Instruct fine-tunes, where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32.
Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 266 • 1

Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 577 • 2

Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_base_Embeds_Initialized_dtypeF32-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 407 • 1
Models (29)

Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 577 • 2

Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 266 • 1

Joseph717171/Imatrices
Updated • 2

Joseph717171/Llama-3.1-SuperNova-Lite-14B
Text Generation • Updated • 12

Joseph717171/SuperNova-Lite-Hermes-3-Llama-3.1-8B_TIES_with_base_Embeddings_Pre-Initialized-dtypeF32
Text Generation • Updated • 51 • 2

Joseph717171/Llama-3.1-8B-InitializedEmbeddings_with_Hermes-3
Text Generation • Updated • 13

Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_Base_Embeds_Initialized_to_Special_Instruct_Toks_dtypeF32
Text Generation • Updated • 109 • 1

Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_base_Embeds_Initialized_dtypeF32-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 407 • 1

Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
Text Generation • Updated • 2.65k • 8

Joseph717171/Llama-3.2-3B-Instruct-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF
Updated • 176 • 1
Datasets
None public yet