---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---

<hr>

# Llama.cpp imatrix quantizations of nvidia/Llama-3.1-Minitron-4B-Width-Base

<img src="https://cdn-uploads.huggingface.co/production/uploads/646410e04bf9122922289dc7/p0fK4st3FF-Nd9oL9qSvd.jpeg" alt="Llama-3.1-Minitron-4B-Width-Base" width="60%"/>

Using llama.cpp commit [2e59d61](https://github.com/ggerganov/llama.cpp/commit/2e59d61) for quantization.

Original model: https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base

All quants were made using the imatrix option and Bartowski's [calibration file](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
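
For reference, imatrix quantization in llama.cpp is a two-step process: compute an importance matrix over a calibration text, then pass it to the quantizer. A minimal sketch under assumed file names (not the exact commands used for this repo):

```
# 1. Compute the importance matrix from the calibration text
$ ./llama-imatrix -m Llama-3.1-Minitron-4B-Width-Base-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize with the importance matrix (repeated for each quant type in the table below)
$ ./llama-quantize --imatrix imatrix.dat Llama-3.1-Minitron-4B-Width-Base-f16.gguf Llama-3.1-Minitron-4B-Width-Base-IQ4_XS.gguf IQ4_XS
```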

<hr>

# Perplexity table (the lower the better)

| Quant   | Size (MB) | PPL     | Size (% of F16) | Accuracy (F16 PPL / PPL, %) | PPL error (±) |
| ------- | --------- | ------- | -------- | ------------ | -------------- |
| IQ1_S   | 1158      | 81.7502 | 13.44    | 9.26         | 0.69197        |
| IQ1_M   | 1227      | 40.8601 | 14.24    | 18.53        | 0.31979        |
| IQ2_XXS | 1343      | 16.5816 | 15.59    | 45.67        | 0.11466        |
| IQ2_XS  | 1448      | 13.024  | 16.81    | 58.14        | 0.08768        |
| IQ2_S   | 1551      | 12.6045 | 18       | 60.07        | 0.08478        |
| IQ2_M   | 1643      | 11.0911 | 19.07    | 68.27        | 0.07374        |
| Q2_K_S  | 1654      | 11.0796 | 19.2     | 68.34        | 0.07646        |
| Q2_K    | 1755      | 10.3111 | 20.37    | 73.44        | 0.07045        |
| IQ3_XXS | 1794      | 9.342   | 20.82    | 81.05        | 0.0612         |
| IQ3_XS  | 1934      | 9.4403  | 22.45    | 80.21        | 0.06137        |
| Q3_K_S  | 2005      | 8.8949  | 23.27    | 85.13        | 0.05946        |
| IQ3_S   | 2017      | 9.0714  | 23.41    | 83.47        | 0.05851        |
| IQ3_M   | 2083      | 8.3352  | 24.18    | 90.84        | 0.0534         |
| Q3_K_M  | 2191      | 8.1839  | 25.43    | 92.52        | 0.05408        |
| Q3_K_L  | 2351      | 8.093   | 27.29    | 93.56        | 0.05352        |
| IQ4_XS  | 2419      | 7.774   | 28.08    | 97.4         | 0.05097        |
| Q4_0    | 2533      | 7.8479  | 29.4     | 96.49        | 0.05132        |
| IQ4_NL  | 2538      | 7.7697  | 29.46    | 97.46        | 0.05091        |
| Q4_K_S  | 2541      | 7.8125  | 29.49    | 96.92        | 0.05101        |
| Q4_K_M  | 2650      | 7.7376  | 30.76    | 97.86        | 0.05038        |
| Q4_1    | 2772      | 7.8155  | 32.17    | 96.89        | 0.05116        |
| Q5_K_S  | 3017      | 7.6649  | 35.02    | 98.79        | 0.05021        |
| Q5_0    | 3024      | 7.6407  | 35.1     | 99.1         | 0.0499         |
| Q5_K_M  | 3081      | 7.6283  | 35.76    | 99.26        | 0.04985        |
| Q5_1    | 3263      | 7.6439  | 37.87    | 99.06        | 0.04996        |
| Q6_K    | 3539      | 7.587   | 41.07    | 99.8         | 0.04945        |
| Q8_0    | 4581      | 7.5739  | 53.17    | 99.98        | 0.04941        |
| F16     | 8616      | 7.5721  | 100      | 100          | 0.04942        |
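
For context, these figures are the kind of numbers produced by llama.cpp's perplexity tool. A minimal sketch of how such a measurement is typically run (the evaluation text here is a placeholder; the card does not state which corpus was used):

```
# Measure perplexity of a quantized file against an evaluation text
# (wiki.test.raw is a placeholder corpus, not necessarily what was used here)
$ ./llama-perplexity -m Llama-3.1-Minitron-4B-Width-Base-Q4_K_M.gguf -f wiki.test.raw
```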

<hr>

# Llama-3.1-Minitron-4B-Width-Base

## Model Overview

Llama-3.1-Minitron-4B-Width-Base is a base text-to-text model that can be adapted for a variety of natural language generation tasks.
It is obtained by pruning Llama-3.1-8B; specifically, we prune the model embedding size, number of attention heads, and MLP intermediate dimension.
Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continued pre-training data corpus from Nemotron-4 15B for this purpose.

This model is ready for commercial use.

**Model Developer:** NVIDIA 

**Model Dates:** Llama-3.1-Minitron-4B-Width-Base was trained between July 29, 2024 and Aug 3, 2024.

## License 

This model is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Model Architecture

Llama-3.1-Minitron-4B-Width-Base uses a model embedding size of 3072, 32 attention heads, an MLP intermediate dimension of 9216, and 32 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
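
In Hugging Face Transformers terms, those dimensions map onto a `LlamaConfig` roughly like the sketch below. The key/value head count, per-head dimension, and RoPE base are not stated in this card and are assumptions typical of the Llama-3.1 family, not confirmed values:

```python
from transformers import LlamaConfig

# Sketch of the architecture described above. num_key_value_heads, head_dim,
# and rope_theta are assumptions, not values taken from this model card.
config = LlamaConfig(
    hidden_size=3072,         # model embedding size
    num_attention_heads=32,   # attention (query) heads
    num_key_value_heads=8,    # GQA key/value heads (assumed)
    intermediate_size=9216,   # MLP intermediate dimension
    num_hidden_layers=32,     # decoder layers
    head_dim=128,             # per-head dimension (assumed; see the PRs referenced under Usage)
    rope_theta=500000.0,      # RoPE base frequency (assumed Llama-3.1 default)
)
```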

**Architecture Type:** Transformer Decoder (Auto-Regressive Language Model)

**Network Architecture:** Llama-3.1

**Input Type(s):** Text 

**Input Format(s):** String 

**Input Parameters:** None

**Other Properties Related to Input:** Works well with inputs of 8k characters or less. 
  
**Output Type(s):** Text

**Output Format:** String

**Output Parameters:** 1D

**Other Properties Related to Output:** None


## Usage
Pull requests to support this model in Hugging Face Transformers are currently under review ([#32495](https://github.com/huggingface/transformers/pull/32495) and [#32502](https://github.com/huggingface/transformers/pull/32502)) and are expected to be merged soon. In the meantime, please follow the installation instructions below:

```
# Fetch PR 32502
$ git clone -b suhara/llama-kv-channels --single-branch https://github.com/suhara/transformers.git && cd transformers

# Fetch changes from PR 32495
$ git fetch https://github.com/suiyoubi/transformers.git aot/head_dim_rope && git cherry-pick FETCH_HEAD --strategy-option theirs

# Install transformers
$ pip install -e .
```
We can now run inference on this model:

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Load the tokenizer and model
model_path = "nvidia/Llama-3.1-Minitron-4B-Width-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = 'cuda'
dtype = torch.bfloat16
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Prepare the input text
prompt = 'Complete the paragraph: our solar system is'
inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)

# Generate the output
outputs = model.generate(inputs, max_length=20)

# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
```

## Software Integration
**Runtime Engine(s):**
* NeMo 24.05

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere
* NVIDIA Blackwell
* NVIDIA Hopper
* NVIDIA Lovelace


**Preferred/Supported Operating System(s):** <br>
* Linux

## Dataset & Training

**Data Collection Method by Dataset:** Automated

**Labeling Method by Dataset:** Not Applicable

**Properties:**
The training corpus for Llama-3.1-Minitron-4B-Width-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance. 

**Data Freshness:** The pretraining data has a cutoff of June 2023. 

## Evaluation Results

### Overview
_5-shot performance._ Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):

| Average |
| :---- |
| 60.5 | 

_Zero-shot performance._ Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:

| HellaSwag | Winogrande | GSM8K| ARC-Challenge | XLSum |
| :---- | :---- | :---- | :---- | :---- |
| 76.1 | 73.5 | 41.2 | 55.6 | 28.7 |
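
As a rough illustration, results of this kind can be reproduced with the LM Evaluation Harness; the exact task versions, prompt additions, and settings used by NVIDIA are not specified here, so the invocation below is an assumption rather than the original protocol:

```
# Hypothetical lm-evaluation-harness run; task selection and settings are assumptions
$ pip install lm-eval
$ lm_eval --model hf \
    --model_args pretrained=nvidia/Llama-3.1-Minitron-4B-Width-Base,dtype=bfloat16 \
    --tasks hellaswag,winogrande,gsm8k,arc_challenge \
    --num_fewshot 0 --batch_size 8
```

The 5-shot MMLU number above can be approximated the same way with `--tasks mmlu --num_fewshot 5`.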

_Code generation performance._ Evaluated using [MBPP](https://github.com/google-research/google-research/tree/master/mbpp):
 | Score |
 | :---- |
 | 32.0 | 

## Inference

**Engine:** TensorRT-LLM 

**Test Hardware:** NVIDIA A100 

**DType:** BFloat16


## Limitations

The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable text even if the prompt itself does not contain anything explicitly offensive. 

## Ethical Considerations 

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. 

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). 

## References
* [Compact Language Models via Pruning and Knowledge Distillation](https://arxiv.org/abs/2407.14679)