---
pipeline_tag: text-generation
tags:
- granite
- ibm
- lab
- labrador
- labradorite
license: apache-2.0
language:
- en
base_model: ibm/granite-7b-base
library_name: transformers
---
# Disclaimer and Requirements

This model is a clone of [**ibm-granite/granite-7b-instruct**](https://huggingface.co/ibm-granite/granite-7b-instruct) compressed using ZipNN. Losslessly compressed to 67% of its original size, it saves ~5GB in storage and potentially ~30TB in data transfer **monthly**.

### Requirement

To use this model, ZipNN must be installed:
```bash
pip install zipnn
```
### Use This Model
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
from zipnn import zipnn_hf

zipnn_hf()

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="royleibov/granite-7b-instruct-ZipNN-Compressed")
pipe(messages)
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
from zipnn import zipnn_hf

zipnn_hf()

tokenizer = AutoTokenizer.from_pretrained("royleibov/granite-7b-instruct-ZipNN-Compressed")
model = AutoModelForCausalLM.from_pretrained("royleibov/granite-7b-instruct-ZipNN-Compressed")
```
### ZipNN
ZipNN also allows you to seamlessly save local disk space in your cache after the model is downloaded.

To compress the cached model, simply run:
```bash
python zipnn_compress_path.py safetensors --model royleibov/granite-7b-instruct-ZipNN-Compressed --hf_cache
```

The model will be decompressed automatically and safely as long as `zipnn_hf()` is added at the top of the file like in the [example above](#use-this-model).

To decompress manually, simply run:
```bash
python zipnn_decompress_path.py --model royleibov/granite-7b-instruct-ZipNN-Compressed --hf_cache
```

# Model Card for Granite-7b-lab [Paper](https://arxiv.org/abs/2403.01081) 

### Overview

![Screenshot 2024-02-22 at 11.26.13 AM.png](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Screenshot_2024-02-22_at_11.26.13_AM.png)

### Performance

| Model | Alignment | Base | Teacher | MTBench (Avg) * | MMLU(5-shot) |
| --- | --- | --- | --- | --- | --- |
| [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | RLHF | Llama-2-13b | Human Annotators | 6.65  |54.58 | 
| [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | Progressive Training | Llama-2-13b | GPT-4 | 6.15  | 60.37 * |
| [WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20  | 54.83 |
| [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 | 58.89 |
| [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | SFT | Mistral-7B-v0.1 | - | 6.84 | 60.37 |
| [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | SFT/DPO | Mistral-7B-v0.1 | GPT-4 | 7.34 | 61.07 |
| [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | SFT | Mistral-7B-v0.1 | - |  7.6** | 60.78 | 
| [Merlinite-7b-lab](https://huggingface.co/instructlab/merlinite-7b-lab) | Large-scale Alignment for chatBots (LAB) | Mistral-7B-v0.1 | Mixtral-8x7B-Instruct | 7.66 |64.88 |  
| Granite-7b-lab | Large-scale Alignment for chatBots (LAB) | Granite-7b-base| Mixtral-8x7B-Instruct | 6.69 | 51.91 |

[*] Numbers for models other than Merlinite-7b-lab, Granite-7b-lab and [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) are taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)

[**] Numbers taken from [MistralAI Release Blog](https://mistral.ai/news/la-plateforme/)

### Method

LAB: **L**arge-scale **A**lignment for chat**B**ots is a novel synthetic data-based alignment tuning method for LLMs from IBM Research. Granite-7b-lab is a Granite-7b-base derivative model trained with the LAB methodology, using Mixtral-8x7b-Instruct as a teacher model.

LAB consists of three key components:

1. Taxonomy-driven data curation process
2. Large-scale synthetic data generator
3. Two-phased-training with replay buffers

![Untitled](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Untitled.png)

The LAB approach allows new knowledge and skills to be added incrementally to an already pre-trained model without suffering from catastrophic forgetting.

The taxonomy is a tree of seed examples that are used to prompt a teacher model to generate synthetic data. It allows the data curator or model designer to easily specify a diverse set of knowledge domains and skills to include in their LLM. At a high level, these can be categorized into three bins: knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.

![Untitled](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Untitled%201.png)
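As a rough illustration of this structure, the snippet below sketches one possible in-memory representation of a taxonomy leaf node; the field names and example content are assumptions made for illustration only, not the schema used by LAB or InstructLab.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a hypothetical representation of a taxonomy leaf
# node holding the local seed examples used to prompt the teacher model.
@dataclass
class TaxonomyLeaf:
    path: str              # e.g. "foundational_skills/reasoning/logical_sequence"
    bin: str               # "knowledge", "foundational_skills", or "compositional_skills"
    seed_examples: list = field(default_factory=list)  # [{"question": ..., "answer": ...}, ...]

leaf = TaxonomyLeaf(
    path="foundational_skills/reasoning/logical_sequence",
    bin="foundational_skills",
    seed_examples=[{"question": "What comes next: 2, 4, 8, 16, ...?", "answer": "32"}],
)
```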

During the synthetic data generation, **unlike previous approaches where seed examples are uniformly drawn from the entire pool (i.e. self-instruct), we use the taxonomy to drive the sampling process**: For each knowledge/skill, we only use the local examples within the leaf node as seeds to prompt the teacher model.
This lets the teacher model better exploit the task distribution defined by the local examples of each node, while the diversity of the taxonomy itself ensures that the overall generation covers a wide range of tasks, as illustrated below. In turn, this allows Mixtral 8x7B to be used as the teacher model for generation while performing very competitively against models such as ORCA-2, WizardLM, and Zephyr Beta that rely on synthetic data generated by much larger and more capable models like GPT-4.

![intuition.png](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_intuition.png)
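A minimal sketch of this taxonomy-driven sampling, assuming each leaf is a dict with `path` and `seed_examples` keys and that `teacher_generate` wraps Mixtral-8x7B-Instruct inference; the prompt wording and helper names are hypothetical, not the LAB generation pipeline itself.

```python
import random

def build_teacher_prompt(leaf, num_seeds=3):
    # Draw seeds only from the local leaf node (unlike self-instruct, which
    # samples uniformly from the whole seed pool).
    seeds = random.sample(leaf["seed_examples"], k=min(num_seeds, len(leaf["seed_examples"])))
    shots = "\n\n".join(f"Q: {s['question']}\nA: {s['answer']}" for s in seeds)
    return (
        f"Generate new question-answer pairs for the task '{leaf['path']}'.\n"
        f"Follow the style of these examples:\n\n{shots}\n\nQ:"
    )

def generate_for_taxonomy(leaves, teacher_generate):
    # One generation pass per leaf; diversity across tasks comes from the taxonomy itself.
    return {leaf["path"]: teacher_generate(build_teacher_prompt(leaf)) for leaf in leaves}
```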

For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy. 
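For knowledge leaves specifically, a document-grounded prompt might look like the sketch below; the wording is an illustrative assumption rather than the prompt used in the paper.

```python
def build_knowledge_prompt(document: str, num_pairs: int = 5) -> str:
    # Illustrative only: ask the teacher for Q&A pairs answerable solely from
    # the supplied document, rather than from seed examples alone.
    return (
        f"Read the following document and write {num_pairs} question-answer pairs "
        f"that can be answered using only the document.\n\n"
        f"Document:\n{document}\n\nQ1:"
    )
```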

Additionally, to ensure the data is high-quality and safe, we employ steps to check the questions and answers to ensure that they are grounded and safe. This is done using the same teacher model that generated the data. 

Our training consists of two major phases: knowledge tuning and skills tuning. 
Knowledge tuning proceeds in two steps: the first learns simple knowledge (short samples) and the second learns complicated knowledge (longer samples).
The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills tuning phase, which uses a replay buffer of data from the knowledge phase.
Importantly, we use a set of training hyper-parameters that differ markedly from standard small-scale supervised fine-tuning: a larger batch size and a carefully optimized learning rate and scheduler.

![Untitled](model-card/Model%20Card%20for%20Merlinite%207b%2028cc0b72cf574a4a828140d3539ede4a_Untitled%202.png)
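The schematic below sketches this two-phase schedule with replay; the dataset names, the 10% replay ratio, and the `train_one_phase` callback are illustrative assumptions, not the exact recipe (batch sizes and the learning-rate schedule are described in the paper).

```python
import random

def mix_with_replay(current, replay, replay_fraction=0.1):
    # Append a sampled slice of earlier-phase data to mitigate forgetting.
    k = min(int(len(current) * replay_fraction), len(replay))
    return list(current) + random.sample(list(replay), k)

def lab_training_schedule(short_knowledge, long_knowledge, skills, train_one_phase):
    # Phase 1a: simple knowledge (short samples)
    train_one_phase(short_knowledge)
    # Phase 1b: complicated knowledge (long samples), replaying phase 1a data
    train_one_phase(mix_with_replay(long_knowledge, replay=short_knowledge))
    # Phase 2: foundational + compositional skills, replaying knowledge-phase data
    train_one_phase(mix_with_replay(skills, replay=short_knowledge + long_knowledge))
```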

## Model description
- **Model Name**: Granite-7b-lab
- **Language(s):** Primarily English
- **License:** Apache 2.0
- **Base model:** [ibm/granite-7b-base](https://huggingface.co/ibm/granite-7b-base)
- **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

### Use a pipeline as a high-level helper
```python
from transformers import pipeline
from zipnn import zipnn_hf

zipnn_hf()

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="royleibov/granite-7b-instruct-ZipNN-Compressed")
pipe(messages)
```

### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from zipnn import zipnn_hf

zipnn_hf()

tokenizer = AutoTokenizer.from_pretrained("royleibov/granite-7b-instruct-ZipNN-Compressed")
model = AutoModelForCausalLM.from_pretrained("royleibov/granite-7b-instruct-ZipNN-Compressed")
```

## Prompt Template

```python
sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."

prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
stop_token = '<|endoftext|>'
```

We advise utilizing the system prompt employed during the model's training for optimal inference performance, as there could be performance variations based on the provided instructions. 
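For example, a generation call that applies the template above with the recommended system prompt might look like the following; the user question, decoding settings, and use of the compressed checkpoint from this repository are illustrative choices.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from zipnn import zipnn_hf

zipnn_hf()

model_id = "royleibov/granite-7b-instruct-ZipNN-Compressed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_input = "Explain lossless compression in one sentence."  # illustrative question

prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{user_input}\n<|assistant|>\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|endoftext|>"),  # the stop token above
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```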



**Bias, Risks, and Limitations**

Granite-7b-lab is a base model and has not undergone any safety alignment, there it may produce problematic outputs. In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.