Update README.md
README.md (CHANGED)
@@ -3,36 +3,4 @@ library_name: transformers
tags: []
---

# Jamba-Small

This is a pruned version of AI21 Labs' Jamba-v0.1 model, roughly ~25% of the size of the original.

## Model Details

Whereas Jamba-v0.1 contains 4 Jamba blocks, Jamba-Small contains only 1 Jamba block.
Jamba-Small's single Jamba block follows the same structure seen in Jamba-v0.1, with a 1:7 ratio of attention-to-Mamba layers and MoE applied every 2 layers.
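
As a rough sketch of what that means in code (an illustration, not something taken from this repo), the Jamba-Small shape can be derived from the published Jamba-v0.1 configuration by keeping a single 8-layer block. It assumes a `transformers` release with built-in Jamba support and that the periodic layer-pattern settings are simply inherited from the upstream config:

```python
# Sketch: derive a Jamba-Small-shaped config from the Jamba-v0.1 config.
# Assumes a transformers release with built-in Jamba support.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ai21labs/Jamba-v0.1")

# Jamba-v0.1: 4 Jamba blocks x 8 layers = 32 decoder layers.
# Jamba-Small: 1 Jamba block = 8 decoder layers (~25% of the layers).
config.num_hidden_layers = 8

# The 1:7 attention-to-Mamba ratio and the MoE-every-2-layers pattern are
# periodic settings in the Jamba config (e.g. attn_layer_period /
# expert_layer_period), so the single remaining block keeps the same
# internal structure as a block in Jamba-v0.1.
print(config)
```

Only the depth changes in this sketch; hidden size, number of experts, and the Mamba state parameters are all inherited from Jamba-v0.1.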

Jamba-Small's weights are initialized from selected layers of the original Jamba-v0.1 model. For v1, the layer weights are mapped as follows (left is the Jamba-Small layer index, right is the Jamba-v0.1 layer index):
```
0: 0
1: 1
2: 2
3: 3
4: 4
5: 5
6: 30
7: 31
```
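
Read left to right, Jamba-Small's layers 0-5 are taken from the first six layers of Jamba-v0.1 and layers 6-7 from its final two layers (30-31). The snippet below is a hedged sketch of how such a remapping could be applied, not the exact script used to build this checkpoint; it assumes PyTorch, a `transformers` version with Jamba support, enough memory to load Jamba-v0.1, and the usual Hugging Face layout in which the decoder layers live in `model.model.layers`. The output directory name is a placeholder.

```python
# Illustration of the layer remapping above (not the original build script).
import torch
from transformers import AutoModelForCausalLM

# Left: Jamba-Small layer index, right: Jamba-v0.1 layer index.
LAYER_MAP = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 30, 7: 31}

source = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16
)

# Keep only the selected decoder layers, in Jamba-Small order.
old_layers = source.model.layers  # assumes the standard model.model.layers layout
source.model.layers = torch.nn.ModuleList(
    [old_layers[src] for src in LAYER_MAP.values()]
)
source.config.num_hidden_layers = len(LAYER_MAP)

# Note: per-layer bookkeeping (e.g. any stored layer_idx used for caching)
# may also need to be updated after reordering.
source.save_pretrained("jamba-small-v1")  # placeholder output path
```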

Note that no additional fine-tuning has been performed on this model. As such, its performance is exceptionally poor, and it should not be used in production without additional training.

### Model Description

- **Developed by:** Nathan Brown (OxxoCodes)
- **Compute provided by:** Clemson Palmetto Cluster
- **Model type:** Joint Attention and Mamba (Jamba)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Original model:** [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)
- **Jamba paper:** [https://arxiv.org/pdf/2403.19887.pdf](https://arxiv.org/pdf/2403.19887.pdf)

Added in the new version of README.md:

# Jamba-13B